Microsoft Fabric Updates Blog

Simplifying Data Ingestion with Copy job – Replicate data from Dataverse through Fabric to multiple destinations

Copy job is the recommended approach in Microsoft Fabric Data Factory for moving data from any source to any destination in a simple and efficient way—whether you’re transferring data across clouds, from on-premises systems, or between services. With native support for multiple delivery patterns, including bulk copy, incremental copy, and change data capture (CDC) replication, Copy job provides the flexibility to handle a wide range of data movement scenarios, all through an intuitive, easy-to-use experience.

Copy job already supports a range of CDC-enabled sources and destinations, allowing changed data—including inserts, updates, and deletions—to be automatically captured and replicated to supported targets. You can find more details in Change data capture (CDC) in Copy Job – Microsoft Fabric | Microsoft Learn.

For source systems that Copy job does not yet natively support for CDC, such as Dataverse, you can still achieve CDC replication by using Copy job together with Fabric Link. Copy job becomes essential when you need to move data across regions or tenants, or when you want to replicate Dataverse data to multiple destinations beyond Fabric.

This blog provides step-by-step guidance on how to use Copy job to replicate data—including inserts, updates, and deletions—from Dataverse through Fabric to multiple destinations.

How it works

Prerequisites 

Your Dynamics Finance & Operations (F&O) ERP environment is always linked to an associated Dataverse environment. You can view this mapping in the Power Platform Admin Center, where you can check which F&O environment is connected to which Dataverse environment.

1. Go to the Power Apps maker portal (https://make.powerapps.com), and select “Link to Microsoft Fabric” to replicate data from your Dataverse environment to a Fabric Lakehouse.

2. Enter the required connection information to connect Dataverse to a Fabric Lakehouse.

You can either create a new workspace or use an existing one in Microsoft Fabric, and a Fabric Lakehouse will be created in the selected workspace.

3. After you click “Review and Create,” a Fabric Lakehouse will be provisioned.

4. Select the tables from Dataverse and F&O that you want to synchronize to Fabric.

5. Access your newly created Fabric Lakehouse by clicking “View in Microsoft Fabric.”
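If you want to quickly confirm what Fabric Link has synchronized, you can run a short check from a Fabric notebook attached to the Lakehouse. A minimal sketch, assuming the notebook’s built-in Spark session and an illustrative table name:

```python
# Run inside a Microsoft Fabric notebook attached to the Lakehouse
# created by Fabric Link; `spark` is the session the notebook provides.

# List the tables Fabric Link has synchronized so far.
for table in spark.catalog.listTables():
    print(table.name)

# Spot-check the row count of one synced table ("account" is illustrative).
df = spark.read.table("account")
print(f"account rows: {df.count()}")
```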

6. Now, go to Microsoft Fabric (https://app.powerbi.com/) to create a Copy job to replicate the same data to multiple destinations.

7. In the Copy job creation wizard, select the Fabric Lakehouse you created via Fabric Link (in step 3) as the data source, and then choose the tables from the Lakehouse that you want to copy.

8. Select the destination where you want to copy the data to. To replicate all changes—including inserts, updates, and deletions—choose a CDC-supported destination.

You can find the full list of supported connectors in Change data capture (CDC) in Copy Job – Microsoft Fabric | Microsoft Learn. For example, you can replicate data into an Azure SQL Database.

9. Select incremental copy.

10. Choose an update method to merge data into your destination.
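Conceptually, a merge-style update applies each captured change to the destination by key: inserts add rows, updates overwrite matching rows, and deletions remove them. A minimal Python sketch of that upsert-plus-delete pattern (purely illustrative, not Copy job’s actual implementation):

```python
# Illustrative only: merge-style application of CDC changes by key.
destination = {1: {"name": "Contoso"}, 2: {"name": "Fabrikam"}}

changes = [
    {"op": "insert", "key": 3, "row": {"name": "Adatum"}},
    {"op": "update", "key": 1, "row": {"name": "Contoso Ltd"}},
    {"op": "delete", "key": 2, "row": None},
]

for change in changes:
    if change["op"] == "delete":
        destination.pop(change["key"], None)        # deletions remove rows
    else:
        destination[change["key"]] = change["row"]  # inserts/updates upsert by key

print(destination)  # {1: {'name': 'Contoso Ltd'}, 3: {'name': 'Adatum'}}
```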

11. After completing the creation wizard, a new Copy job will be created. When you run it, the first run will perform an initial full copy, and subsequent runs will replicate changes, including inserts, updates, and deletions.
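To sanity-check a run, you can query the destination directly. Here is a rough sketch using pyodbc against the Azure SQL Database example above; the server, database, authentication method, and table name are placeholders to replace with your own:

```python
import pyodbc

# Placeholder connection details; substitute your own server, database,
# and preferred authentication method.
conn_str = (
    "DRIVER={ODBC Driver 18 for SQL Server};"
    "SERVER=<your-server>.database.windows.net;"
    "DATABASE=<your-database>;"
    "Authentication=ActiveDirectoryInteractive;"
)

with pyodbc.connect(conn_str) as conn:
    cursor = conn.cursor()
    # Table name is illustrative; use a table you selected in the Copy job.
    cursor.execute("SELECT COUNT(*) FROM dbo.account")
    print("rows in destination:", cursor.fetchone()[0])
```

After the first (full) run, the count should match the source; after incremental runs, it should reflect the applied inserts, updates, and deletions.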

Summary

You can easily use Copy job in Fabric Data Factory, together with Fabric Link, to replicate data—including inserts, updates, and deletions—from Dataverse to multiple destinations. This also enables you to replicate Dataverse data across different tenants or regions.

Additional Resources

To learn more, explore Microsoft Fabric Copy job documentation.

Submit your feedback on Fabric Ideas and join the conversation in the Fabric Community.

If you have a question or want to share your feedback, please leave us a comment below!
