Simplifying Data Ingestion with Copy job – Incremental Copy GA, Lakehouse Upserts, and New Connectors
Copy job has been a go-to tool for simplified data ingestion in Microsoft Fabric, offering a seamless data movement experience from any source to any destination. Whether you need batch or incremental copying, it provides the flexibility to meet diverse data needs while maintaining a simple and intuitive workflow.
We continuously refine Copy job based on customer feedback, enhancing both functionality and user experience. In this update, we’re introducing several key improvements designed to streamline your workflow and boost efficiency.
Incremental Copy Now Generally Available
Incremental copy is one of the most popular features in Copy job, significantly improving efficiency by transferring only new or changed data—saving time and resources with minimal manual effort. When you select incremental copy, the first run performs a full data copy, and subsequent runs move only the changes.
- For databases, this means only new or updated rows are transferred. If change data capture (CDC) is enabled, inserted, updated, and deleted rows are all included.
- For storage sources, only files with a newer LastModifiedTime are copied.
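The storage-side behavior amounts to a high-water-mark filter on LastModifiedTime. Here is a minimal sketch of that idea (an illustration only, not Copy job's actual implementation; the file paths and dates are hypothetical):

```python
from datetime import datetime, timezone

def select_files_for_incremental_copy(files, last_watermark):
    """Return only files modified after the previous run's watermark.

    `files` is a list of (path, last_modified) tuples mimicking a storage
    listing; `last_watermark` is the LastModifiedTime high-water mark
    recorded by the previous run (None means first run: full copy).
    """
    if last_watermark is None:
        return list(files)  # first run performs a full copy
    return [(p, ts) for p, ts in files if ts > last_watermark]

listing = [
    ("sales/2024-01.csv", datetime(2024, 1, 31, tzinfo=timezone.utc)),
    ("sales/2024-02.csv", datetime(2024, 2, 29, tzinfo=timezone.utc)),
]
watermark = datetime(2024, 2, 1, tzinfo=timezone.utc)
to_copy = select_files_for_incremental_copy(listing, watermark)
# only the file newer than the watermark is selected
```

After each run, the watermark is advanced to the latest LastModifiedTime seen, so the next run picks up only files changed since then.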
With incremental copy in Copy job now generally available, you can use it with confidence in production environments. Alongside this release, a new meter, Data Movement – Incremental Copy, takes effect at a consumption rate of 3 CUs. Incremental runs move only delta data using a more efficient approach, which greatly reduces processing time. Full/batch copy continues to emit usage on the existing Data Movement meter at a rate of 1.5 CUs. For more details, see Pricing for Copy job – Microsoft Fabric | Microsoft Learn.
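To see how the two meters can compare in practice, here is a back-of-the-envelope calculation. The rates come from the announcement above, but the run durations are hypothetical and the billing model is simplified to rate times duration:

```python
# Consumption rates from the announcement (CUs while the copy runs):
FULL_COPY_RATE = 1.5         # Data Movement meter
INCREMENTAL_COPY_RATE = 3.0  # Data Movement - Incremental Copy meter

def cu_hours(duration_hours, rate):
    """Capacity-unit hours consumed, simplified as duration * rate."""
    return duration_hours * rate

# Hypothetical example: a full copy of a table takes 2 hours, while an
# incremental run moves only the delta and finishes in 15 minutes.
full = cu_hours(2.0, FULL_COPY_RATE)                  # 3.0 CU-hours
incremental = cu_hours(0.25, INCREMENTAL_COPY_RATE)   # 0.75 CU-hours
```

Even at double the per-hour rate, the incremental run can cost far less overall because it finishes much sooner; actual savings depend on your data change rate.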
Upsert Data to Fabric Lakehouse Tables and Other Data Stores
You can now choose to merge data directly into more destination stores, including Fabric Lakehouse tables, Salesforce, Salesforce Service Cloud, Dataverse, Dynamics 365, Dynamics CRM, and Azure Cosmos DB for NoSQL. This gives you greater flexibility to tailor data ingestion to your specific needs.
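Conceptually, an upsert merges incoming rows into the destination on a key column: rows whose key already exists are updated, and the rest are inserted. A minimal sketch of that semantics (the destination is modeled as a plain list of dicts; the column and key names are hypothetical):

```python
def upsert(destination, incoming, key="id"):
    """Merge incoming rows into destination keyed on `key`:
    matching rows are updated in place, new rows are appended."""
    index = {row[key]: i for i, row in enumerate(destination)}
    for row in incoming:
        if row[key] in index:
            destination[index[row[key]]] = row   # update existing row
        else:
            index[row[key]] = len(destination)   # track the new key
            destination.append(row)              # insert new row
    return destination

table = [{"id": 1, "name": "old"}]
changes = [{"id": 1, "name": "new"}, {"id": 2, "name": "added"}]
upsert(table, changes)
# table now holds two rows: id 1 updated, id 2 inserted
```

In Copy job you get this behavior by selecting the merge/upsert write option on a supported destination and choosing the key column(s); the actual merge is executed by the destination store, not row-by-row in Python as above.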

More connectors, more possibilities
More source and destination connections are now available, giving you greater flexibility for data ingestion with Copy job. We’re not stopping here—even more connectors are coming soon!
- SFTP
- FTP
- IBM Db2 database
- Oracle Cloud Storage
- Dataverse
- HTTP
- Dynamics 365
- Dynamics CRM
- Azure Cosmos DB for NoSQL
- Azure Files
- Azure Tables
- ServiceNow
- Vertica
- MariaDB

You can see more details at Supported connectors in Copy job.
Copying Data from On-Premises Sources into Snowflake and Fabric Data Warehouse Is Now Supported
Previously, when you tried to copy data from on-premises data stores into data warehouses such as Snowflake or Fabric Data Warehouse, the Copy job UI indicated that the scenario was not yet supported. That limitation has now been removed.
The improvement comes from native support for staged copy. Behind the scenes, data is first copied from the on-premises source (via the on-premises data gateway) to staging storage in Fabric OneLake, where it is automatically shaped to meet the format requirements of the COPY statement used by Snowflake and Fabric Data Warehouse. The COPY statement is then invoked to load the data from staging into the target warehouse, delivering a seamless, end-to-end data movement experience.
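The two-step flow described above can be sketched as follows. This is purely illustrative: the class, function names, and simplified COPY syntax are assumptions for the sketch, not the exact statements Copy job issues against Snowflake or Fabric Data Warehouse:

```python
class FakeStaging:
    """Stand-in for OneLake staging storage (illustration only)."""
    def __init__(self):
        self.files = {}

    def write_as_parquet(self, rows):
        # Land the rows in staging, shaped for the warehouse's loader.
        path = f"onelake://staging/batch_{len(self.files)}.parquet"
        self.files[path] = rows
        return path

def staged_copy(source_rows, staging, run_statement):
    # Step 1: copy from the on-premises source (via the gateway)
    # into OneLake staging storage.
    path = staging.write_as_parquet(source_rows)
    # Step 2: have the warehouse bulk-load directly from staging
    # (simplified COPY syntax; real warehouse dialects differ).
    run_statement(f"COPY INTO target_table FROM '{path}'")
    return path

statements = []
staging = FakeStaging()
staged_copy([{"id": 1}], staging, statements.append)
# one COPY statement was issued against the warehouse
```

The key point is that the warehouse pulls data from staging via its own bulk-load path rather than receiving it row by row, which is what makes the end-to-end flow efficient.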
With this enhancement, these previously unsupported scenarios work out of the box, with no manual staging configuration required.
Additional Resources
To learn more, explore our Microsoft Fabric Copy Job Documentation.
Submit your feedback on Fabric Ideas and join the conversation on the Fabric Community.
To get into the technical details, head over to the Fabric documentation.
Have any questions or feedback? Leave a comment below!