Data Factory Announcements at Ignite 2024 Recap
A couple of weeks ago, the Ignite Conference gave us an exciting week for Fabric, filled with product announcements and sneak previews of upcoming features.
Thanks to all of you who participated in the conference, either in person or as part of the many virtual conversations across blogs, Community forums, social media, and other channels. Thank you also for all your product feedback and Ideas forum suggestions, which help us define the next wave of product enhancements.
We wanted to make sure you didn’t miss any of the Data Factory in Fabric announcements, so we’ve put together this recap of all the new features:
- Dataflow Gen2 CI/CD & Public APIs support
- Copilot for Data Pipelines
- Import/Export Data Pipelines
- Semantic Model Refresh Activity enhancements
- Fabric SQL Database connector as a source & sink in Dataflow Gen2 and Data Pipeline
- New ServiceNow & MariaDB connectors in Data Pipelines
- Iceberg format support in ADLS Gen2 Connector for Data Pipelines
- Open Mirroring
- Mirroring for Azure SQL DB (GA) and Azure SQL MI (Preview)
- Copy Job CI/CD, upsert and overwrite support
You can continue reading below for more information about each of these capabilities.
Dataflow Gen2 CI/CD & Public APIs support
We are delighted to share that CI/CD and Git integration support for Dataflow Gen2 will be available in preview by January 2025!
With this new set of features, you will be able to seamlessly integrate your Dataflow Gen2 artifacts with your existing CI/CD pipelines and your workspace’s version control in Fabric. This integration enables better collaboration, versioning, and automation of your deployment process across dev, test, and production environments.
Learn more about these new capabilities: Dataflow Gen2 CI/CD and GIT source control integration are now in preview! | Microsoft Fabric Blog | Microsoft Fabric
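As a sketch of what the public API support opens up, the snippet below builds a request for creating a Dataflow Gen2 item through the generic Fabric items API. The endpoint shape and the `Dataflow` item type are assumptions based on the public Fabric REST API pattern; the workspace GUID is a placeholder, and the request is only constructed here, not sent.

```python
import json

FABRIC_API = "https://api.fabric.microsoft.com/v1"

def build_create_dataflow_request(workspace_id: str, display_name: str):
    """Build (url, payload) for creating a Dataflow Gen2 item via the
    generic Fabric items API. The endpoint and the 'Dataflow' item type
    are assumptions based on the documented Fabric REST API pattern."""
    url = f"{FABRIC_API}/workspaces/{workspace_id}/items"
    payload = {
        "displayName": display_name,
        "type": "Dataflow",  # assumed item type for Dataflow Gen2
    }
    return url, payload

# Placeholder workspace GUID; in practice this comes from your workspace URL or the API.
url, payload = build_create_dataflow_request(
    "00000000-0000-0000-0000-000000000000", "SalesIngest"
)
print(url)
print(json.dumps(payload))
```

In a real automation scenario you would send this payload with an authenticated HTTP client (for example, using a Microsoft Entra token), which is also how the new CI/CD flows can be scripted end to end.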
Copilot for Data Pipelines
The new Data pipeline capabilities in Copilot for Data Factory are now available in preview. These features function as an AI expert to help users build, troubleshoot, and maintain data pipelines.
What can new capabilities in Copilot for Data Factory do for you?
- Understand your business intent and effortlessly translate it into data pipeline activities to build your data integration solutions.
- Provide summaries with clear explanations, helping you understand complex data pipelines built by other team members.
- Troubleshoot data pipeline error messages with clear, actionable summaries and recommendations.
Learn more about these new Data Pipeline capabilities in Copilot for Data Factory: Efficiently build and maintain your Data pipelines with Copilot for Data Factory: new capabilities and experiences | Microsoft Fabric Blog | Microsoft Fabric
Import/Export Data Pipelines
As a Data Factory pipeline developer, you will often want to export your pipeline definition to share it with other developers or to reuse it in other workspaces.
We’ve now added the capability to export and import your Data Factory pipelines from your Fabric workspace. This powerful feature will enable even more collaborative capabilities and will be invaluable when you troubleshoot your pipelines with our support teams.
Just click the new export button on the pipeline canvas designer to download the JSON definition of your pipeline, which you can then share with other users inside or outside your organization. Your collaborators can take that JSON pipeline definition and import it into their own workspace using the import button.

Export and Import Fabric Pipelines from the Data Factory Pipeline Designer
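To give a feel for what the exported artifact looks like, here is a deliberately simplified, hypothetical pipeline definition and the save/load round trip that export and import perform. The real exported schema contains more fields, and the activity and connector type names below are illustrative only.

```python
import json

# Hypothetical, pared-down pipeline definition; a real export from the
# designer contains additional metadata and fully specified activities.
pipeline_definition = {
    "name": "IngestSalesData",
    "properties": {
        "activities": [
            {
                "name": "CopySalesToLakehouse",
                "type": "Copy",  # illustrative activity type
                "typeProperties": {
                    "source": {"type": "LakehouseTableSource"},  # assumed name
                    "sink": {"type": "LakehouseTableSink"},      # assumed name
                },
            }
        ]
    },
}

# Export: save the definition to a JSON file you can share.
with open("IngestSalesData.json", "w") as f:
    json.dump(pipeline_definition, f, indent=2)

# Import: a collaborator loads the same file back into their workspace.
with open("IngestSalesData.json") as f:
    imported = json.load(f)

assert imported == pipeline_definition  # the round trip is lossless
```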
Semantic Model Refresh Activity enhancements
One of the most popular features we’ve built in Fabric Data Factory, the Semantic Model Refresh activity, came from customer patterns we observed in ADF and from our community. After first releasing this pipeline activity, we heard your requests to improve ELT pipeline processing with an option to refresh specific tables and partitions in your semantic models. We are pleased to announce that this option is now available, making the pipeline activity the most effective way to refresh your Fabric semantic models!
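Conceptually, targeting specific tables and partitions looks like the settings sketched below. The field names follow the Power BI enhanced refresh API, which this activity resembles; they are assumptions here, so check the activity UI for the exact property names, and the table/partition names are placeholders.

```python
# Hypothetical refresh settings illustrating table/partition targeting.
# Field names mirror the Power BI enhanced refresh API ("type", "objects",
# "table", "partition"); treat them as assumptions for this activity.
refresh_settings = {
    "type": "full",
    "commitMode": "transactional",
    "objects": [
        {"table": "Sales"},                               # refresh the whole table
        {"table": "Orders", "partition": "Orders-2024"},  # refresh one partition only
    ],
}
print(refresh_settings)
```

Scoping the refresh to just the tables or partitions your pipeline loaded keeps refresh times proportional to the data you actually changed.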
Fabric SQL Database connector as a source & sink in Dataflow Gen2 and Data Pipeline
Both Data Pipeline and Dataflow Gen2 now natively support the Fabric SQL Database connector as both source and sink.
Now, you can directly connect to your Fabric SQL databases, allowing for smooth data movement and transformation across platforms without additional setup. Designed to streamline workflows, this connector lets you query data in real-time and leverage Fabric’s robust security, making it easier than ever to access, transform, and use your data.
New ServiceNow & MariaDB connectors in Data Pipelines
ServiceNow
With the new ServiceNow connector, Data Factory users can now extract and integrate data from ServiceNow seamlessly, bringing valuable service-related data into their analytics ecosystems. This connector makes it easy to create detailed reports, track performance metrics, and monitor workflow statuses, helping teams make data-informed decisions to improve operations and customer satisfaction.
Key Features
- Comprehensive Data Access: Allows users to retrieve various types of data from your ServiceNow instance.
- Native Query Builder: Supports a query builder experience that aligns with the native condition builder in ServiceNow.
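Under the hood, ServiceNow conditions of this kind are expressed as encoded queries against the Table API, where conditions are joined with `^` (AND). The sketch below builds such a URL by hand; the instance name and table are placeholders, and the connector's query builder assembles equivalent conditions for you.

```python
from urllib.parse import urlencode

def build_servicenow_query(instance: str, table: str, conditions: list) -> str:
    """Build a ServiceNow Table API URL with an encoded query.
    Conditions are joined with '^' (AND) per ServiceNow's
    encoded-query syntax."""
    query = "^".join(conditions)
    params = urlencode({"sysparm_query": query, "sysparm_limit": 100})
    return f"https://{instance}.service-now.com/api/now/table/{table}?{params}"

# Placeholder instance; e.g. all active priority-1 incidents.
url = build_servicenow_query("dev12345", "incident", ["active=true", "priority=1"])
print(url)
```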
MariaDB
For teams leveraging open-source databases, the MariaDB connector is a welcome addition. MariaDB has long been celebrated for its reliability and performance, especially within organizations seeking open-source solutions for their database needs. With the new connector, Data Factory users can integrate, transform, and analyze data from MariaDB with ease, enhancing interoperability with Fabric’s robust data orchestration capabilities. Whether you’re aggregating data for reporting, migrating legacy systems, or synchronizing your database environments, the MariaDB connector bridges the gap effortlessly.
Key Features
- Broad Compatibility: Connects seamlessly with MariaDB environments, supporting use cases such as ETL, data migration, and integration into broader analytics platforms.
- Flexible Querying: Offers direct querying capabilities to access, filter, and move data as needed within your data pipelines.
Iceberg format support in ADLS Gen2 Connector for Data Pipelines
We’ve made a significant enhancement in Fabric Data Factory: Data pipelines can now write data in Iceberg format via the Azure Data Lake Storage (ADLS) Gen2 connector! This addition provides a powerful new option for users who need to manage and optimize large datasets with a high level of flexibility, reliability, and performance. Iceberg format support brings new efficiencies in how data is handled, transformed, and stored, enabling better performance and future scalability.
Open Mirroring
Mirroring in Fabric provides a modern way of accessing and ingesting your existing data estate continuously and seamlessly from any database or data warehouse into OneLake in Microsoft Fabric.
Open Mirroring in Fabric is designed to be extensible, customizable, and open. It is a powerful feature that extends Mirroring in Fabric based on the open Delta Lake table format. This capability enables any application or data ISV to write change data directly into a Mirrored Database in Microsoft Fabric using the open mirroring public APIs and approach.
Once the data lands in OneLake in Fabric, Open Mirroring simplifies the handling of complex data changes, ensuring that all mirrored data is continuously up-to-date and ready for analysis.
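As a rough sketch of what "writing change data" means in practice: an open mirroring landing zone holds a folder per table containing a small metadata file naming the key columns, plus sequentially numbered change files whose `__rowMarker__` column encodes the change type (e.g. 0 = insert, 1 = update, 2 = delete). The exact file formats, naming, and marker values should be verified against the current open mirroring documentation; the code below is only an illustration of the shape, using placeholder table and column names.

```python
import csv
import json
import pathlib

# Illustrative landing-zone layout for one mirrored table. Folder naming,
# the _metadata.json contents, and the __rowMarker__ values are assumptions
# to be checked against the open mirroring spec.
table_dir = pathlib.Path("LandingZone/dbo.Customers")
table_dir.mkdir(parents=True, exist_ok=True)

# Metadata file telling the mirroring engine which columns identify a row.
(table_dir / "_metadata.json").write_text(
    json.dumps({"keyColumns": ["CustomerId"]})
)

# A numbered change file: two inserts followed by an update keyed on CustomerId.
with open(table_dir / "00000000000000000001.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["CustomerId", "Name", "__rowMarker__"])
    writer.writerow([1, "Contoso", 0])       # insert
    writer.writerow([2, "Fabrikam", 0])      # insert
    writer.writerow([1, "Contoso Ltd", 1])   # update existing row
```

Once files like these land in the table's folder, the mirroring engine applies the changes and keeps the Delta table in OneLake continuously up to date.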
We are excited to see many of our industry-leading partners streamlining the delivery of mirroring solutions in Fabric by announcing integrations of their data solutions with Open Mirroring. Our Open Mirroring partner ecosystem continues to grow, with publicly available solutions from Striim, Oracle GoldenGate, and MongoDB, and DataStax’s solution coming soon.
Learn more: Introducing Open Mirroring in Microsoft Fabric | Microsoft Fabric Blog | Microsoft Fabric
Mirroring for Azure SQL DB (GA) and Azure SQL MI (Preview)
In addition to the exciting announcement that Open Mirroring in Microsoft Fabric is now publicly available, we are also proud to announce the general availability of Mirroring for Azure SQL DB and the preview of Mirroring for Azure SQL MI.

Available database mirroring in Microsoft Fabric and future sources that are coming soon.
Together with Mirroring for Snowflake, Azure SQL DB, Azure Cosmos DB, Azure SQL MI, and many more sources to come, you can leverage the same mirroring technology and simple setup to automatically reflect your data estate into OneLake in Microsoft Fabric.
Copy Job CI/CD, upsert and overwrite support
Copy Job simplifies data ingestion, providing a seamless experience from any source to any destination. Whether you need batch or incremental copying, Copy Job provides the flexibility to meet your data needs while keeping things simple and intuitive.
Since the Public Preview launch at FabCon Europe in late September, we’ve been rapidly enhancing Copy Job with powerful new features. Here’s our latest update:
- Copy Job now supports CI/CD capabilities in Fabric, including Git integration for source control and ALM Deployment Pipelines. Check out the details in CI/CD for copy job in Data Factory – Microsoft Fabric | Microsoft Learn.
- Copy Job now also offers expanded writing options: Upsert functionality for SQL DB and SQL Server, and an Overwrite option for Fabric Lakehouse—bringing added flexibility and control to data movement. Check out the details in What is Copy job (preview) in Data Factory.
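To make the difference between the two new write options concrete, here is a toy illustration of the semantics: upsert merges incoming rows into the destination by key, while overwrite replaces the destination entirely. Copy Job implements these natively against SQL DB, SQL Server, and Fabric Lakehouse; the dictionaries below just model rows keyed by an ID.

```python
def upsert(destination: dict, incoming: dict) -> dict:
    """Merge incoming rows into the destination by key:
    matching keys are updated, new keys are inserted."""
    merged = dict(destination)
    merged.update(incoming)
    return merged

def overwrite(destination: dict, incoming: dict) -> dict:
    """Replace the destination's contents wholesale with the incoming rows."""
    return dict(incoming)

dest = {1: "Contoso", 2: "Fabrikam"}
new = {2: "Fabrikam Inc", 3: "Adventure Works"}

print(upsert(dest, new))     # row 1 kept, row 2 updated, row 3 inserted
print(overwrite(dest, new))  # only the incoming rows 2 and 3 remain
```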
Thank you for your feedback, and keep it coming!
We wanted to thank you for your support, usage, excitement, and feedback around Data Factory in Fabric. We’re very excited to continue learning from you regarding your Data Integration needs and how Data Factory in Fabric can be enhanced to empower you to achieve more with data.
Please continue to share your feedback and feature ideas with us via our official Community channels, and stay tuned to our public roadmap page for updates on what will come next: