Fabric Data Factory: What’s New and Latest Roadmap
Fabric Data Factory enables every organization to tackle the complexities of ingesting data, applying data transformations, and orchestrating the data-related activities needed to support its data integration needs and deliver modern data management architectures.

Thank you to all customers and partners who shared their Ideas and suggestions as we work together on shaping the future of data integration in the era of AI. Among the many Ideas that have been submitted, we have worked on a great number of them and delivered many product improvements and features based on your input. We could not have done it without you!
Data Integration for all your enterprise needs
One of the product feedback items you have shared is the ability to use secrets stored in Azure Key Vault (AKV) when connecting to different data sources. We are excited to share the private preview of AKV support in connections, which lets you reference a secret stored in AKV when connecting to a data source.

As developers build metadata-driven data integration solutions, support for parameters is essential. Data Factory emphasizes the power of hyper-parameterized workflows to strengthen your data engineering processes. We are announcing support for parameters in Dataflows and in the Dataflow activity inside pipelines.
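Conceptually, a parameterized Dataflow activity lets a pipeline pass values into the dataflow at run time, so one definition can drive many tables. The sketch below is a hypothetical illustration in Python; the dictionary shape and field names are assumptions for illustration, not the actual Fabric pipeline JSON schema.

```python
# Hypothetical sketch of a metadata-driven pattern: build one Dataflow
# activity payload per source table. Field names are illustrative
# assumptions, not the real Fabric pipeline schema.

def make_dataflow_activity(name: str, dataflow_id: str, parameters: dict) -> dict:
    """Assemble a single (hypothetical) Dataflow activity definition."""
    return {
        "name": name,
        "type": "Dataflow",
        "typeProperties": {
            "dataflowId": dataflow_id,
            "parameters": parameters,  # values resolved per pipeline run
        },
    }

# Drive many activities from one metadata list instead of hand-authoring each.
tables = ["Customers", "Orders", "Invoices"]
activities = [
    make_dataflow_activity(
        name=f"Refresh_{t}",
        dataflow_id="00000000-0000-0000-0000-000000000000",  # placeholder
        parameters={"SourceTable": t, "LoadDate": "2025-01-01"},
    )
    for t in tables
]

print(len(activities))  # 3
```

The point of the pattern is that adding a new table becomes a one-line metadata change rather than a new hand-authored activity.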

As we continue to innovate in Data Factory for modern cloud-first analytics, we maintain our focus on enabling enterprise customers to organically incorporate critical IT requirements into their data integration projects in Fabric. DataOps is a critical function for our customers and we are super-excited to make these announcements:
- General availability of CI/CD capabilities in Data Factory, including new variable libraries that make it easy to change values between workspaces and environments in pipelines.
- To help secure your data access, we also introduced a sneak peek into Azure Key Vault (AKV) support for storing connection credentials.
- Service principal (SPN) authentication is now available for pipeline CRUD APIs, so your applications can call the REST API securely without relying on a user token.
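As a hedged sketch of what SPN authentication looks like from an application: it typically follows the OAuth2 client-credentials flow, exchanging a client ID and secret for a token at the Microsoft Entra ID token endpoint, then sending that token as a Bearer header on API calls. The code below only builds the two requests without sending anything; the scope string and items API URL are assumptions for illustration.

```python
# Sketch of the OAuth2 client-credentials flow behind service principal (SPN)
# auth. We only *build* the requests here; nothing is sent over the network.
# The scope and API URL below are illustrative assumptions.
import urllib.parse
import urllib.request

TENANT_ID = "11111111-1111-1111-1111-111111111111"  # placeholder
CLIENT_ID = "22222222-2222-2222-2222-222222222222"  # placeholder

# 1) Token request: POST form-encoded credentials to the Entra ID token endpoint.
token_url = f"https://login.microsoftonline.com/{TENANT_ID}/oauth2/v2.0/token"
token_body = urllib.parse.urlencode({
    "grant_type": "client_credentials",
    "client_id": CLIENT_ID,
    "client_secret": "<secret-from-a-safe-place>",  # never hard-code in real apps
    "scope": "https://api.fabric.microsoft.com/.default",  # assumed scope
}).encode()
token_request = urllib.request.Request(token_url, data=token_body, method="POST")

# 2) API call: attach the returned token (placeholder here) as a Bearer header.
api_request = urllib.request.Request(
    "https://api.fabric.microsoft.com/v1/workspaces/<workspace-id>/items",  # assumed
    headers={"Authorization": "Bearer <access-token>"},
)

print(token_request.get_method())  # POST
```

In a real application the secret would come from a secure store (such as AKV, per the previous bullet) and the token response would be parsed for the `access_token` field.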
Best-in-class connectivity and Data movement
Fabric Data Factory offers best-in-class connectivity to 170+ data sources and destinations, whether you are preparing data, moving data from different sources into OneLake for analytics, or simply moving data between sources and destinations.
Whether your data lives behind a virtual network (VNet), on-premises, or across multiple clouds, Fabric Data Factory provides a rich set of options for moving it securely, reliably, and with high performance.
Disabling public access to critical data safeguards sensitive information and prevents unauthorized breaches, ensuring compliance and operational integrity. The Virtual Network Data Gateway in Microsoft Fabric supports this by securely connecting data sources to cloud services through a managed virtual network, eliminating exposure to public networks. We are thrilled to announce that Virtual Network Data Gateway now extends its support to Data pipelines, Copy jobs, and Mirroring, empowering users with enhanced security and seamless data integration across these functionalities.
We are announcing product improvements and features in the copy activity in pipelines including:
- Using the Bulk API when connecting to Dataverse.
- Using deletion vector information to exclude deleted records when reading Delta tables with the Lakehouse connector.
- Support for column mapping and automatic creation of tables with new schemas.
- Performance improvements in Salesforce connector.
For more information about the product improvements and features refer to the FabCon blog post.
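Conceptually, column mapping in a copy activity selects and renames source columns as rows land in the destination. A minimal Python sketch of the idea follows; the mapping structure is an illustration, not the product's actual schema.

```python
# Conceptual sketch of copy-activity column mapping: select and rename
# source columns on the way to the destination. The mapping format here
# is illustrative, not the product's schema.

mapping = {  # source column -> destination column
    "cust_id": "CustomerId",
    "cust_name": "CustomerName",
}

def map_row(row: dict, mapping: dict) -> dict:
    """Keep only mapped columns, renamed to their destination names."""
    return {dest: row[src] for src, dest in mapping.items() if src in row}

source_rows = [
    {"cust_id": 1, "cust_name": "Ada", "internal_flag": True},
    {"cust_id": 2, "cust_name": "Grace", "internal_flag": False},
]
destination_rows = [map_row(r, mapping) for r in source_rows]
print(destination_rows[0])  # {'CustomerId': 1, 'CustomerName': 'Ada'}
```

Unmapped columns (like `internal_flag` above) simply never reach the destination, which is the behavior column mapping gives you declaratively.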
Mirroring in Fabric is a powerful turnkey replication feature that enables you to seamlessly reflect your existing data estate from any database or data warehouse into OneLake in Microsoft Fabric. Once Mirroring starts the replication process, the mirrored data is automatically kept up to date at near real-time in Fabric OneLake.
Connecting to databases that hold your most critical data, behind a firewall or on-premises, requires the right integration approach to be effective. Thus, we are excited to announce that Database Mirroring now supports connectivity to databases behind a firewall and on-premises, starting with Azure SQL Database, and soon with Snowflake and Azure SQL Managed Instance, via the On-Premises Data Gateway and Virtual Network Data Gateway. This amazing new capability enables database mirroring to replicate your data securely, reliably, and with high performance into OneLake in Microsoft Fabric.

In addition, we are excited to share many product improvements and features for Mirroring in Fabric:
- Introducing Mirroring for Azure Database for PostgreSQL Flexible Server.
- Announcing general availability of CI/CD support for Mirroring.
- Mirroring now supports Workspace monitoring.
- Mirroring supports replicating source schemas.
- Delta column mapping support with Mirroring.
- Open Mirroring UI enhancements and CSV file support.
- Mirroring for Azure SQL DB now supports tables without primary keys and reduced SQL roles.
- New regions expansion for Mirroring in Fabric.
Learn more about What’s new with Mirroring in Microsoft Fabric.
Copy job in Fabric Data Factory provides an intuitive and simplified experience, so that you can move data between data sources and destinations without creating any dataflows or data pipelines. It helps you get your data where you need it, securely and reliably. We are excited to announce the general availability (GA) of Copy job, along with the following product improvements and features for Copy job in Fabric:
- New data sources with 20+ connectors
- Public API and CI/CD support
- VNET Data Gateway support
- Upsert to SQL database & Overwrite to Fabric Lakehouse tables
- Real-time monitoring with in-progress view

Join the Private Preview of Native Change Data Capture (CDC) in Copy Job from Microsoft Fabric Data Factory, enabling the capture and replication of inserts, updates, and deletes automatically to keep your data in sync across supported stores.
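The idea behind CDC can be sketched in a few lines: each captured change carries an operation type and a key, and the destination applies inserts, updates, and deletes in order to stay in sync. Below is a simplified, conceptual illustration; the change-record shape is an assumption, not the Copy job's internal format.

```python
# Simplified illustration of applying a change-data-capture (CDC) feed to a
# keyed destination table. The change-record shape is an assumption.

def apply_changes(table: dict, changes: list) -> dict:
    """Apply ordered insert/update/delete records to a dict keyed by id."""
    for change in changes:
        op, key = change["op"], change["key"]
        if op in ("insert", "update"):
            table[key] = change["row"]
        elif op == "delete":
            table.pop(key, None)  # tolerate deletes of already-missing rows
    return table

destination = {1: {"name": "Ada"}}
feed = [
    {"op": "insert", "key": 2, "row": {"name": "Grace"}},
    {"op": "update", "key": 1, "row": {"name": "Ada L."}},
    {"op": "delete", "key": 2},
]
print(apply_changes(destination, feed))  # {1: {'name': 'Ada L.'}}
```

Because only the changed rows travel, CDC keeps the destination current without re-copying the full source on every run.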
For a deeper dive into our Copy Job feature announcements, please refer to our complete Copy Job announcements blog post.
Enterprise-ready data transformation and orchestration
Dataflows Gen2 provides market-leading Data Transformation capabilities based on a low-code experience with enterprise scale for data ingestion and transformation, thanks to the Fast Copy and High-Scale Data Transformation engines in Fabric. At Fabric Conference, we are announcing new and enhanced key enterprise capabilities, including:
- Incremental Refresh optimizes dataflow refresh operations by updating only the data that has changed, rather than refreshing and reprocessing the entire data from the data source and is now generally available. This not only improves efficiency but also reduces the load on system resources, making data transformations more scalable and efficient. Learn more about this capability: Incremental refresh in Dataflow Gen2
- Several experience and functional improvements to provide a seamless authoring workflow for CI/CD-enabled Dataflow Gen2 customers, including Save experience enhancements (‘Save & Run’ vs. ‘Save’), multitasking UX to enable users to work on multiple Dataflow Gen2 and other Fabric artifacts simultaneously, and support for frequency-based scheduled refresh (‘refresh this dataflow every N minutes/hours/days’).
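Incremental refresh is essentially watermark-based filtering: each refresh processes only rows newer than the last successfully processed value, then advances the watermark. A conceptual sketch of that idea, not the product's implementation:

```python
# Conceptual sketch of incremental refresh: process only rows whose change
# timestamp exceeds the stored watermark, then advance the watermark.
from datetime import date

def incremental_refresh(rows: list, watermark: date):
    """Return (rows to process, new watermark)."""
    fresh = [r for r in rows if r["modified"] > watermark]
    new_watermark = max((r["modified"] for r in fresh), default=watermark)
    return fresh, new_watermark

source = [
    {"id": 1, "modified": date(2025, 1, 1)},
    {"id": 2, "modified": date(2025, 2, 1)},
    {"id": 3, "modified": date(2025, 3, 1)},
]
fresh, wm = incremental_refresh(source, watermark=date(2025, 1, 15))
print([r["id"] for r in fresh])  # [2, 3]
```

Skipping the already-processed rows is what reduces refresh time and system load as the source grows.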
Pipeline users will be able to build new powerful metadata-driven patterns using variable libraries while also enjoying these amazing new features:
- Easily trigger pipelines upon receiving file events in OneLake with the new OneLake pipeline trigger as part of our general availability of pipeline triggers.
- Fabric User Data Functions is now publicly available as a preview in Fabric, supporting custom code modules. Data Factory pipelines now fully support your data engineers orchestrating their UDF code from a data pipeline using the Functions pipeline activity.
- SSIS users will now be able to lift-and-shift their SSIS packages into Fabric with the SSIS private preview which also now includes storing your packages in OneLake storage.
- If your Data Engineering team prefers to organize your data pipelines as Python-based DAGs using Apache Airflow, you’ll be excited to see our GA announcement for Apache Airflow job in Fabric Data Factory.
- We’ve expanded the limits of activities per pipeline at the request of many of our customers who build complex workflows using data pipelines. We now support up to 120 activities in a pipeline!
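Variable libraries, mentioned above, let one pipeline definition resolve different values per workspace or environment. A conceptual sketch of the resolution idea follows; the names and structure are illustrative assumptions, not the actual variable library format.

```python
# Conceptual sketch of a variable library: one set of variable names with
# per-environment value sets, resolved at run time. Structure is illustrative.

library = {
    "variables": ["StorageAccount", "BatchSize"],
    "value_sets": {
        "dev":  {"StorageAccount": "devstore",  "BatchSize": 100},
        "prod": {"StorageAccount": "prodstore", "BatchSize": 5000},
    },
}

def resolve(library: dict, environment: str) -> dict:
    """Resolve every declared variable for the given environment."""
    values = library["value_sets"][environment]
    return {name: values[name] for name in library["variables"]}

print(resolve(library, "dev")["StorageAccount"])  # devstore
```

The pipeline references only the variable names; promoting it from dev to prod swaps the value set without editing the pipeline itself.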
AI-powered development for data integration
Copilot and other generative AI features in preview bring new ways to transform and analyze data, generate insights, and create visualizations and reports in Microsoft Fabric and Power BI.
As part of Copilot in Fabric, we are excited to share the general availability of Copilot for Data Factory. With this GA, Copilot for Data Factory brings together capabilities that empower you to build dataflows and data pipelines using natural language text as an input modality.

Learn more about what you can do with Copilot for Data Factory capabilities.
Upgrade pathways to Fabric Data Factory
One of the feedback items that we have been hearing is the need for a pathway to upgrade your existing Azure Data Factory, Synapse pipelines, and Power BI Dataflows to Fabric Data Factory. Modernizing your existing investments in Synapse, ADF, and Power BI is an area where we continuously explore ways to make things easier.
We recently announced new documentation that provides roadmaps and guides for migrations and enables ADF users to quickly and easily mount their factories directly into a Fabric workspace as an Azure Data Factory item. Learn more about our upgrade pathways from ADF & Synapse to Fabric Data Factory from this recent video.
Dataflow Gen2 significantly enhances the data ingestion and transformation capabilities available in Dataflow Gen1, with new features such as Output Destinations, Copilot, Fast Copy (for large-scale data ingestion), a new High-Scale Dataflows Transformation engine, VNET Gateway support, enhanced Refresh History and Diagnostics experiences, and more. We have been working to make it easier for you to upgrade your Dataflow Gen1 items (also known as ‘Power BI Dataflows’) to Dataflow Gen2 items. The new Save As feature enables you to migrate an existing Dataflow Gen1 to Dataflow Gen2 easily.
Learn more about upgrading from Dataflow Gen1 to Dataflow Gen2 – Migrate from Dataflow Gen1 to Dataflow Gen2 – Microsoft Fabric | Microsoft Learn
Summary
Thank you for working with the Fabric Data Factory team and sharing your product feedback. Many of these product improvements and new capabilities are the result of listening to your feedback across different channels and interactions.
We can’t wait to learn how you are using Fabric Data Factory. As we continue to bring exciting new innovations to the data integration space, we hope to hear your feedback on our Microsoft Fabric Ideas site.