Microsoft Fabric Updates Blog

Mirroring for SQL Server in Microsoft Fabric (Preview)

In today’s AI-driven world, analytics platforms are only as good as their data. With the ever-increasing amount of data collected across an enterprise’s applications, databases, and data warehouses, managing and ingesting that data into a central platform for analytics and AI is a cumbersome and costly process. Databases and data warehouses use proprietary storage formats, making it impossible to create shortcuts to their data. Data needs to be extracted, transformed, normalized, and made available in a central place for analytics. Even when this is implemented, the data is not real-time, so insights go stale quickly and users end up querying the source directly.

Mirroring provides a modern way of accessing and ingesting data continuously and seamlessly from any database or data warehouse into OneLake in Microsoft Fabric. This all happens in near real time, giving users immediate access to changes in the source!

Today we are thrilled to announce that Mirroring for SQL Server in Fabric is in preview for all in-market versions of SQL Server, from SQL Server 2016 through SQL Server 2022.

Additionally, with the preview announcement of SQL Server 2025, we are also excited to announce the preview of Mirroring for SQL Server 2025 in Fabric.

Let’s take a look at the capabilities for each of these previews.

Mirroring in Fabric from any of your SQL Server sources ensures that your source transactional SQL Server database is always up to date and available in Fabric OneLake, providing a solid foundation for reporting, advanced analytics, AI, and data science. There is no complex setup or ETL for mirroring. You set up the mirror from the Fabric portal by providing the SQL Server and database connection details and selecting what needs to be mirrored into Fabric: either all data or a user-selected set of eligible tables. And just like that, mirroring is ready to go. Mirroring a SQL Server database creates an initial snapshot in Fabric OneLake, after which data is kept in sync in near real time with every transaction: when a table is created or dropped, or when data gets updated.
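The snapshot-then-sync model described above can be sketched in a few lines. This is a conceptual illustration only, with invented row shapes and operation names; Fabric’s mirroring engine handles all of this internally, and no code is required to set up a mirror.

```python
# Conceptual sketch of the mirroring model: an initial full snapshot of the
# source, followed by incremental application of change records.

def take_snapshot(source_rows):
    """Copy every source row into the replica (the initial snapshot)."""
    return {row["id"]: dict(row) for row in source_rows}

def apply_change(replica, change):
    """Apply one captured change (insert/update/delete) to the replica."""
    op, row = change["op"], change["row"]
    if op == "delete":
        replica.pop(row["id"], None)
    else:  # insert or update
        replica[row["id"]] = dict(row)

# Step 1: the initial snapshot...
source = [{"id": 1, "name": "Alice"}, {"id": 2, "name": "Bob"}]
replica = take_snapshot(source)

# Step 2: ...then each subsequent transaction arrives as a change record.
apply_change(replica, {"op": "update", "row": {"id": 2, "name": "Bobby"}})
apply_change(replica, {"op": "delete", "row": {"id": 1}})

print(replica)  # {2: {'id': 2, 'name': 'Bobby'}}
```

The key property this illustrates is that after the one-time snapshot, only deltas flow, which is what keeps the mirror in near real time without repeated bulk loads.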

Figure: diagram depicting mirroring from various SQL sources to Fabric OneLake

Mirroring for SQL Server (2016-2022) in Microsoft Fabric

Mirroring to Fabric from these SQL Server versions relies on the Change Data Capture (CDC) technology available in SQL Server. CDC captures an initial snapshot of all the tables selected for mirroring and thereafter replicates the changes. Additionally, an on-premises data gateway (OPDG) must be installed in your SQL Server environment. The mirroring service connects to the OPDG to read the initial snapshot as well as the changes, pulls the data into OneLake, and converts it into an analytics-ready format in Fabric.
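To make the CDC mechanism concrete, here is a small Python simulation of how a reader might interpret rows from a SQL Server CDC change table. CDC tags each captured row with an `__$operation` code (1 = delete, 2 = insert, 3 = update before-image, 4 = update after-image); the table contents and row shapes below are invented for illustration, and the mirroring service performs this interpretation for you.

```python
# Simulate applying SQL Server CDC change-table rows to a replica.
# __$operation codes: 1 = delete, 2 = insert,
#                     3 = update (old values), 4 = update (new values).

def apply_cdc_rows(replica, cdc_rows):
    for row in cdc_rows:
        op, data = row["__$operation"], row["data"]
        if op in (2, 4):          # insert, or the after-image of an update
            replica[data["id"]] = data
        elif op == 1:             # delete
            replica.pop(data["id"], None)
        # op == 3 is the update's before-image; a simple replica can
        # ignore it and apply only the after-image (op == 4).
    return replica

changes = [
    {"__$operation": 2, "data": {"id": 10, "qty": 5}},   # insert
    {"__$operation": 3, "data": {"id": 10, "qty": 5}},   # update: old values
    {"__$operation": 4, "data": {"id": 10, "qty": 7}},   # update: new values
    {"__$operation": 2, "data": {"id": 11, "qty": 1}},   # insert
    {"__$operation": 1, "data": {"id": 11}},             # delete
]
print(apply_cdc_rows({}, changes))  # {10: {'id': 10, 'qty': 7}}
```

Because CDC emits both before- and after-images for updates, consumers that only materialize current state can skip operation 3 entirely, as the sketch does.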

Figure: High level architecture diagram for mirroring from SQL Server 2016-2022 to Fabric.

For detailed steps (including prerequisites) to configure and monitor mirroring from SQL Server to Fabric, refer to the Mirrored SQL Server documentation.

SQL Server 2022 mirroring setup and replication in action:

Mirroring for SQL Server 2025 in Fabric

While the main functionality and experience stay the same as above, mirroring from SQL Server 2025 uses change feed instead of Change Data Capture. This is the same technology used in Mirroring for Azure SQL in Fabric. In this version, SQL Server itself tracks and replicates the initial snapshot and subsequent changes to a landing zone in OneLake, which the mirroring engine in Fabric then converts into an analytics-ready format. The on-premises data gateway is used primarily as a control plane to connect and authenticate your on-premises environment to Fabric. The Arc agent is required for outbound authentication from SQL Server to Fabric.
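The landing-zone flow described above (source writes change batches; a separate engine converts them into the queryable table) can be sketched as follows. The file layout, batch format, and field names here are invented for illustration and are not Fabric’s actual on-disk formats.

```python
# Rough sketch of a landing-zone pipeline: a producer appends ordered change
# batches as files, and a converter replays them to materialize the table.
import json
import tempfile
from pathlib import Path

landing_zone = Path(tempfile.mkdtemp())

# Source side: append numbered batch files (conceptually, the change feed).
batches = [
    [{"op": "insert", "id": 1, "city": "Oslo"}],
    [{"op": "insert", "id": 2, "city": "Lima"}, {"op": "delete", "id": 1}],
]
for i, batch in enumerate(batches):
    (landing_zone / f"batch_{i:05}.json").write_text(json.dumps(batch))

# Engine side: read batches in order and materialize the current table state.
table = {}
for path in sorted(landing_zone.glob("batch_*.json")):
    for change in json.loads(path.read_text()):
        if change["op"] == "delete":
            table.pop(change["id"], None)
        else:
            table[change["id"]] = {"id": change["id"], "city": change["city"]}

print(table)  # {2: {'id': 2, 'city': 'Lima'}}
```

The design point this illustrates is the decoupling: the source only needs write access to the landing zone, while the conversion to an analytics-ready format happens independently on the Fabric side.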

SQL Server 2025 mirroring setup and replication in action:


For detailed steps (including prerequisites) to configure, monitor, and troubleshoot mirroring from SQL Server 2025 to Fabric, refer to the Mirrored SQL Server documentation.

The table below summarizes the differences between various SQL sources when mirroring to Fabric.

| | SQL Server 2016-2022 | SQL Server 2025 | Azure SQL |
| --- | --- | --- | --- |
| Capturing incremental changes | Uses Change Data Capture (CDC) | Uses the change feed method | Uses the change feed method |
| Arc agent | Not required | The Arc agent provides a system-managed identity for outbound authentication | Uses the system-managed identity auto-created for Azure SQL |
| SQL Server Agent | CDC relies on SQL Server Agent for key change-capture functions | Not required | Not required |
| On-premises data gateway (OPDG) | The OPDG writes data into OneLake | The OPDG is used for command and control; SQL Server writes directly to OneLake | The OPDG is required only when Azure SQL is configured in a private network |

From here, the mirrored data, stored in Delta format, is ready for immediate consumption across all Fabric experiences: Power BI with the new Direct Lake mode, Data Warehouse, Data Engineering, Lakehouse, KQL Database, Notebooks, and Copilots all work instantly.

Resources:

What’s new with Mirroring at Microsoft Build 2025 – Mirroring in Fabric

Try out Mirroring in Fabric: sign up for a free trial and get started.

Download the SQL Server 2025 preview.
