Microsoft Fabric Updates Blog

The November 2025 Fabric release introduces several major updates, including the general availability of SQL database, Cosmos DB, and enhanced mirroring support for key data sources such as SQL Server, Cosmos DB, and PostgreSQL.

This month also brings new AI-driven features like Copilot sidecar chat tools and real-time data exploration, as well as crucial platform enhancements such as Azure DevOps cross-tenant support, improved security permissions in OneLake, and expanded connectivity through new connectors and developer tooling. These updates are designed to empower users with greater flexibility, intelligence, and control across the Fabric platform.

Contents

Events and Announcements

Get Fabric certified for FREE during Fabric Data Days

Supercharge your career with 50+ days of data & AI. Join us for 2 months of learning, contests, live sessions, discounted exam vouchers, and community connection.

Through December 5, 2025, you can get your Fabric certification for free with a 100% discount voucher for exams DP-600 and DP-700. Exams must be taken by December 31, 2025.

Request your voucher now!

Your favorite Fabric Community Conference is back – with a twist!

SQLCon Joins FabCon from March 16-20, 2026, in Atlanta, GA! For the first time ever, SQLCon will be co-located with FabCon, bringing together the entire data community in one place.

Two Conferences. One pass. One Epic Week. Join us for the ultimate Microsoft Fabric, SQL, Power BI, Real-Time Intelligence, AI, and Databases community-led event.

Master SQL Server 2025 internals in the morning, dive into Fabric innovations in the afternoon, attend Power Hour before dinner and network with peers from both communities. The sessions you choose are totally up to you.

Register with code FABCOMM to save $200.

Fabric Platform

Govern in OneLake for Tenant admins (Preview)

In today’s data-driven world, effective data governance is crucial for ensuring the integrity, security, and usability of data. The governance experience for Fabric admins within the OneLake catalog is being extended. With this experience we’re empowering Fabric admins with the tools and insights they need to govern and secure their organizational data estate within Fabric.

Admins can now review and manage their tenant more efficiently by viewing all monitoring reports in one place, with cross-filtering and drill-through features. With Copilot, simply select the ‘Copilot’ icon in the ‘view more’ report to chat with your data, uncover trends, drill into details, and get quick summaries.

Learn more about the OneLake catalog, governance in the OneLake catalog, and governance and compliance in Fabric in our documentation.

Right-click tab menu for easier multitasking

A new right-click menu has been implemented for horizontal tabs, enhancing the efficiency of tab management.

You can now:

  • Open in new browser tab: Opens the current item in a separate browser tab.
  • Pin tab: Pin important tabs to keep them always visible and easily accessible.

Refer to the multitasking improvements documentation to learn more.

Azure DevOps Service Principal & Cross Tenant Support (Generally Available)

This highly anticipated feature empowers Fabric customers to achieve a comprehensive set of automation processes.

Users can develop end-to-end automation flows: create a Fabric workspace, then seamlessly connect it to an Azure DevOps repository, which can now reside in a different tenant than their Fabric home tenant, using the Fabric CLI or Infrastructure as Code (IaC) with the Fabric Terraform module, all powered by secure, scalable service principal authentication.
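
As a hedged sketch of what one step of such a flow can look like, the Python call below hits the documented Fabric Git ‘connect’ REST endpoint to attach a workspace to an Azure DevOps repository; the placeholder IDs, names, and token value are illustrative assumptions.

    import requests

    token = "<service-principal-access-token>"  # acquired via MSAL for the Fabric API scope
    workspace_id = "<workspace-id>"

    # Connect the workspace to an Azure DevOps repo (which may live in another tenant)
    response = requests.post(
        f"https://api.fabric.microsoft.com/v1/workspaces/{workspace_id}/git/connect",
        headers={"Authorization": f"Bearer {token}"},
        json={
            "gitProviderDetails": {
                "gitProviderType": "AzureDevOps",
                "organizationName": "<ado-organization>",
                "projectName": "<ado-project>",
                "repositoryName": "<repo>",
                "branchName": "main",
                "directoryName": "/",
            }
        },
    )
    response.raise_for_status()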

To learn more, refer to the Automate Git integration by using APIs documentation.

OneLake

OneLake diagnostics (Generally Available)

OneLake diagnostics makes it simple to answer ‘who accessed what, when, and how’ across your Fabric workspaces. This enables federated data governance, operational transparency, and compliance reporting at scale. Because events are stored in your Lakehouse as open JSON files, you can analyze them with the tools you already use: Spark, SQL, Eventhouse, Power BI, or any solution that ingests JSON logs. And because OneLake implements the ADLS and Azure Blob Storage APIs, all that data is accessible outside of Fabric too!
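
For example, a minimal PySpark sketch along these lines reads and aggregates the diagnostic events from a Fabric notebook; the folder path and field names are assumptions, so check your Lakehouse for the actual layout and schema.

    # Run in a Fabric notebook, where `spark` is predefined.
    # Path and field names are placeholders for your diagnostics output.
    events = spark.read.json("Files/diagnostics/*.json")

    # Count operations per caller to see who accessed what, and how often
    events.groupBy("operationName", "callerIdentity").count().show()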

OneLake diagnostics can be enabled in workspace settings

For more in-depth information, refer to the Gain End-to-End Visibility into Data Activity Using OneLake diagnostics (Generally Available) blog post.

OneLake security ReadWrite permission (Preview)

OneLake security now supports ReadWrite access controls, giving data owners the ability to define precise permissions for how users can write to data in lakehouses. This enhancement allows teams to assign ReadWrite access to workspace Viewers or users with only Read access, so those users can write data to tables and folders without needing elevated workspace permissions to create and manage Fabric items. It’s a critical step toward enabling secure, collaborative workflows without compromising governance.

With ReadWrite access, all OneLake write operations can be performed through Spark notebooks, the OneLake File Explorer, or OneLake APIs. This gives teams the flexibility to follow the principle of least privilege while still enabling key workflows, such as uploading PDF or Excel files for further analysis.
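
As a quick illustration, here is a minimal Spark write that a user holding only ReadWrite access could run from a notebook; the OneLake path and table name are placeholders.

    # Run in a Fabric notebook; workspace and lakehouse names are placeholders
    df = spark.createDataFrame([(1, "report.pdf")], ["id", "file_name"])

    # Append to a table without holding workspace permissions to manage items
    df.write.mode("append").format("delta").save(
        "abfss://<workspace>@onelake.dfs.fabric.microsoft.com/<lakehouse>.Lakehouse/Tables/uploads"
    )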

To learn more about how ReadWrite access works in OneLake security, check out OneLake security access control model (preview) documentation.

Databases

SQL database in Fabric (Generally Available)

Built on the trusted SQL Server and Azure SQL Database engine, this is the first fully SaaS-native operational database experience within Microsoft Fabric. It empowers developers, data engineers, and IT professionals to build scalable, secure, and intelligent applications faster than ever.

To learn more, refer to the SQL database in Fabric documentation.

SQL Auditing (Preview)

Auditing is needed mainly for security and compliance reasons, to ensure transparency about everything that happens in the database. Its logs can later be used for:

  1. Compliance auditing (HIPAA, SOX), supporting investigations and threat analysis
  2. Monitoring database activities
  3. Tracking permission changes

To learn more, refer to the SQL Auditing documentation.

Customizable PITR backup retention (Generally Available)

Point-in-time restore backups currently default to 7 days of retention. We are now enhancing the ability to customize this retention to anywhere from 1 to 35 days, depending on your business needs for data retention. This setting can be managed in the database settings, under the Backup retention policy option.


To learn more, refer to the SQL database backups documentation.

Customer-managed keys in SQL Database (Preview)

Microsoft Fabric already encrypts all data-at-rest using Microsoft-managed keys. But for organizations with strict data governance policies or regulatory requirements, Customer-managed keys (CMK) offer an additional layer of control and flexibility. With CMK, you can use your own Azure Key Vault keys to encrypt SQL database data in Fabric workspaces, giving you:

  • Compliance with industry-specific encryption standards
  • Key ownership and rotation control
  • Granular access management

To learn more about customer-managed keys in SQL Database, check out the full blog post.

New Tools in Copilot Sidecar Chat (Generally Available)

A major upgrade to Copilot for SQL database in Fabric introduces new interactive toolsets in the Sidecar Chat for diagnosing performance, optimizing design, and authoring SQL code. Try asking Copilot:

  • ‘Which queries are consuming the most CPU in my database right now?’
  • ‘List tables without a primary key or clustered index.’
  • ‘Find missing index recommendations for my database.’

Refer to the documentation for more information about the Copilot features in SQL database in Fabric.

Data Virtualization support for Fabric SQL database (Preview)

Data virtualization enables you to leverage all the power of Transact-SQL (T-SQL) to seamlessly query external data from OneLake, eliminating the need for data duplication or ETL processes and allowing for faster analysis and insights.

Integrate external data, such as CSV and Parquet files, with your relational database while maintaining the original data format and avoiding unnecessary data movement.
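
As a hedged sketch, the Python snippet below runs such a query from a client application; the OPENROWSET shape follows the Azure SQL Database data virtualization preview, and the connection string and file path are placeholders, so confirm the exact T-SQL surface against the linked documentation.

    import mssql_python

    conn = mssql_python.connect("Server=<fabric-sql-endpoint>;Database=<db>;Encrypt=yes;")
    cursor = conn.cursor()

    # Query an external Parquet file in OneLake without copying it into the database
    cursor.execute("""
        SELECT TOP 10 *
        FROM OPENROWSET(
            BULK 'https://onelake.dfs.fabric.microsoft.com/<workspace>/<lakehouse>.Lakehouse/Files/sales.parquet',
            FORMAT = 'PARQUET'
        ) AS sales
    """)
    for row in cursor.fetchall():
        print(row)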

To learn more, refer to the Data virtualization with Azure SQL Database (Preview) documentation.

Microsoft Python Driver for SQL database – mssql-python (Generally Available)

This milestone is an important advancement in offering Python developers a modern, performant, and user-friendly experience when working with SQL Server, Azure SQL Database, or SQL databases within Microsoft Fabric.
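
A minimal usage sketch, assuming the driver’s DB-API-style connect/cursor/execute surface; the connection string is a placeholder.

    import mssql_python

    # Connect to SQL Server, Azure SQL Database, or a SQL database in Fabric
    conn = mssql_python.connect(
        "Server=<your-server>.database.windows.net;Database=<your-db>;Encrypt=yes;"
    )
    cursor = conn.cursor()
    cursor.execute("SELECT TOP 5 name FROM sys.tables")
    for row in cursor.fetchall():
        print(row)
    conn.close()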


To learn more, refer to the Microsoft Python Driver for SQL Server – mssql-python (preview) documentation.

Cosmos DB in Fabric (Generally Available)

You can now analyze live Cosmos DB data directly in Fabric, with no complex or costly ETL required. Data stays in sync with OneLake, providing a single source of truth for real-time and historical insights. As a distributed NoSQL database, Cosmos DB in Fabric brings support for semi-structured and unstructured data to analytics and ML workloads, along with a host of new capabilities for Fabric, including vector indexing and search using DiskANN and reverse ETL, which lets customers serve analytics to users with incredible speed at massive scale. Whether you’re building dashboards, running analytics, or training ML models, you can work on fresh operational data for faster, AI-ready insights.

This GA release delivers production-grade performance, enterprise security, and the scale of Cosmos DB’s low-latency architecture within the unified Fabric experience.
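
As a hedged sketch of programmatic access, the snippet below queries a container with the azure-cosmos Python SDK and Microsoft Entra authentication; the endpoint, database, and container names are placeholders you would copy from your Fabric Cosmos DB item.

    from azure.cosmos import CosmosClient
    from azure.identity import DefaultAzureCredential

    # Endpoint copied from the Fabric Cosmos DB item (placeholder value)
    client = CosmosClient("<cosmos-endpoint-url>", credential=DefaultAzureCredential())
    container = client.get_database_client("<database>").get_container_client("<container>")

    # Query live operational documents
    for doc in container.query_items(
        query="SELECT TOP 5 c.id FROM c",
        enable_cross_partition_query=True,
    ):
        print(doc["id"])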

To learn more, check out the Getting Started with Cosmos DB in Microsoft Fabric Demo, refer to the What is Cosmos DB in Microsoft Fabric (preview)? documentation and visit our samples gallery.

Data Engineering

Spark connector for SQL databases (Preview)

The Spark connector for SQL databases is a high-performance library that makes reading from and writing to SQL databases in Fabric easy and seamless. The connector offers the following capabilities (see the sketch after this list):

  • Preinstalled in the Fabric runtime, so you don’t need to install it separately.
  • Use Spark to run large read and write operations against Azure SQL Database, Azure SQL Managed Instance, SQL Server on Azure VMs, and Fabric SQL databases.
  • When you query a table or a view, the connector honors security models set at the SQL engine level, including object-level security (OLS), row-level security (RLS), and column-level security (CLS).
  • Supports multiple authentication methods and multiple write modes when writing data to the database.
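
As a hedged sketch of a read through the connector, the format and option names below follow the open-source Apache Spark connector for SQL Server and are assumptions here; check the linked documentation for the exact surface of the preinstalled connector.

    # Run in a Fabric notebook; server, database, and token are placeholders
    token = "<access-token>"

    df = (
        spark.read.format("com.microsoft.sqlserver.jdbc.spark")  # assumed format name
        .option("url", "jdbc:sqlserver://<server>.database.windows.net:1433;databaseName=<db>")
        .option("dbtable", "dbo.Orders")
        .option("accessToken", token)  # one of several supported authentication methods
        .load()
    )
    df.show(5)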


To learn more, refer to the Spark connector for SQL databases documentation.

ArcGIS GeoAnalytics for Microsoft Fabric Spark (Generally Available)

Microsoft and Esri have partnered to bring spatial analytics into Microsoft Fabric for production scenarios. Our collaboration with Esri introduces cutting-edge spatial analytics integrated within Microsoft Fabric Spark notebooks and Spark job definitions (across both the Data Engineering and Data Science experiences).

The following is an example of gaining insight into overall patterns of features and their associated values for different distributions or across different time periods for the Total Insured Value:


This integrated product experience empowers Spark developers and data scientists to natively use Esri capabilities and run GeoAnalytics functions and tools within Fabric Spark for transformation, enrichment, and pattern / trend analysis of data across different use cases without any need for separate installation and configuration.

Here is an example of Total Insured Value by probability of hurricane force winds for the given geographical area:

ArcGIS GeoAnalytics offers a comprehensive suite of geospatial capabilities that cater to a wide range of applications. Esri is integrating components of the ArcGIS suite of products into Microsoft Fabric. Specifically, the ArcGIS GeoAnalytics for Microsoft Fabric product brings a set of geospatial functions and tools directly into the Fabric Spark environment to facilitate analysis of events, visualize relationships between places, and derive valuable insights from your data.

To learn more, refer to the ArcGIS GeoAnalytics for Microsoft Fabric (Generally Available) documentation.

Faster Notebook Loading with Progressive Rendering

Opening notebooks with large table outputs can be frustrating when performance slows down. Progressive rendering changes that experience by loading display() outputs incrementally instead of waiting for everything to finish before you can interact. This means you can start editing, running cells, and exploring your notebook right away while the remaining display() outputs continue to render in the background.

For data-heavy notebooks or those with complex tables and visualizations, this improvement makes a big difference. Progressive rendering keeps the interface responsive, reduces wait times, and helps you stay productive without interruptions. It’s a simple yet powerful enhancement designed to make working with large notebooks smoother and faster.

Note that progressive rendering currently only works with Spark notebooks.

Check out our video blog to see progressive rendering in action and experience the difference!

Optimal Refresh for Materialized Lake Views (Preview)

This feature enhances refresh performance by determining the most effective refresh strategy (incremental, full, or no refresh) for your Materialized lake views.

The ‘optimal refresh’ feature is enabled by default; simply enable the Delta change data feed (CDF) property on the source, as shown in the sketch below, so you can immediately benefit from this capability.
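
For example, enabling the change data feed property on a Delta source table from a notebook looks like this (the table name is a placeholder):

    # Turn on the Delta change data feed (CDF) property for a source table so
    # optimal refresh can detect and apply incremental changes
    spark.sql("""
        ALTER TABLE sales_source
        SET TBLPROPERTIES ('delta.enableChangeDataFeed' = 'true')
    """)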

For additional information, refer to the Optimal refresh for materialized lake views in a lakehouse documentation.

Fabric Data Engineering VS Code improvement

In response to recent feedback from the community, we are pleased to introduce a series of quality improvements to the Fabric Data Engineering VS Code extension.

A common request has been the ability to open several Fabric notebooks in a single VS Code window. With the VS Code workspace feature, users can open multiple Fabric notebooks from different Fabric workspaces within a single VS Code instance. By selecting the ‘Add to Workspace’ option, the notebook opens in the same VS Code window alongside all existing notebooks.

Another frequently requested improvement is the removal of the strict reliance on Conda. With the introduction of the ‘Microsoft Fabric Runtime’, users can now execute complete notebook code within a remote workspace, so enforcing local desktop checks for Conda availability is no longer necessary. Consequently, this validation has been removed when the extension is activated.

For ISVs and partners, it is common to work with multiple tenants or accounts within the same tenant. During the sign-in process, users now have the option to select a different Fabric account, rather than defaulting to the account already signed in to the Fabric portal.

All these changes are available in the VS Code Marketplace now with the release of version 1.15.3.

To learn more, refer to the Fabric Data Engineering VS Code experience documentation.

New features in Fabric User Data Functions, Ignite edition

The User Data Functions team has been hard at work to bring you the functionality you need to tie together your Fabric architectures and create robust data solutions. This section covers the latest features in Fabric User Data Functions for the Microsoft Ignite 2025 event. You can find all the updates in the Functions Ignite 2025 blog post.


1. Fabric Activator support (preview)

You can now set up User Data Functions as actions from your Activator rules. To do this, create a new rule for one of your event categories and select the new ‘Run Function’ action from the list. You can pass parameters from your events and set conditions to run your functions based on your events’ properties.


You can leverage this feature to create efficient real-time event processing experiences where every event is processed by an individual function run. To learn more, refer to the Activator integration documentation.

2. Variable Library integration

You can now connect to your Variable Libraries from User Data Functions. You can do this by using the Manage connections experience and creating a connection to your Variable Library items. You can leverage this integration to use different value sets inside your functions without making any code changes.


This feature works especially well with Fabric CI/CD where you can leverage different value sets for each of the environments you are working with. Learn more by reading the Variable Library integration article in the User Data Functions documentation.

3. Azure Key Vault support

Using a Key Vault is the best-practice way to work with secrets, such as API keys, passwords, and certificates. With this method, you can access any Azure Key Vault that your user account in Fabric has access to. Make sure to assign Reader permissions for your secrets to your account.


This feature is helpful for writing functions to consume external APIs securely. To learn more, refer to the Azure Key Vault connection documentation.

4. Cosmos DB support

You can now use a native-programming approach to connect to your Cosmos DB databases hosted on Fabric or Azure. Cosmos DB allows you to quickly set up the data tier of your architectures since you don’t need to create a schema. You can store JSON documents with any structure: properties, arrays, nested objects, etc.


You can get started by retrieving your endpoint from your Fabric Cosmos DB item and using any of the samples included in the Portal editor. To learn more, refer to the Cosmos DB documentation.

And that’s it! Make sure to visit the Functions Ignite 2025 blog post to read the full updates. To learn more about this function, refer to the Fabric User Data Functions documentation.

Azure Artifact Feed in Fabric Environment (Preview)

Fabric Environment now supports installing Python libraries directly from Azure Artifact Feed! This new capability makes it easier and more secure for teams to manage and deploy custom libraries at scale.

To install packages from your Azure Artifact Feed, you need to set up the connection in Fabric and then specify it in an Environment.

Set up the connection in Fabric – In Fabric, connections are set up through the Connections component. When creating the new connection, choose the ‘cloud’ type and the ‘Azure Artifact Feed (Preview)’ connection type. Ensure you select the checkbox ‘Allow Code-First Artifacts to access this connection’. Record the connection ID after successful creation; it is needed for specifying the connection in the Environment.

Installing packages in the Environment from your Azure Artifact Feed – Along with Azure Artifact Feed support, Fabric Environment has introduced a brand-new UX to better support private repository management. You can now copy, paste, and edit your YAML configuration directly in the Fabric UI, making it simple to manage libraries from both public and private sources.

You can now use the new YAML editor in Fabric Environment to list your dependencies and reference your Azure Artifact Feed connection. Note that the Azure Artifact Feed URL needs to be replaced by the connection ID to be correctly recognized by the Environment.
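
A hypothetical sketch of what such a YAML configuration can look like; the exact keys are assumptions, with the connection ID standing in where a feed URL would normally appear:

    # environment YAML (hypothetical) - reference the Azure Artifact Feed by Connection ID
    dependencies:
      - pip:
          - --extra-index-url <connection-id>   # Connection ID replaces the feed URL
          - my-internal-package==1.2.0          # private package from the feed
          - pandas==2.2.2                       # public packages still resolve normally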

To learn more about this feature, refer to the library management in Fabric environments documentation.

Data Science

Connect Data Agents to your Azure Search Index in Microsoft Foundry

Data agent creators can now connect their agents directly to Azure AI Search indexes built in Microsoft Foundry, unlocking powerful unstructured data scenarios. Using the resource URL, you can securely connect to your index—data agents fully respect the permissions of your Azure AI resources.

In Microsoft Foundry, you can craft rich AI Search indexes with custom enrichments, preprocessing logic, and tailored schemas for PDFs, text files, and more. Once connected, Data Agents can reason over that unstructured content and even join insights from your AI Search index with your structured data sources, giving you a unified, intelligent view across all your data.

To learn more about how you can connect your AI Search index from Microsoft Foundry, refer to the Configure your data agent documentation.

Updated Example Query and Instruction Data Agent Limits

We’ve increased several key limits to give creators more room to guide and shape Data Agent behavior. Data Source Instructions for Eventhouse KQL databases now support up to 15,000 characters (previously 5,000), giving you far more space to describe schemas, business logic, and edge cases. Example Queries have expanded from 1,000 to 5,000 characters, allowing richer examples for complex question patterns. These changes give you greater flexibility and control when tuning how your Data Agent interprets and answers user prompts.

To learn more about the data agent configurations, refer to the Data agent configurations documentation.

Fabric AI Functions Enhancements (Generally Available)

Apply powerful functions such as ai.extract(), ai.classify(), ai.generate_response(), and more, to turn your data into insights with just a single line of code.

This release also introduces new parameters for greater flexibility and control, such as response_format in ai.generate_response() to define your output structure, and instructions in ai.summarize() to provide additional context to the LLM. We’ve also added support for advanced configurations when using gpt-5, such as verbosity and reasoning_effort.
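
A minimal sketch of the one-line pattern, assuming the pandas .ai accessor that AI functions attach in a Fabric notebook; the column names and labels are examples.

    import pandas as pd

    df = pd.DataFrame({"review": ["Great product, fast shipping!", "Arrived broken."]})

    # One line per insight: classify sentiment and produce a short summary
    df["sentiment"] = df["review"].ai.classify("positive", "negative")
    df["summary"] = df["review"].ai.summarize()
    print(df)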

Finally, we’ve increased default concurrency for faster execution and expanded Microsoft Foundry model integration to PySpark, allowing you to run AI Functions on models beyond OpenAI. These updates will be generally available across all geographies in the coming weeks.

To learn more, refer to the Transform and enrich data with AI functions documentation.

MCP Server support in Fabric data agents

Support for managed MCP server endpoints is now available in Fabric data agents. This enables seamless interoperability across the AI ecosystem, allowing external systems and services to tap into the rich domain expertise of data agents and access curated knowledge stored in OneLake.

This enhancement makes it easier for organizations to connect intelligent agents with enterprise-grade data—unlocking more powerful, context-aware insights from both structured and unstructured sources.

To learn more about how to consume your Fabric data agent as an MCP server in VS Code, refer to the Data agent MCP Server documentation.

Fabric data agents now integrate with Microsoft 365 Copilot

Data agents in Fabric now deliver enhanced integration with Microsoft 365, enabling seamless access to enterprise data within M365 Copilot. Through this integration, users can query and reason over curated data in OneLake directly from M365 applications, bridging productivity and analytics workflows.


This capability extends M365 beyond document-centric interactions, allowing Copilot to leverage governed, enterprise-grade data models for more precise, context-aware responses and insights.

To learn more about how to consume your Fabric data agent in M365 Copilot, refer to the What’s New for Fabric Data Agents at Ignite 2025 blog.

Leveraging Prep your data for AI for semantic models in Fabric data agent

Fabric data agent now fully supports Prep for AI customizations in Power BI semantic models. When you add a semantic model to the Fabric data agent, any customization you make in Prep for AI is automatically respected. This includes AI instructions, verified answers, and data schemas that you’ve defined in the semantic model.

Using Prep for AI helps you guide the model to focus on the right tables, use preferred terminology, and rely on verified information. As a best practice, review and refine these settings before connecting your semantic model to the data agent. This ensures more accurate and context-aware responses when users query the data agent.

Learn more about how to leverage Prep for AI when adding a semantic model to your Fabric data agent.

Fabric Data Agents to Support SQL Databases

Data Agent in Microsoft Fabric has expanded its capabilities to include direct support for SQL Database Artifacts, eliminating the need for intermediary lakehouses. This enhancement allows users to connect Fabric SQL databases in addition to other mirrored SQL databases directly to Data Agent.

Once connected, Data Agent leverages its NL2SQL engine to translate natural language queries into SQL, enabling instant insights through the SQL Analytics Endpoint. This integration simplifies workflows and accelerates decision-making by making structured data in operational databases accessible through conversational AI.

Fabric Data Agent Entry Points in Data Warehouse and Lakehouse

Connecting your Lakehouses and Warehouses to the Data Agent is now dramatically simpler. A new entry point in the ribbon of both Lakehouse and Warehouse interfaces allows you to instantly create a Data Agent or add your data source to an existing one.

This streamlined experience automates both Data Agent creation and Data Source addition, significantly reducing the time and effort required to get started with building and integrating data agents.

Upgrade your machine learning tracking system

We’ve made a foundational upgrade to the machine learning tracking system in Microsoft Fabric. This update prepares your workspace for upcoming enhancements, such as advanced tracking capabilities and streamlined cross-workspace model logging, while keeping your day-to-day experience unchanged.

The upgrade is available as an option and can be initiated at your convenience. You can start upgrading from either an ML artifact or through your workspace settings.

Detailed instructions are available in the Upgrade your machine learning tracking system documentation.

Stay tuned for more improvements as we continue making your data science journey smoother and more rewarding!

Supporting Internal Python Packages in ML Model Endpoints

We’ve made an important improvement to machine learning model endpoints. Previously, real-time endpoints couldn’t use internal libraries—a limitation many users faced. Now, you can activate machine learning model endpoints built using AutoML and FLAML, making it easier to deploy and manage your models with these powerful tools.

Data Warehouse

IDENTITY columns (Preview)

IDENTITY columns (Preview) in Fabric Data Warehouse is a long-awaited feature that simplifies surrogate key generation during data ingestion. IDENTITY columns automatically produce unique values for each new row, eliminating the need for manual key assignment and removing the risk of key duplication and key integrity issues.

Creating a table with an IDENTITY column, inserting a row, and querying its values

This system-managed approach ensures uniqueness across the Fabric Warehouse distributed engine, even when separate data ingestion jobs run in parallel.
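
As a hedged sketch of the flow the caption above describes, run from Python over the warehouse’s SQL endpoint; the connection string is a placeholder and the IDENTITY(1, 1) syntax follows the SQL Server convention, so confirm the exact preview surface in the linked documentation.

    import mssql_python

    conn = mssql_python.connect("Server=<warehouse-sql-endpoint>;Database=<warehouse>;Encrypt=yes;")
    cursor = conn.cursor()

    # IDENTITY generates the surrogate key; values are unique but not guaranteed sequential
    cursor.execute("CREATE TABLE dbo.Customers (Id BIGINT IDENTITY(1, 1), Name VARCHAR(100))")
    cursor.execute("INSERT INTO dbo.Customers (Name) VALUES ('Contoso')")
    cursor.execute("SELECT Id, Name FROM dbo.Customers")
    print(cursor.fetchall())
    conn.close()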

For more information, refer to the IDENTITY documentation.

Data Clustering (Preview)

Data Clustering (Preview) unlocks significant performance gains and reduced consumption for queries. By organizing rows with similar values together during ingestion, Data Clustering enables aggressive file pruning, scanning only files with data that match query predicates.

Comparing a query that uses a regular table with one that uses Data Clustering

This optimization is powered by a sophisticated algorithm that preserves data locality across multiple dimensions, outperforming traditional techniques like lexicographical indexes.

For more information, refer to the Data Clustering documentation.

Warehouse Snapshots in Microsoft Fabric Data Warehouse (Generally Available)

Managing data consistency during ETL has always been a challenge for our customers. Dashboards break, KPIs fluctuate, and compliance audits become painful when reporting hits ‘half-loaded’ data.

With Warehouse Snapshots, Microsoft Fabric solves this by giving you a stable, read-only view of your warehouse at a specific point in time. Think of this as a true time travel database, an industry-first capability that sets us apart.


For more information, refer to the full blog post on Warehouse Snapshots in Microsoft Fabric (Generally Available).

Varchar(max) support

Fabric Data Warehouse and SQL analytics endpoints for mirrored artifacts now support large string and binary data using VARCHAR(MAX) and VARBINARY(MAX) types.

Data Warehouse lets you ingest, store, process, and analyze large descriptive text, logs, JSON, or spatial data, with up to 16 MB per cell, without hitting size limits for most of the data common in warehouse scenarios.

The SQL analytics endpoint for mirrored artifacts ensures large values from source systems are read without the previous 8 KB truncation. For new tables, string and binary Delta types are mapped to VARCHAR(MAX) and VARBINARY(MAX) SQL types in the SQL analytics endpoint. Existing tables with columns already storing large objects can be recreated to adopt the new data type, or they will be automatically upgraded to VARCHAR(MAX) on the next schema change. This is critical for preventing JSON corruption in mirrored Cosmos DB artifacts, where truncation could break queries due to malformed JSON. Stay tuned to the blog for updates on VARCHAR(MAX) support in Lakehouses.

Real-time Intelligence

Introducing the HTTP and MongoDB CDC Connectors for Eventstream

Two new connectors for Eventstream, HTTP and MongoDB CDC, make it easier than ever to bring diverse, real-time data into Fabric Real-Time Intelligence (RTI) for real-time analytics. You can find both connectors in the Real-Time Hub starting today.

HTTP Connector for Eventstream

The HTTP connector provides a no-code, configurable way to stream data from any REST API directly into Eventstream for real-time processing. With just a few clicks, you can:

  • Continuously pull data from SaaS platforms and public data feeds (e.g., CoinGecko, OpenWeather).
  • Automatically parse JSON responses into clean, structured events.
  • Get started quickly by selecting a predefined public API, entering your API key, and letting Eventstream prefill the required headers and parameters.

The MongoDB CDC Connector for Eventstream

The MongoDB CDC connector streams Change Data Capture (CDC) events from any MongoDB deployment— on-premises, cloud-hosted, or MongoDB Atlas —into Eventstream. It allows you to capture real-time database changes and stream them directly into Eventstream for immediate processing and analytics.

For more information, refer to the documentation on Add MongoDB CDC source to an eventstream and Add HTTP source to an eventstream.

Please note: We have begun rolling out this feature; it will be available in all regions by mid-December.

Start exploring the new connectors today and happy streaming!

Introducing Cribl Source in Real-Time Intelligence Eventstream (Preview)

The exchange of real-time data across different data platforms is becoming increasingly popular. We are pleased to announce that the Cribl source is now available in Real-Time Intelligence, allowing real-time data to flow into Fabric RTI Eventstream through our partnership with Cribl.

You can now add the Cribl source to Eventstream to create the Kafka endpoint. Then, use this Kafka endpoint information in Cribl to establish the connection. In the Cribl portal, select ‘Fabric Real-Time Intelligence’ as the destination and configure it with the Kafka details.

By integrating Cribl, you can utilize Cribl data sources to access real-time data from various platforms such as Splunk, SQS, and more, and then bring that data into Fabric, thereby broadening the range of data sources available to Real-Time Intelligence. Partnering with third-party data platforms improves flexibility and interoperability, enabling organizations to easily unify streaming data within a single analytics environment.

As a result, customers can take advantage of Fabric RTI’s capabilities for thorough analysis and insights, regardless of where their data comes from, and teams can quickly adapt to new business opportunities by integrating additional sources from third-party providers.

To learn more, and for guidance on getting started, refer to the Cribl Source documentation.

Eventstream Activator destination (Generally Available)

In today’s data-driven world, speed matters. Real-time signals are everywhere—customer clicks, IoT telemetry, operational metrics—but they only create value when they lead to action. That’s why we’re excited to share the Eventstream Activator destination, now generally available (GA) in Microsoft Fabric Real-Time Intelligence.

With Eventstream Activator destination, you can detect important patterns in your live data and trigger the right action automatically—no code required. Ingest and transform events in Eventstream, route them to Activator, and define simple rules for alerts, notifications, or workflows. It’s the fastest path from streaming signal to business outcome.

How it Works

Ingest & Transform in Eventstream – Connect diverse sources (telemetry data, apps, CDC, IoT, Fabric events etc.), then filter, enrich, or aggregate in Eventstream.

Route to Activator – Add Activator as a destination in your Eventstream topology. Choose the transformed stream you want Activator to monitor.

Detect & Act – In Activator, create rules for the patterns or thresholds you care about and configure actions (alerts, Teams notifications, workflows, and more).

Please note: We have begun rolling out this feature; it will be available in all regions by mid-December.

To learn more about setting up Activator destination in Eventstream, refer to the Add a Fabric Activator destination to an eventstream documentation.

Capacity Overview events (Preview)

This new capability provides administrators with real-time insights into the health and utilization of their Microsoft Fabric capacities.

Capacity Overview Events include two event types:

  • Capacity Summary – Delivers a point-in-time snapshot of capacity usage, based on a smoothed utilization metric.
  • Capacity State – Captures changes in capacity state, such as transitions to Paused or Overloaded.

With these events, organizations can proactively monitor capacity behavior, trigger automated workflows using Activator, or route events to Eventhouse or Lakehouse through Eventstreams for deeper analysis and long-term retention.


To learn more, check out the blog post and follow the step-by-step tutorial to get started. You can also explore the Fabric Capacity Events Accelerator, which includes prebuilt dashboards, templates, and best practices.

Entity Diagram in Eventhouse KQL Database (Preview)

As your KQL database grows, tables gather data from several Eventstreams, functions connect different tables, update policies move and transform data, and materialized views quietly keep aggregated data up to date, all working together behind the scenes.

It’s powerful, but it can also be hard to see the full picture.

That’s exactly why we built the Entity Diagram – to give you a simple, visual way to explore how everything in your database connects. No more guessing where data comes from or where it goes, no more wondering what depends on what – just a clear view that helps you understand, troubleshoot, and design with confidence.

What is the Entity Diagram?

The Entity Diagram gives you a visual map of your database. It shows the relationships between your entities: tables, functions, materialized views, update policies, shortcuts, and continuous exports. It also shows cross-database relationships and Eventstream items that serve as data sources for tables, so you can instantly understand how data flows through your system.

You can view details, follow connections, and see what depends on what – all in one place.

View Ingestion Details

You can now see the number of records ingested for each table or materialized view. If the ingestion comes from an Eventstream, you can also see a node for the Eventstream item. If you click on the Eventstream, it will take you directly to it. You can track how data flows through update policies and how it is aggregated into materialized views, giving you a complete view of your data flow.

Spot Schema Violations

The Entity Diagram also flags schema violations between entities, such as broken references from functions to tables or columns, or update policies referencing functions or source tables that no longer exist. This helps you quickly identify and fix issues that might disrupt your data flow.

What’s in it for you

Whether you are a developer, data engineer, or analyst, the Entity Diagram helps you understand your KQL database clearly. You can explore how tables, functions, materialized views, update policies, and other entities are connected, track data flow including the number of records processed through tables and passed along update policies or materialized views, identify schema violations, and make confident changes with a complete understanding of your database.

To learn more, check out the View an entity diagram in KQL database (preview) documentation.

Automate advanced actions with Fabric Activator

Fabric Activator enables you to automatically take actions or send alerts whenever certain data conditions are met.

What’s new?

  1. You can now automate business logic by running User data functions (Preview) and Spark job definitions as actions when your data changes.
  2. Access advanced actions for automation in Real-Time Hub (coming soon in Real-Time dashboard, Eventhouse, and KQL queryset), including:
  • Pass parameter values to Functions, Spark job definitions, Pipelines, and Notebooks.
  • Create custom actions (Power Automate workflows) that you can reuse across Activator rules.
  • Send Teams messages to group chats and channels.
  • Customize email and Teams recipients and messages.

How it works

You can access this feature by creating an Activator item, or through embedded experiences such as Real-Time Hub (coming soon in Real-Time dashboard, Eventhouse, and KQL queryset). In Real-Time Hub, for example, you can see a ‘Set alert’ button when browsing data sources like Azure events, Fabric events, or Eventstreams. Selecting ‘Set alert’ opens a side pane where you can set up conditions and actions, including notifications, Fabric activities, and custom actions. By creating the rule, you can automate your business process using Activator.

Try it out and share your feedback!

To try this feature now, head over to Fabric. We look forward to hearing from you; if you have any feedback or ideas, join the discussion in the Activator community.

Operations agent (Preview)

With operations agent, users can create autonomous agents that monitor data, infer goals, and recommend actions. These agents dynamically construct plans based on business objectives, data sources, and available actions, keeping a human in the loop while enabling automation when desired.

To get started with operations agent, you give it access to specific Eventhouse sources, define business goals and instructions, and specify actions integrated through Power Automate.

The agent then builds a plan to achieve those goals. It sets up monitoring rules, always grounded in data from the Eventhouse, and then watches for events that match those rules behind the scenes.

When those conditions are met, the agent wakes up and starts to reason over the data. It looks at the actions it’s been configured with and makes recommendations back to the user based on what it deems most appropriate at the time, along with context about what caused the alert to fire. These are presented through Teams to notify users and keep a human in the loop.


You can try the operations agent out now! You need to enable both the Copilot/AI and operations agent tenant-level settings, and have a workspace backed by a Fabric capacity (not Trial).

More information is available in our documentation. Learn about our other Real-Time Intelligence announcements.

Imagery file support in Maps (Preview)

Maps now support imagery files such as Cloud Optimized GeoTIFF (COG) and Raster PMTiles, in addition to vector spatial formats, enabling richer geospatial analysis with imagery references. Upload your imagery file to a Lakehouse and simply right-click ‘Show on map’ to display it in the map view, offering the same seamless experience as existing static spatial formats like GeoJSON.


(Source: European Space Agency, Copernicus Services. (2025). Sentinel-2 Level 2A. Retrieved from Sentinel-2 Level-2A | Planetary Computer)


(Source: National Oceanic and Atmospheric Administration. (2024). NOAA Chart Display Service- ncds-20c. Retrieved from NCDS MBTiles Download)

To learn more, refer to the Create a map (preview) documentation.

Data labeling in Maps (Preview)

Data labeling settings are now available for all geometry types—points, lines, and polygons—reflecting your feedback.

These new customization options make it easier than ever to surface critical attributes directly on the map, so you can highlight key details with just a few clicks and keep your spatial insights front and center.


(Source: Department of Education. (2024). Public School Locations 2021-22. Retrieved from Public School Locations 2021-22 – Catalog)


(Source: Department of Agriculture, U.S. Forest Service. (2017). National Forest System Trails (Feature Layer). Retrieved from National Forest System Trails (Feature Layer) – Catalog)

To learn more, refer to the Customize a map (preview) documentation.

Copilot-assisted real-time data exploration (Preview)

Copilot-assisted real-time data exploration enables users to analyze live data using natural language. This new feature integrates Copilot into Real-Time Dashboards and Real-Time Hub, allowing you to explore the data behind dashboard tiles and tables simply by asking questions.

You can instantly filter, break down the data, compare timeframes, and uncover insights without writing any query language.

With advanced no-code tools, you can also manually adjust and fine-tune the visuals generated by Copilot. Once you are done exploring the data and are pleased with the visual representing the derived insight, you can save it as a new tile on a dashboard.

To begin, simply open a Real-Time Dashboard in Fabric and use the Copilot pane or the inline Copilot prompt on any tile to start chatting with your data.

To learn more, refer to the Explore real-time dashboard data using Copilot documentation.

Data Factory

Enterprise readiness

Snowflake Connector Now Supports Key Pair Authentication

Key pair authentication is now available for connecting with the Snowflake connector. When you create a new Snowflake connection or edit an existing one in ‘Manage connections and gateways’, you’ll find the ‘KeyPair’ authentication option.


After setting up the Snowflake connection with key pair authentication, you can easily use this connection in Pipeline, Copy job, Dataflow Gen2, and Mirroring.

To learn more, refer to the Snowflake connector documentation.

Manual Update for On-premises Data Gateway (Preview)

This new capability simplifies gateway maintenance and helps ensure your environment remains secure and up to date. With this preview, you can now initiate gateway updates manually—either directly from the gateway UI or programmatically through API or script—giving you full control over when and how updates are applied.

The November release serves as the baseline version for this feature, and customers can start performing manual updates beginning in December. This enhancement also paves the way for future support of fully automatic updates.

To learn more, refer to the Update an on-premises data gateway documentation.

Certificate & Proxy Support for Virtual Network Data Gateway (Preview)

The Virtual Network (VNET) Data Gateway, available in preview, enables secure and flexible enterprise connectivity. It allows certificate-based authentication for compliant gateway communication, as well as proxy configurations for environments where direct internet access is restricted, empowering organizations to confidently use the VNET Data Gateway in highly controlled, security-focused network infrastructures.


To learn more about this feature, refer to the Manage virtual network (VNet) data gateways documentation.

Pipelines

Error Insights summary Copilot for Pipeline

The new Error Insights Summary Copilot in Pipeline is designed to make error handling smarter and faster. When dealing with pipelines that fail with dozens or even hundreds of errors, it can be overwhelming to investigate each issue manually. With this Copilot capability, you now get a concise, intelligent summary of all activity errors, complete with categorized insights, root cause analysis, and actionable recommendations.

Whether you’re in the Pipeline Monitoring or Authoring page, simply click the ‘Error Insights’ button for any failed run to activate Copilot. Instead of clicking through each error individually, you’ll instantly see insights into the categorized errors to help you understand what went wrong and how to fix it.

For instance, when a pipeline failed with more than 102 errors, instead of reviewing each error individually, Copilot grouped them into three categories. Each category included an issues summary, root cause analysis, and recommended actions, making the process more efficient and saving time. This feature greatly improves the intuitiveness and productivity of pipeline troubleshooting.


To learn more, refer to the Get started with Copilot for Pipelines documentation.

Natural Language to Generate and Explain Pipeline Expressions with Copilot (Preview)

Building Pipeline expressions can be complex — but it doesn’t have to be. We are introducing a new Copilot capability (currently in preview) that transforms how you create and understand Pipeline expressions in Fabric Data Factory!

What’s new with the Pipeline Expression Builder Copilot?

You’ll find this Copilot built inline in the Pipeline Expression Builder, where you chat with Copilot, just like you would with our other Data Factory Copilot offerings.

You can now generate expressions using natural language. Simply describe what you need in your expression – Copilot will translate your intent into accurate pipeline expressions for you.

You can also use Copilot to explain existing expressions in plain language. No more decoding syntax. Copilot provides clear, contextual explanations, so that you understand what your Pipeline expression is doing.

Pipeline expressions are powerful but often intimidating. This feature helps boost your productivity by reducing manual coding, minimizing errors, and empowering everyone to build robust pipelines confidently.

To learn more, refer to the Copilot Pipeline Expression documentation.

Hierarchical view for pipelines in Monitoring Hub

Managing your complex orchestration workflows just got easier with Hierarchical view for Pipelines in Monitoring Hub! Jobs are often triggered automatically, and pipelines are one of the most common examples of this. With Hierarchical view, you can now:

  • Navigate across layers of jobs: Seamlessly explore upstream and downstream jobs within a Pipeline.
  • Trace dependencies: Quickly locate related jobs and understand how they connect, giving you full visibility into your workflow.

To utilize Hierarchical view in Pipelines, navigate to the Column options in the Monitoring Hub page and toggle on ‘Upstream run’ and ‘Downstream runs’.

You should now be able to see hierarchical views of your Pipeline runs.

This feature empowers you to monitor and troubleshoot with confidence, ensuring smooth operations across all your automated processes.

To learn more, check out our documentation on How to monitor pipeline runs in Monitoring hub.

Connectivity

Spark & Impala 2.0 Connectors (Generally Available)

This release brings enhanced performance and security to Fabric workloads, particularly when working with large datasets in Dataflow Gen 2.

Why does this matter for Dataflow Gen 2?

Implementation 2.0 for the Spark and Impala connectors is now generally available in Microsoft Fabric, built on the open-source Arrow Database Connectivity (ADBC) driver. For Dataflow Gen2, this means:

  • Faster data access: The ADBC-based implementation eliminates serialization and copying, reducing overhead.
  • Secure by design: Memory safety and garbage collection align with modern secure development lifecycle (SDL) standards.
  • Optimized for scale: Perfect for complex pipelines and large-scale transformations in Dataflow Gen 2.

Beginning November 2025, specify [Implementation="2.0"] in your connection settings to utilize these improvements:

  • For Spark: ApacheSpark.Tables("http://server.cloudapp.azure.com:10001/cliservice", 2, [BatchSize=null, HierarchicalNavigation=true, Implementation="2.0"])
  • For Impala: Impala.Database("server.cloudapp.azure.com", [Implementation="2.0"])


For further details, please refer to the Spark Connector and Impala Connector documentation.

Dataflows

Modern Get Data in Excel Desktop

Modern Get Data experience in Excel Desktop is designed to make finding and connecting to your data sources easier than ever. Instead of navigating multiple menus or guessing where to start, you can now access all available data sources in one central place. Simply go to the Data tab in Excel and click Get Data to explore a wide range of connectors. Whether you prefer scrolling through the list or searching directly, finding the right source is now fast and intuitive.

This new experience also includes powerful search capabilities. You just need to type in keywords to quickly locate the data source you need. For example, connecting to your Fabric Lakehouse is now just a few clicks away, and you can immediately load your data into the Power Query editor for transformation. With this streamlined workflow, Excel has become even stronger for your analytics, helping you save time and focus on insights rather than setup.


To learn more about Get Data experience, refer to the documentation.

New importing data flow supported in Modern Get Data Copilot

Copilot in Modern Get Data within Dataflow Gen2 allows you to ingest data from both the OneLake catalog and OneDrive for further analysis. These features make it easier to bring in data for your next transformation using natural language.

When browsing data from the OneLake catalog, you can select your OneLake data from different workspaces by either choosing the data directly in OneLake or searching for your specific OneLake artifact.

Once you choose data from OneDrive or OneLake, you can quickly load it into the Modern Get Data Copilot for further transformation.

To learn more, refer to the Copilot in Modern Get Data documentation.

Fabric AI Functions integration with Dataflow Gen2 (Preview)

These functions bring generative AI capabilities directly into Microsoft Fabric, making it possible to perform advanced AI tasks without needing machine learning expertise. With an intuitive user experience, users can invoke large language models through Fabric’s built-in AI endpoint. The experience is simple and integrated—you can add AI-powered columns using the AI Prompt option in the Add Column tab, and prompts can automatically include the full row as context.


Once you’re inside the AI Prompt dialog, you can provide the prompt of your choice and select which columns from your table you wish to pass as added context for the prompt.


This feature is designed to make AI accessible across Fabric experiences, including Dataflow Gen2 and notebooks.

The AI Prompt feature will begin rolling out globally the first week of December.

To learn more, refer to the Fabric AI Functions in Dataflow Gen2 documentation.

Power Query Language Service IntelliSense in Dataflow Gen2 (Preview)

IntelliSense powered by the Power Query Language Service is now integrated with Power Query Online and available inside Dataflow Gen2 as a preview feature.


What’s New?

  • IntelliSense Support: Enjoy syntax highlighting, auto-completion, and inline suggestions when writing M scripts in Power Query Online.
  • Improved Authoring Experience: Make editing and creating queries faster and less error-prone with real-time guidance.
  • Consistent Experience: Aligns with the familiar capabilities from Power Query Desktop, now in the cloud.

Why It Matters

This feature helps you:

  • Reduce errors when writing M code.
  • Speed up development with smart suggestions.
  • Work confidently in Dataflow Gen2 without switching tools.

Use Dataflows today and try out this feature!

Airflow job

New API for Apache Airflow Jobs for Files & Requirements

Job management APIs are now available for Fabric Apache Airflow jobs!

These APIs let you directly manage files and requirements within your Airflow projects—upload, update, and organize resources with ease. By streamlining these operations, you reduce manual steps and simplify automation.

Built for flexibility, these APIs help teams automate environment setup and maintain consistent deployments, making it easier to keep Airflow jobs organized and up to date.

For more information, refer to the API capabilities for Fabric Data Factory’s Apache Airflow Job documentation.

Now you can upload files to your Apache Airflow project from the UI

Managing your Apache Airflow workflows is now more convenient with the new UI-based file upload feature. This enhancement allows you to add files directly to your Airflow project through an intuitive interface, streamlining the process of updating configuration files, requirements, or other essential resources.

By eliminating the need for manual uploads via command line or external tools, this feature helps ensure your projects stay organized and up to date with minimal effort. Whether you’re onboarding new data or updating dependencies, the UI makes it easy to keep your Airflow environment running smoothly.


New Apache Airflow Job File Management APIs

The Apache Airflow Job File Management APIs have been released, representing significant progress toward enhancing the efficiency, security, and developer experience of workflow orchestration. These APIs are designed to give you full control over job files in your Apache Airflow environments, enabling seamless automation and integration across your data workflows.

The File Management APIs allow you to:

  • Upload and manage DAG files: Easily add new DAGs or update existing ones.
  • List and retrieve files: Get a complete view of your job files for auditing.
  • Secure file operations: Built-in support for role-based access ensures enterprise-grade security.
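
As an illustration of how these APIs fit into automation, the sketch below pushes an updated DAG file with Python’s `requests`. The endpoint route and payload shape are assumptions made for illustration; the real contract is defined in the documentation linked below.

```python
# Sketch: uploading a DAG file to a Fabric Apache Airflow job via REST.
# The route and JSON payload are illustrative assumptions, not the
# documented contract; placeholders must be replaced with real values.
import base64
import requests

WORKSPACE_ID = "<workspace-id>"
AIRFLOW_JOB_ID = "<airflow-job-id>"
TOKEN = "<entra-access-token>"  # e.g. acquired with the azure-identity package

with open("dags/sales_pipeline.py", "rb") as f:
    content_b64 = base64.b64encode(f.read()).decode()

resp = requests.post(
    f"https://api.fabric.microsoft.com/v1/workspaces/{WORKSPACE_ID}"
    f"/apacheAirflowJobs/{AIRFLOW_JOB_ID}/files",  # assumed route
    headers={"Authorization": f"Bearer {TOKEN}"},
    json={"path": "dags/sales_pipeline.py", "content": content_b64},
    timeout=30,
)
resp.raise_for_status()
print("Upload accepted:", resp.status_code)
```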

To learn more about how to use these APIs, check out our documentation on API capabilities for Fabric Data Factory’s Apache Airflow Job.

Mirroring

Mirroring for Snowflake Iceberg Tables Support

Microsoft Fabric now supports Iceberg tables for Mirroring with Snowflake. Users can seamlessly bring both managed and Iceberg tables from their Snowflake environment into Fabric, enabling unified analytics and data management.

During setup, Fabric automatically detects and distinguishes between managed and Iceberg tables, giving users the flexibility to mirror future tables as they’re created. For Iceberg tables, which reside in customer-owned storage, Fabric allows you to select your preferred storage provider, ensuring secure and direct connectivity to your data.

Once configured, managed tables replicate (with row counts visible as they sync) while Iceberg tables are surfaced via shortcuts in the same mirrored DB. Analysts can preview, query via the SQL endpoint, and join Iceberg and managed tables together just like any other tables in Fabric—no special steps or rewrites. This update streamlines cross‑platform analytics and accelerates time‑to‑insight for Snowflake customers adopting Fabric.
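
To make the “no special steps” point concrete, here is a sketch of querying the mirrored database’s SQL analytics endpoint from Python with pyodbc. The server, schema, and table names are placeholders; which table is managed and which is an Iceberg shortcut depends on your setup.

```python
# Sketch: joining a replicated managed table with an Iceberg shortcut table
# over the mirrored database's SQL analytics endpoint. All names below are
# illustrative placeholders.
import pyodbc

conn = pyodbc.connect(
    "Driver={ODBC Driver 18 for SQL Server};"
    "Server=<endpoint>.datawarehouse.fabric.microsoft.com;"
    "Database=<mirrored-db>;"
    "Authentication=ActiveDirectoryInteractive;"
)

# Iceberg tables surfaced as shortcuts join like any other table.
sql = """
SELECT o.order_id, o.amount, c.customer_name
FROM dbo.orders    AS o   -- managed Snowflake table, replicated
JOIN dbo.customers AS c   -- Iceberg table, surfaced via a shortcut
  ON o.customer_id = c.customer_id
"""
for row in conn.cursor().execute(sql):
    print(row.order_id, row.amount, row.customer_name)
```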

For more information about Iceberg Support in Mirroring for Snowflake, please refer to Snowflake Mirroring Iceberg Support documentation.

Mirroring for SAP (Preview)

With Mirroring for SAP via SAP Datasphere, Microsoft Fabric integrates seamlessly with SAP, using SAP Datasphere’s data extraction capabilities to mirror SAP data directly into Fabric for unified analytics and enhanced business insights. The integration delivers near real-time access to data across the entire SAP application landscape, including SAP S/4HANA (both on-premises and cloud editions), SAP BW and BW/4HANA, as well as cloud solutions such as SAP SuccessFactors and SAP Ariba.

As a result, organizations can rely on up-to-date data for reporting and advanced analytics, seamlessly combine SAP data with other enterprise sources in Fabric, expedite decision-making, and fully leverage Fabric’s comprehensive analytical capabilities alongside their SAP investments.

To learn more about mirroring for SAP, refer to the Mirrored database from SAP documentation.

Mirroring for SQL Server (Generally Available)

This milestone marks a significant step forward in our mission to provide seamless, near-real-time data replication capabilities, empowering you to derive maximum value from your SQL data with Microsoft’s unified data platform.

Mirroring for SQL Server 2016 through 2022, as well as the newest version, SQL Server 2025, offers continuous data replication from these sources into OneLake, ensuring that your data remains current and readily accessible for advanced analytics and reporting without complex ETL processes.

To learn more and to get started, refer to Microsoft Fabric Mirrored Databases from SQL Server documentation.

Mirroring for Cosmos DB (Generally Available)

Azure Cosmos DB Mirroring in Fabric allows you to seamlessly integrate your existing operational workloads with the analytical capabilities of Microsoft Fabric. Take advantage of the innovations in Fabric with SQL queries over JSON data and build real-time intelligence over your existing transactional data. Integrate with the Microsoft Fabric ecosystem of services, as well as Copilot-powered Microsoft Power BI, all without having to stitch together separate services.

Azure Cosmos DB Mirroring in Fabric also allows you to select which containers to mirror, giving you total workload isolation and complete control over the data you build analytics on. You get the best of both worlds: the SLA-backed latency and availability you have come to expect from Azure Cosmos DB, combined with Microsoft Fabric’s array of analytical services and features, making it even easier to bring your existing operational data into Fabric and make it accessible across the ecosystem.
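
As a concrete illustration of SQL over JSON, the sketch below runs T-SQL against the mirrored database’s SQL analytics endpoint with pyodbc. The container, column, and property names are placeholders; whether a nested property arrives as JSON text depends on your data.

```python
# Sketch: flattening a nested JSON property from mirrored Cosmos DB data
# with JSON_VALUE. Names are illustrative placeholders.
import pyodbc

conn = pyodbc.connect(
    "Driver={ODBC Driver 18 for SQL Server};"
    "Server=<endpoint>.datawarehouse.fabric.microsoft.com;"
    "Database=<mirrored-cosmos-db>;"
    "Authentication=ActiveDirectoryInteractive;"
)

sql = """
SELECT JSON_VALUE(shippingAddress, '$.city') AS city,
       COUNT(*)                              AS orders
FROM dbo.orders  -- mirrored Cosmos DB container
GROUP BY JSON_VALUE(shippingAddress, '$.city')
"""
for city, orders in conn.cursor().execute(sql):
    print(city, orders)
```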

To learn more about Mirroring Azure Cosmos DB, refer to the documentation.

Mirroring for PostgreSQL (Generally Available)

This release features enhancements such as support for PostgreSQL flexible servers hosted behind VNETs or Private Endpoints, as well as Microsoft Entra ID authentication for source database connections. Additionally, support for high-availability enabled servers has been introduced, ensuring business continuity for mirroring sessions through seamless failover—an advancement particularly valuable for enterprise scenarios.

With Azure Database for PostgreSQL Mirroring in Fabric, you can run all Fabric analytical workloads and capabilities on near-real-time replicated data from your transactional sources without impacting the performance of your production databases. This enables teams to unlock deeper insights and drive faster decision-making using the freshest data, all while maintaining the stability and responsiveness of operational workloads.

To learn more, please refer to the Mirroring Azure Database for PostgreSQL flexible server documentation.

Mirroring for Azure SQL Database – support for UAMI (Preview)

Fabric Mirroring of Azure SQL Database with a User Assigned Managed Identity (UAMI) is now in preview. To use UAMI, customers need to provide additional parameters to Fabric and ensure that the primary identity on SQL has permission to publish to the mirrored database artifact in Fabric whenever the primary identity changes.

To learn more and for guidance on getting started, refer to the Tutorial on Azure SQL Database.

Copy job

Expanded CDC Support for More Sources & Destinations

Copy job now supports CDC (Change Data Capture) for even more sources, including SAP via Datasphere, Snowflake, and Google BigQuery. With this enhancement, you can automatically capture inserts, updates, and deletions from these sources and replicate them to supported destinations—no watermark columns required, no manual refreshes, and no extra effort. This makes your data ingestion faster, more efficient, and more reliable.

In addition, Copy job now supports merging CDC data into more destinations, including Fabric Lakehouse. You can seamlessly merge inserts, updates, and deletions from supported sources into Fabric Lakehouse tables, ensuring your data is always up to date.

What’s more, the monitoring experience of Copy job has been enhanced: you can now access more detailed statistics for each run, including watermark values, load type, and row counts for inserts, updates, and deletions, giving you full visibility over your Copy job.

To learn more, refer to the Change data capture (CDC) in Copy Job (Preview) documentation.

Full & Incremental Copy of Data Subsets with Database Queries

You can now copy subsets of data from your tables using database queries, unlocking a wide range of data ingestion scenarios. For example:

  • Copy only data for a specific region from a table with a region column to ensure compliance in data ingestion.
  • Copy only the top N rows for testing or sampling.

More importantly, this feature supports both full and incremental copies on table subsets based on your custom queries, allowing flexible data selection and filtering before loading. This makes your data ingestion more efficient, precise, and tailored to your needs.
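
The queries you supply are plain SQL against the source. Below are two hypothetical examples (the table and column names are made up) of the kind of statements that match the scenarios above:

```python
# Hypothetical source queries for a Copy job; dbo.Sales, Region, and
# OrderDate are illustrative names, not a real schema.

# Compliance: copy only one region's rows.
region_query = """
SELECT *
FROM dbo.Sales
WHERE Region = 'EMEA'
"""

# Sampling: copy only the newest 1,000 rows for testing.
sample_query = """
SELECT TOP (1000) *
FROM dbo.Sales
ORDER BY OrderDate DESC
"""
```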

Please start by trying it with Azure SQL DB — support for more connectors will be added soon.

To learn more, refer to What is Copy job in Data Factory.

Truncate Destination before Full Copy

You can now optionally truncate destination data before the full load, ensuring your source and destination are fully synchronized without duplicates.

By default, Copy job does not delete any data in your destination. When you enable this option:

  • The first run of incremental copy will truncate all data in the destination before loading the full dataset.
  • Subsequent incremental copies will continue to append or merge data without affecting existing records.
  • If you later reset incremental copy to full copy, enabling this option will again clear the destination before loading.

This approach not only ensures that your destination remains clean, fully synchronized, and free of duplicates, but also delivers significant performance improvements during full loads, providing a reliable foundation for your data ingestion solution.

To learn more, refer to the What is Copy job in Data Factory documentation.

Copy Multiple Folders in one Copy job

You can now copy multiple file objects, including multiple folders or a combination of folders and individual files, in a single Copy job. This makes your development work more efficient, eliminating the need to create multiple Copy jobs to achieve the same result.

To learn more, refer to the What is Copy job in Data Factory documentation.

Connection Parameterization with Variable library for CI/CD (Generally Available)

In Simplifying Data Ingestion with Copy Job – Connection Parameterization, Expanded CDC and Connectors, we introduced the ability to deploy the same Copy job across multiple environments, using the Variable library to inject the correct connection for each stage and enabling automated CI/CD by externalizing connection values.

This feature is now fully available in Copy job, allowing you to use it confidently in any production setting. You can easily connect to different data stores for development, testing, and production without having to change your Copy job each time.

To learn more, refer to the CI/CD for Copy job in Data Factory in Microsoft Fabric documentation.

Developer tooling

Fabric VS Code extension is now open source

The Microsoft Fabric extension for Visual Studio Code has been officially released as open source, demonstrating Microsoft’s commitment to community collaboration and developer empowerment. You can find the source code in the vscode-fabric repository. The extension offers essential functionality for managing Microsoft Fabric workloads directly from Visual Studio Code: it handles authentication and tenant management, workspace operations, CRUD actions on Fabric items, and Git integration, and it exposes APIs that satellite extensions can use to add specialized functions.

Figure: Microsoft Fabric extension for VS Code in the extension marketplace

As an open-source project, the extension welcomes contributions on GitHub: submit bug reports, request features, and open pull requests, following Microsoft’s open-source code of conduct.

Closing

Thank you for exploring these updates with us. To see the features in action, check out the November monthly update video, which is packed with demos!
