Microsoft Fabric Updates Blog

Welcome to the Fabric September 2025 Feature Summary! This month’s update is packed with exciting enhancements, such as new certification opportunities, the Power BI DataViz World Championships at FabCon Vienna, and major advancements in the Fabric Platform. Highlights include the general availability of the Govern Tab and Domains Public APIs and expanded Microsoft Purview protection and data loss prevention policies. Dive in to discover the latest improvements designed to empower your data experience.

Events and Announcements

Get certified in Microsoft Fabric

Join the thousands of other Fabric users who’ve collectively achieved over 50,000 certifications for the Fabric Analytics Engineer and Fabric Data Engineer roles. To celebrate FabCon Vienna, we are offering the entire Fabric community a 50% discount on exams DP-600, DP-700, DP-900, and PL-300.

Request your voucher.

Power BI DataViz World Championships – happening live at FabCon Vienna!

Four finalists are taking the stage at FabCon to compete for the title of world champion!

FabCon Vienna Power BI DataViz World Championships – and the winner is…

Congratulations to Paulo Grijó! Read more about the finals and all four finalists.

Fabric Platform

Govern Tab in OneLake Catalog (Generally Available)

In today’s data-driven world, effective data governance is crucial for ensuring the integrity, security, and usability of data. We’re excited to announce the general availability of the governance experience within the OneLake catalog. With this experience we’re empowering individual data owners with the tools and insights they need to govern and secure their data estate within Fabric.

Additionally, you can now chat with your data using Copilot: in the ‘view more’ report, select the Copilot icon to start chatting with your data, gain more insights, drill through for more detail on areas of interest, and get an overall summary of the trends surfaced in the report.


To learn more, refer to the documentation for OneLake catalog overview, Governance in OneLake catalog, and Governance and compliance in Fabric.

Domains Public APIs (Generally Available)

Microsoft Fabric’s data mesh and federated architectures support organizing data into domains and subdomains, helping admins manage and govern data per business context with delegated settings and the ability to create dedicated tags. Domain and subdomain structures enable data consumers to filter and discover content from the areas most relevant to them.

Previous APIs will remain available until March 31st, 2026.


To learn more, refer to the Domains – REST API (Admin) and Domains documentation.
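
If you prefer to script against the new APIs, here is a minimal Python sketch (using the requests and azure-identity packages) that lists domains through the admin endpoint. The endpoint path and the response field names shown here are assumptions to verify against the Domains – REST API (Admin) reference.

    # Sketch: list Fabric domains via the Domains admin REST API.
    # Requires Fabric administrator permissions; verify the endpoint and
    # response shape against the Domains - REST API (Admin) documentation.
    import requests
    from azure.identity import DefaultAzureCredential

    token = DefaultAzureCredential().get_token("https://api.fabric.microsoft.com/.default").token
    resp = requests.get(
        "https://api.fabric.microsoft.com/v1/admin/domains",
        headers={"Authorization": f"Bearer {token}"},
        timeout=30,
    )
    resp.raise_for_status()
    for domain in resp.json().get("domains", []):  # "domains" field assumed
        print(domain.get("id"), domain.get("displayName"))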

Microsoft Purview Protection Policies for Fabric (Generally Available)

Microsoft Purview protection policies provide a powerful and automated way to enforce data governance and security within Microsoft Fabric. These policies leverage sensitivity labels from Microsoft Purview Information Protection to automatically restrict access to sensitive data assets, like Lakehouses or KQL databases, within Fabric. Instead of manually configuring access controls for every data item, organizations can define a single policy that applies a specific level of protection to all data with a given sensitivity label.

By integrating directly with Fabric, these policies allow you to apply granular controls, like blocking access for all but a select group of users, ensuring that only authorized individuals can interact with your most critical data. This automated and centralized approach reduces the burden on data engineers and IT admins, enabling them to focus on building data solutions while the platform handles the enforcement of security and compliance standards.

Refer to the Protection policies in Microsoft Fabric (preview) documentation to learn more.

Default Sensitivity Label for Domains (Generally Available)

You can now define a default sensitivity label for a domain. Once defined, the label applies automatically to all new items created in that domain, allowing you to use a centralized configuration to reduce human error and ensure consistent data handling across the organization.


Refer to the Domain-level default sensitivity labels documentation to learn more.

Microsoft Purview Data Loss Prevention Policies for Fabric (Generally Available)

In Microsoft Fabric, Data Loss Prevention (DLP) policies are a game-changer for organizations by acting as a proactive defense for sensitive data. These policies automatically identify and protect confidential information—such as personally identifiable information (PII), financial data, or intellectual property—as it’s created or moved within the Fabric environment, including semantic models and structured data in OneLake, such as Lakehouses, mirrored databases and more. This capability is critical for ensuring compliance with various regulations like GDPR and HIPAA, and it helps to prevent costly data breaches that can lead to significant financial loss and reputational damage. By providing real-time alerts to administrators and policy tips to users, DLP policies empower a culture of security awareness and reduce the risk of both accidental and malicious data leaks.

With this release we are announcing the general availability of DLP policies for your OneLake data, which means businesses can embed data security directly into their data workflows, moving beyond reactive measures to a more secure, proactive posture.


Refer to the Get started with data loss prevention policies for Fabric and Power BI documentation to learn more.

Variable library (Generally Available)

Beginning September 30th, the Variable library item will officially be generally available and supported as such in pipelines. Additionally, support for the Variable library is expanding beyond Lakehouse shortcuts.

Variable library can now also be used in:

  • Copy job: replace static source and destination values with references to Variable library variables.
  • Dataflow Gen2: the Query Editor now supports Variable library variables, enabling parameterization of elements such as source paths and DAX expressions.


Launched in April 2025, the Variable library has delivered significant value by enabling custom values and configurations across release stages. It supports dynamic fields in data pipelines, variables in notebook code via NotebookUtils, parameterization of a notebook’s default Lakehouse via %%configure, and Lakehouse shortcut sources.

Ready to streamline your pipeline and notebook configurations? Start using Variable library and explore how it can simplify your workflows.

New Design for Deployment pipeline (Generally Available)

The new design for Deployment pipelines will be generally available beginning September 30th, 2025. As part of this rollout, the previous design will be deprecated and is scheduled for removal in the upcoming quarter. A formal notice will be issued 30 days in advance to ensure a smooth transition.

New Resources and Data Sources Added to the Terraform Provider for Microsoft Fabric

We’re introducing additional resources and data sources in the Terraform Provider for Microsoft Fabric, enabling broader Infrastructure as Code (IaC) coverage. With these additions, teams can automate even more aspects of their Fabric environment.

Newly Added Resources and Data sources

  • Connections
  • OneLake Shortcut implementation
  • Dataflow Gen2 implementation
  • Digital twin builder implementation
  • Apache Airflow Job implementation
  • Deployment Pipeline implementation
  • Copy job implementation
  • Mounted Data Factory implementation
  • Folders implementation
  • Deployment Pipeline Role Assignment
  • Warehouse snapshot

By expanding the set of supported resources, the Terraform Provider for Fabric makes it easier to:

  • Standardize automated deployment for Fabric.
  • Strengthen governance and security with automated role assignments.
  • Enable collaboration by treating Fabric configurations as versioned code.

Getting Started

  1. Upgrade to the latest version of the Terraform Provider for Microsoft Fabric.
  2. Check the provider documentation for syntax and usage examples.
  3. Add these new resources to your Terraform configuration (main.tf) to start managing them as code.

Fabric CLI is now Open Source!

Fabric CLI provides developers with a fast, scriptable, and intuitive way to navigate and operate Microsoft Fabric — whether locally or in CI/CD pipelines. Since its debut earlier this year, it’s already reshaping how teams automate workflows and manage their data estate.

Now, we’re taking the next step by opening it up to the community. This isn’t just about sharing code — it’s about unlocking the innovation of our developer community. The CLI is built with AI-assisted development in mind, so contributors can move faster than ever: surfacing real needs, building new capabilities, and shaping the CLI around what matters most to them.

We believe the best developer tools are built with the community, not just for it. So, if you’ve got an idea, a use case, or a feature request, jump in. Open an issue, suggest a feature, or contribute directly.


Example of creating an issue in the newly open-sourced repo of Fabric CLI

Check out the Fabric CLI repo. Let’s build the future of Fabric CLI, together.

Fabric CLI v1.1.0 is here

We’ve shipped v1.1.0 with a focus on usability, reliability, connectivity, and groundwork for AI-assisted contributions based on your top asks.

  • Output formatting (JSON): machine-readable output for automation and pipelines.
  • Folder support: organize items in folders and subfolders with predictable, path-like operations.
  • Command and argument autocomplete: faster interactive use with fewer typos.
  • Context persistence (command-line mode): cd once; subsequent commands run in that context.
  • Workspace private link support: tighter network boundaries for secure environments.
  • AI-assisted contributions: foundations that make it easier for AI agents (and humans!) to add new commands and capabilities.

Plus, a round of quality fixes and safety improvements.

Banner for v1.1.0 of the Fabric CLI, introducing new features and capabilities.

Refer to the Full changelog.

Introducing Fabric MCP (Preview)

Fabric MCP is a developer-focused Model Context Protocol (MCP) server that enables AI-assisted code generation and item authoring in Microsoft Fabric. It streamlines how developers build around Fabric’s public APIs and create Fabric items using built-in templates and best-practice instructions, reducing coding time, minimizing errors, and boosting productivity. Designed for agent-powered development and automation, it integrates with tools like VS Code and GitHub Codespaces as part of the Microsoft MCP initiative, and is fully open and extensible.

Asking GitHub Copilot in VS Code to generate a Python script using the Fabric MCP.

Get started with the catalog of official Microsoft MCP (Model Context Protocol) server implementations for AI-powered data access and tool integration.

Fabric Extensibility Toolkit (Preview)

The Microsoft Fabric Extensibility Toolkit is the next evolution of the Workload Development Kit. The new toolkit represents a significant step forward in enabling organizations to create and integrate data applications that appear directly in Microsoft Fabric. Organizations and software development companies can now build Fabric items within days, or even hours when using the Copilot-optimized Starter Kit.

By simplifying the development process and providing robust integration capabilities with the platform, the Extensibility Toolkit enables you to bring your data applications to Fabric Workspaces where your data and users are.


The Extensibility Toolkit builds on the foundation of the Workload Development Kit while introducing several key improvements and new capabilities.

Key advantages of the toolkit:

  • Bring Your App to Fabric: publish your organization’s data applications directly into Fabric workspaces as custom items.
  • Leverage Your Platform: seamlessly integrate Power BI reports, Spark jobs, the Fabric SaaS foundation, and more.
  • Rapid Development, Low Maintenance Cost: launch in hours with LLM-ready samples and streamlined tooling.

We look forward to seeing what you’ll build with the Microsoft Fabric Extensibility Toolkit. Whether you’re creating specialized data applications, custom visualizations, or data integrations, the Extensibility Toolkit will make your journey straightforward.


As part of this launch, we have also created a new Fabric Community Repository. This repository contains a wide variety of item types built with the Extensibility Toolkit that you can add to your tenant.

The first release contains:

  • Package Installer: allows users to deploy configured items (e.g., Lakehouse, Notebook) with definitions, data, shortcuts, and much more.
  • OneLake Explorer: lets you view and edit the OneLake storage of any Fabric data item directly on the platform.

Workspace-Level Workload Assignment (Preview)

This capability allows workspace admins to add additional workloads directly to their workspaces, eliminating the need for tenant or capacity-level setup.

Key highlights include:

  • Assign workloads from the Workloads Hub directly to a workspace.
  • Enable multiple workloads in a single workspace without impacting the rest of the tenant.
  • Maintain governance and security with tenant-level controls and Entra ID consent.

This update gives teams more flexibility to innovate while preserving organizational control. Refer to the documentation for how to Add a workload in the workload hub.

Fabric Multitasking Gets a Developer-Friendly Upgrade (Preview)

We’ve rolled out a set of UI enhancements in Fabric aimed at making multitasking smoother and more intuitive. Inspired by modern IDEs, these updates include:

  • Horizontal tabs for open items with clear labels.
  • Support for multiple active workspaces with color coding and numeric labels.
  • A new Object Explorer for structured navigation across open workspaces.
  • An increased open item limit, raised beyond the previous cap of 10, providing enhanced flexibility for developers managing multiple resources simultaneously.

These changes address common pain points around navigation, context switching, and multitasking — making Fabric more aligned with how developers work every day.

These improvements apply only to the Fabric experience, and do not affect the Power BI experience.

Learn more about the new developer-friendly experience.

OneLake

OneLake Catalog secure tab

The OneLake catalog now features a dedicated Secure tab, offering a centralized view of security settings across your Fabric items. The Secure tab provides two powerful lenses into your security setup: users and security roles. The View Users page combines user permissions across workspaces, letting you identify users with privileges they shouldn’t have.

The View Security Roles page gives a holistic view of OneLake security roles across workspaces and item types. You can easily view existing OneLake security roles and even make updates to them inline. Powerful filtering options make it easy to find the exact roles and items you are looking for. It’s a streamlined experience designed for transparency and control.

You can get started with the Secure tab today or check out OneLake security access control model (preview) documentation.

Data Engineering

Fabric User Data Functions (Generally Available)

This feature provides a Fabric-native platform to host and run your business logic as Python functions that can be invoked from other Fabric experiences and external applications.

New Features in Fabric User Data Functions

With Fabric User Data Functions now generally available, we’re introducing the following new features:

  • Test your functions using Develop mode: This feature allows you to execute your functions in real-time before publishing them.
  • OpenAPI spec generation in Functions portal: You can access the OpenAPI specification for your functions using the Generate code feature in the Functions portal.
  • Async functions and pandas support: You can now create async functions to optimize the execution for multi-task functions. Additionally, you can now pass pandas DataFrame and Series types as parameters to your functions using the Apache Arrow format.
Animated GIF showing the interface of Fabric User Data Functions.
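
For context, functions hosted in a User Data Functions item follow the Python programming model described in the documentation. The sketch below is a minimal example; the fabric.functions module and decorator names should be confirmed against the linked docs, and async functions and pandas DataFrame/Series parameters follow the same decorator pattern.

    # Minimal sketch of a Fabric User Data Function (module and decorator
    # names as described in the User Data Functions documentation).
    import datetime
    import logging

    import fabric.functions as fn

    udf = fn.UserDataFunctions()

    @udf.function()
    def hello_fabric(name: str) -> str:
        logging.info("Python UDF invoked.")
        return f"Welcome to Fabric Functions, {name}, at {datetime.datetime.now()}!"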

To learn more, refer to the What is Fabric User data functions (Preview)? documentation.

Notebook Integration with User Data Functions (Generally Available)

Building on the User Data Functions preview, we’ve made considerable advancements, including native support for pandas DataFrames and Series powered by Apache Arrow.

With the built-in UDF capabilities in NotebookUtils, you can:

  • Browse all functions within a UDF item.
  • Invoke specific functions directly from your notebook.
  • Explore metadata such as parameters, signatures, and return types with helper methods.
  • Quickly discover functions using IntelliSense and autocomplete.
  • Work seamlessly across multiple languages, including Python, PySpark, Scala, and R.

Additionally, we are introducing native support for pandas DataFrames and Series, enabled by deep integration with Apache Arrow.

  • Faster performance with Arrow-optimized data handling.
  • Scalability to process large-scale datasets.
  • Seamless compatibility with existing pandas workflows.

You can now pass pandas DataFrames directly into a UDF, operate on them efficiently, and return results – all with minimal overhead.

To learn more, refer to the Use Fabric User Data Functions with pandas DataFrames and Series in Notebooks documentation.
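
As a rough sketch, invoking a UDF from a notebook with a pandas DataFrame might look like the following. The getFunctions helper plus the item and function names are illustrative assumptions; check the linked documentation for the exact NotebookUtils surface.

    # Sketch: call a User Data Functions item from a Fabric notebook.
    # notebookutils is available by default in Fabric notebooks; the helper
    # and function names below are assumptions for illustration only.
    import pandas as pd

    orders = pd.DataFrame({"city": ["Vienna", "Seattle"], "sales": [120, 340]})

    my_functions = notebookutils.udf.getFunctions("SalesFunctions")  # hypothetical UDF item
    result = my_functions.summarize_sales(orders)  # hypothetical function; pandas passed via Apache Arrow
    display(result)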

Fabric Materialized Lake Views (MLV) – New Features

Materialized Lake Views (MLV) in Microsoft Fabric have received several powerful enhancements designed to improve refresh performance, lineage visibility, environment customization, and operational flexibility. These updates help organizations optimize data workflows, ensure transparency, and tailor resource usage to specific workload needs.

  • Optimal Refresh: Enhance refresh performance by automatically determining the most effective refresh strategy—incremental, full, or no refresh—for your Materialized Lake Views.
  • Lineage Enhancements: View lineage with source entities across Lakehouses in a Workspace, making it easier to trace data origins and dependencies.
  • Custom Environments for MLV Refresh: Associate specific environments and tailor configurations for varied workload needs, allowing you to optimize performance and resource usage during MLV refreshes.
  • Run on Demand: Perform refreshes on demand for the lineage without scheduling, available via both APIs and the Manage MLV UI.

Ready to get started? Refer to the Materialized Lake Views (MLV) documentation.

Python Notebook (Generally Available)

Python Notebook is a pure Python experience built on top of Fabric notebook, designed for data analysis, visualization, and machine learning, providing a smooth Python coding and execution environment.

Key features

  • Multiple built-in Python kernels: Built-in Python 3.10 and 3.11 with native features like iPyWidget and magic commands. Users can easily switch kernels to match project needs.
  • Cost effective: Runs on a single-node cluster (2 vCores / 16 GB) for small-scale data exploration.
  • Lakehouse & Resources are natively available: Native integration with Fabric Lakehouse and built-in resources, with drag-and-drop code generation.
  • Mix programming with T-SQL: Python notebooks offer an easy way to interact with Data Warehouse, SQL endpoints, and SQL database. Use the cell magic %%tsql or the line magic %tsql to run T-SQL queries in the Python runtime.
  • Support for popular data analytics libraries: includes DuckDB, Polars, and scikit-learn, providing a comprehensive toolkit for data manipulation, analysis, and machine learning.
  • Advanced intellisense: Powered by Pylance and Fabric’s language services for a modern coding experience.
  • NotebookUtils & Semantic link: APIs for leveraging Fabric and Power BI in code-first workflows.
  • Rich Visualization Capabilities: Built-in table/chart preview plus Matplotlib, Seaborn, Plotly, and PowerBIClient.
  • Common Capabilities for Fabric Notebook: All the Notebook level features are naturally applicable for Python notebook, such as editing features, AutoSave, collaboration, sharing and permission management, Git integration, import/export, etc.
  • Full stack Data Science Capabilities: Supports Data Wrangler, MLflow, and Copilot for advanced analytics and ML.

For more details, please refer to the documentation for Use Python experience on Notebook.
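
As a small illustration of the single-node exploration Python notebooks are designed for, the sketch below combines pandas with DuckDB, one of the built-in analytics libraries mentioned above.

    # Quick, single-node exploration in a Python notebook using built-in libraries.
    import duckdb
    import pandas as pd

    orders = pd.DataFrame({"region": ["EU", "EU", "US"], "amount": [120.0, 80.5, 210.0]})

    # DuckDB can query in-memory pandas DataFrames directly, which suits the
    # 2 vCore / 16 GB single-node session used by Python notebooks.
    summary = duckdb.query(
        "SELECT region, SUM(amount) AS total FROM orders GROUP BY region ORDER BY total DESC"
    ).to_df()

    print(summary)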

NotebookUtils new APIs (Generally Available)

These APIs have been designed to streamline and enhance your workflow by providing powerful programmatic capabilities for managing notebooks and lakehouse assets, as well as improving performance and usability.

  • RunMultiple: You can use notebookutils.notebook.runMultiple() to run multiple notebooks in parallel or in a predefined topological order. It uses a multi-threaded execution model within the same Spark session, so all notebook runs share the same cluster, which can significantly improve compute resource utilization.
  • Fastcp: The notebookutils.fs.fastcp() provides a more efficient alternative to the traditional cp command. For heavy data workloads in Fabric, using fastcp is highly recommended to boost performance and save time.
  • Notebookutils.notebook CRUD APIs: You can manage Notebook items programmatically—create, read, update, and delete them—without manual steps. These APIs make it simple to integrate notebook operations directly into your data workflows.
  • Lakehouse utilities: notebookutils.lakehouse provides CRUD capabilities for Lakehouse assets such as tables and files. These utilities provide create, read, update, and delete operations, allowing you to focus on building solutions.
  • Runtime.context: With notebookutils.runtime.context you can get the context information of the current live session, including the notebook name, default lakehouse, workspace info, whether it’s a pipeline run, and more.

For more details, please refer to the documentation for NotebookUtils (former MSSparkUtils) for Fabric.
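
Here’s a quick sketch of how these APIs look inside a notebook session. The notebook names and file paths are placeholders, and the recursive flag on fastcp is assumed to mirror cp, so verify the exact signatures in the NotebookUtils documentation.

    # Sketch of the newly GA NotebookUtils APIs (notebookutils is available
    # by default in Fabric notebooks).

    # Run two notebooks in parallel within the same Spark session.
    notebookutils.notebook.runMultiple(["Prepare_Sales", "Prepare_Inventory"])  # placeholder names

    # Copy a folder with fastcp instead of the slower cp.
    notebookutils.fs.fastcp("Files/raw", "Files/staged", True)  # recursive flag assumed to mirror cp

    # Inspect the current live session context.
    print(notebookutils.runtime.context)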

Advanced Python intellisense – Pylance on Notebook (Generally Available)

This integration enhances Python development by providing intelligent code completions, precise error detection, and comprehensive code insights. It is supported for both PySpark environment and Python environment of notebook, and by default enabled for all notebook users.

Key highlights include:

  • Smarter and more relevant autocompletion
  • Enhanced support for lambda expressions
  • Rich parameter suggestions
  • Detailed hover information
  • Improved docstring rendering
  • Precise error highlighting

Pylance enhances the efficiency, accuracy, and overall experience of writing Python and PySpark code in Fabric notebooks. Enjoy coding with Fabric!

Python notebook real-time resource usage monitoring

With the resource monitor pane, you can now track critical runtime information such as session duration, compute type, and real-time resource metrics—including CPU and memory consumption—directly within your notebook. This feature offers a clear and immediate overview of your active session, helping you stay informed about the resources your code is consuming as you work.

This enhancement provides better visibility into how your Python workloads are utilizing system resources, making it easier to optimize performance, control costs, and avoid unexpected out-of-memory (OOM) errors. By monitoring these metrics in real time, you can quickly identify resource-intensive operations, understand usage patterns, and make informed decisions about scaling or modifying your code. To start using it, simply ensure your notebook language is set to Python and start a session. The resource usage monitor will appear as a pane within the notebook interface, providing a seamless and integrated monitoring experience for all users working with Python code in Fabric notebooks.

Environment Public APIs (Generally Available)

This brings a new set of capabilities, improved contracts, and a migration and deprecation plan for existing APIs.

What’s new?

  • Create, get or update environment with definition.
  • Import, export or remove external libraries (staging & published state).
  • Upload or delete custom library.

How to migrate?

  • Existing APIs with contract update
    • Some APIs have updated response contracts (e.g., Publish environment, List staging/published libraries/Spark settings).
    • A new query parameter, ‘preview’, is introduced to facilitate the transition of request/response contract changes. The ‘preview’ query parameter defaults to ‘True’ until March 31, 2026, so the preview contracts remain available during the transition.
    • For migration, add the query parameter preview=False to start using the new GA contract.
  • Deprecation
    • Two preview APIs (Upload staging libraries and Delete staging libraries) will be deprecated on March 31, 2026. Please migrate to the new APIs as soon as possible.

For a full list of impacted APIs and migration guidance, please refer to the documentation for Manage the environment through public APIs.
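
For example, a migrated call opts into the GA contract simply by adding preview=False to the request. The sketch below (Python with requests and azure-identity) uses an assumed staging-libraries path; confirm the exact endpoints in the linked documentation.

    # Sketch: call an Environment API with the new GA contract (preview=False).
    # The endpoint path is an assumption; see "Manage the environment through
    # public APIs" for the full list of impacted APIs.
    import requests
    from azure.identity import DefaultAzureCredential

    token = DefaultAzureCredential().get_token("https://api.fabric.microsoft.com/.default").token
    workspace_id = "<workspace-id>"      # placeholder
    environment_id = "<environment-id>"  # placeholder

    resp = requests.get(
        f"https://api.fabric.microsoft.com/v1/workspaces/{workspace_id}"
        f"/environments/{environment_id}/staging/libraries",
        params={"preview": "false"},  # opt in to the GA contract before March 31, 2026
        headers={"Authorization": f"Bearer {token}"},
        timeout=30,
    )
    resp.raise_for_status()
    print(resp.json())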

Query Mirrored Databases in Spark Notebook

You can now connect and query mirrored databases (MirrorDBs) directly from your Spark notebooks, making it easier than ever to analyze data across your enterprise.

This update allows you to:

  • Add MirrorDBs as data sources in your notebook and run read-only Spark queries on mirrored database tables using Spark SQL or PySpark, just like you do with lakehouse data. Supported MirrorDBs include Azure Cosmos DB, Azure SQL Database, Snowflake, and open mirroring, with support for additional MirrorDBs coming soon.
  • No need to attach a default Lakehouse to run read-only queries; simply use fully qualified four-part names (workspace.database.schema.table) for precise querying and compatibility.
  • Join MirrorDB tables with lakehouse tables for deeper insights across all your Fabric data.

This feature expands your analytics capabilities and lets you securely access and analyze mirrored data alongside lakehouse assets, all in one place.
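
For example, a read-only query against a mirrored database using four-part names might look like the sketch below; the workspace, database, and table names are placeholders (wrap names containing spaces in backticks).

    # Sketch: query mirrored database tables from a Spark notebook using
    # fully qualified four-part names (workspace.database.schema.table).
    df = spark.sql("""
        SELECT c.CustomerId, c.Region, SUM(o.Amount) AS TotalAmount
        FROM SalesWorkspace.SalesMirrorDB.dbo.Orders AS o
        JOIN SalesWorkspace.SalesMirrorDB.dbo.Customers AS c
          ON o.CustomerId = c.CustomerId
        GROUP BY c.CustomerId, c.Region
    """)
    display(df)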

Download Files in Lakehouse Explorer

Download files directly from any Lakehouse item, empowering you to work more efficiently, reduce friction in your data workflows, and gain faster insights.

Capabilities

  • Download files from both table files and the Files section (with the required permissions).
  • Keep your data secure and compliant by including Microsoft Information Protection (MIP) sensitivity labels for supported files.

OneLake data export must be enabled for your tenant to use this feature.


Multi-Lakehouse experience

It’s now easier than ever to organize and access your data. You can now add, view and manage multiple lakehouses in a single unified view.

Capabilities

  • Quickly add multiple reference lakehouses that you have permission to access, ensuring secure collaboration.
  • Work with several lakehouses in a consolidated view, with clear distinction between your primary and reference lakehouses.
  • Sort, filter, and search across all connected lakehouses, schemas, tables, and files for faster data discovery.
  • Perform key actions like previewing data, creating subfolders, renaming, and deleting objects—all from one place.

This feature boosts productivity and gives you the flexibility to manage complex data estates, all without switching between different views or tools.

Fabric Spark Monitoring APIs (Generally Available)

Introducing advanced observability features and improved automation for managing Spark workloads in Microsoft Fabric!

  • New APIs for single Spark applications:
    • Spark Advisor API – provides recommendations and skew diagnostics to help identify bottlenecks and optimize performance.
    • Resource Usage API – offers granular metrics on vCore allocation and utilization for executors within a Spark application.

Advanced Filtering Support – The workspace-level API now supports filtering capabilities to help users narrow down applications by time range, submitter, application state (Succeeded, Failed, Running), and more! This enhancement allows for more efficient analysis and targeted troubleshooting in large-scale environments.

New Application-Level Properties for Deeper Insight

To support more transparent resource planning and monitoring, the following properties have been added to the Spark Monitoring APIs:

  • Driver Cores & Memory
  • Executor Cores & Memory
  • Number of Executors
  • Dynamic Allocation Enabled
  • Dynamic Allocation Max Executors

These new fields help teams better understand and optimize their Spark resource allocations.

The Fabric Spark Monitoring APIs are now production-ready—providing a comprehensive solution for monitoring, diagnosing, and optimizing Spark workloads in a scalable, automated fashion.

Fabric Spark Run Series Analysis (Generally Available)

We’ve introduced a series of enhancements based on customer feedback and product readiness. These include improved accessibility through UI refinements that meet compliance standards, support for analyzing Spark applications while they’re still running, and a major upgrade to the anomaly detection infrastructure for more accurate and scalable outlier identification.

Fabric Spark Run Series Analysis is an advanced tool for understanding, comparing, and optimizing recurring Spark job executions. It now offers enhanced capabilities, greater accessibility, and a robust foundation to support enterprise-scale performance tuning.

Key Capabilities

  • Run Series Comparison – Compare the execution duration of a Spark run against historical runs within the same series. Drill into input/output data differences to identify root causes of performance variation.
  • Outlier Detection and Analysis – Automatically detect anomalous runs within a series and surface potential contributing factors—such as resource constraints or configuration changes.
  • Detailed Run Instance View – Explore individual run instances to access detailed time distribution metrics, offering insights into each phase of execution. Configuration values—both user-defined and auto-tuned—are also surfaced for reference and optimization.

Fabric Spark Applications Comparison (Preview)

This new capability enables developers and data engineers to analyze, debug, and optimize Spark performance across multiple application runs—whether you’re evaluating the impact of code changes or data variations.

The Spark Applications Comparison feature lets users select and compare up to four Spark application runs side by side. By visualizing and contrasting key execution metrics, you can quickly pinpoint performance regressions, improvements, and anomalies.

Key Benefits

  • Compare runs from the same artifact (Notebook or Spark Job Definition)
  • Detect performance regressions or gains by analyzing metric deltas against a baseline run.
  • Troubleshoot issues using detailed insights into execution time, I/O data trends, and resource utilization.

Data Science

Mirrored database support for Fabric Data Agent

Fabric Data Agent now lets users directly connect mirrored database artifacts, including Azure Cosmos DB, Azure SQL, Oracle, Snowflake, Databricks, and other databases using open mirroring. This integration allows users to leverage the Data Agent’s natural language to SQL (NL2SQL) capability, so they can ask questions in plain English and receive LLM-powered insights across their data estate.

Key benefits include broad compatibility, direct integration, customizable and scoped knowledge, and seamless bridging between external data and AI applications within Fabric. The feature streamlines analytics and empowers smarter, faster decision-making by unlocking AI-driven insights.


CI/CD support in Fabric Data Agent

Fabric Data Agents now support CI/CD, ALM flow, and Git integration, enhancing management, version control, and collaboration for Data Agent artifacts. These features promote reliable, scalable, and auditable development practices by enabling systematic management of changes, dedicated workspaces for development stages, and broad data source support. Git integration tracks all modifications, supports branching for independent experimentation, and enables controlled merging, improving teamwork and allowing quick reversion if issues arise.


To get started with CI/CD, ALM flow, and Git integration for your Fabric Data Agent, refer to the Fabric Data Agent documentation for step-by-step instructions.

Consuming Fabric Data Agent in external applications with Python client SDK (Preview)

This new feature empowers developers to integrate Fabric Data Agents into custom web apps and workflows, enabling natural language querying, automated reporting, and embedded insights—all while respecting user identity and permissions via Microsoft Entra ID.

Set up is done in Visual Studio Code: clone the sample client, configure authentication with InteractiveBrowserCredential, and use the DataAgentClient to interact with the agent. It’s a streamlined, developer-friendly way to extend Fabric’s intelligence beyond its native environment.


Explore this guide for step-by-step instructions.
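
A condensed sketch of that flow is shown below. InteractiveBrowserCredential and DataAgentClient come from the sample client described above, while the module path, constructor arguments, and the ask() method are assumptions to verify against the guide.

    # Sketch: query a Fabric Data Agent from an external Python app.
    # The client module path and method names are assumptions for illustration.
    from azure.identity import InteractiveBrowserCredential
    from fabric_data_agent_client import DataAgentClient  # hypothetical module name

    credential = InteractiveBrowserCredential()
    client = DataAgentClient(
        credential=credential,
        data_agent_url="<published-data-agent-endpoint>",  # placeholder
    )

    answer = client.ask("What were the top five products by revenue last quarter?")  # hypothetical method
    print(answer)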

Get feedback on example queries using the Fabric Data Agent SDK

The Fabric Data Agent SDK now offers evaluate_few_shot_examples(), a tool for creators to get structured feedback on their example queries.

For each example, the function assesses clarity (determining if the natural language question is clear and unambiguous), mapping (checking if the SQL query accurately reflects the intent of the question), and relatedness (verifying that all literals in the question are correctly mapped to corresponding literals in the SQL).

The evaluation then provides a final reasoning summary on the overall quality of each example query. With this capability, creators can refine their examples more effectively, ensuring their data agents deliver accurate, intuitive, and high-quality responses.


Want to learn more? Check out the Evaluate a Fabric Data Agent documentation.
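
A minimal sketch of calling the new helper follows; evaluate_few_shot_examples() is the function introduced here, while the import path and argument names are assumptions to verify against the SDK documentation.

    # Sketch: get structured feedback on a Data Agent's example queries.
    # Import path and arguments are assumptions; the function name comes from
    # this announcement.
    from fabric.dataagent.evaluation import evaluate_few_shot_examples  # assumed path

    feedback = evaluate_few_shot_examples(
        data_agent_name="SalesAgent",     # hypothetical Data Agent item
        workspace_name="SalesWorkspace",  # hypothetical workspace
    )

    # Expected to cover clarity, mapping, relatedness, and a reasoning summary per example.
    print(feedback)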

Discover which query examples influenced the Data Agent response

When the Data Agent answers a question, it reviews the example queries you’ve provided and uses them to guide its reasoning. With our latest update, creators can view exactly which example queries were used during a run-step to help shape the agent’s response.


Behind the scenes, the agent searches all configured examples and selects the top 3 most relevant ones based on the user’s question. These examples are passed to the model as reference points when generating a query. Now, within the chat canvas, you’ll be able to view those selected examples, giving you more transparency into how the agent is grounding its response and making it easier to improve your configurations over time.

Download Diagnostics in Data Agent

Download a diagnostics file for any run step in the Data Agent chat canvas—giving you clear visibility into how the agent processed your question behind the scenes. The file includes details like which tools were used, how the question was interpreted, the intermediate reasoning steps, and any errors or fallback logic that occurred.

This is a powerful way to troubleshoot issues or provide rich context when working with Microsoft support. To use it, simply select the ‘Download Diagnostics’ button in the chat canvas. The file is fully viewable and editable, so you can review and clean up its contents before sharing as needed.


For more information, refer to the Evaluate your data agent (preview) documentation.

Apply AI Functions in Data Wrangler (Preview)

The ability to apply AI functions directly within Data Wrangler to transform your data quickly and visually is now available. AI functions allow you to perform tasks like text summarization, classification, translation, sentiment analysis, grammar correction, or your own prompt, all without writing complex code. See your transformations instantly with real-time previews, so you can make quick adjustments if needed.


A GIF showing how to use AI functions in Data Wrangler to instantly classify transactions on a bank statement data frame.

Simply select the desired function from AI Enrichments within the Data Wrangler interface. The results are interactive and editable, giving you full control over your data transformations while saving time and reducing manual effort.

AI-Powered Code Generation and Translation Now Available in Data Wrangler (Generally Available)

Accelerate data prep with AI-powered capabilities in Data Wrangler. In the visual interface, you can see smart suggestions from Microsoft PROSE for operations that are relevant to your data frame. Describe your desired transformations in natural language, and Copilot will generate code and an instant preview of the results.


A GIF showing how to apply AI-based suggestions, use Copilot to generate code from a natural language description, and automatically translate pandas code to PySpark.

For big data workflows, Data Wrangler can translate all your pandas operations back to PySpark, making it easy to scale your transformations across large datasets.

Data Warehouse

MERGE Transact-SQL (Preview)

This command blends INSERT, UPDATE, and DELETE operations all into a single statement based on your specified conditions between two tables, improving readability and providing a uniform standard for transformations across your ETL jobs.


For more details, refer to the MERGE (Transact-SQL) documentation, and try using MERGE today!

Migration Assistant for Fabric Data Warehouse (Generally Available)

This AI-powered assistant helps you migrate an analytical warehouse or database, such as an Azure Synapse Analytics dedicated SQL pool or a SQL Server database used for analytics, to Fabric Data Warehouse using a DACPAC file.

For more details, refer to the Migration Assistant for Fabric Data Warehouse documentation.

Databases

Copilot & Query Editor Enhancements

Introducing several enhancements to Copilot in SQL database in Fabric and the Query Editor:

  • Connect in SSMS with One Click: Instantly connect to your Fabric SQL database in SQL Server Management Studio (SSMS) without manual entry—streamlining developer workflows and reducing friction for data professionals.
  • Bulk Query Management: Delete multiple queries at once by simply holding Shift, selecting, and right-clicking, making workspace cleanup fast and intuitive.
  • Share Queries Across Teams: Create shared queries that any admin, contributor, or member in your workspace can view and edit, supporting collaborative development and knowledge sharing.
  • Copilot Mode Selector: Easily toggle between read-only and read/write modes. In read-only, Copilot generates SQL code but doesn’t execute changes; in read/write, Copilot executes code only after your approval, ensuring safe automation and governance.

To learn more about these features, refer to the Microsoft Copilot in Microsoft Fabric in the SQL Database Workload Overview documentation.

MSSQL extension for VS Code – Fabric Integration (Preview)

The latest version of the MSSQL extension for VS Code introduces Fabric connectivity and provisioning (Public Preview), bringing SQL database in Fabric directly into your editor. Instead of switching to the Fabric Portal, copying connection strings, or creating databases outside VS Code, you can now authenticate with Microsoft Entra ID, browse workspaces in a tree view, and connect instantly. You can also create a new SQL database in Fabric in just a few steps and start querying in under three minutes—eliminating context switching and making it faster to prototype and build modern apps.

Key highlights 

  • Fabric Connectivity: Sign in with Microsoft Entra ID and connect to Fabric workspaces directly from the Connection dialog.
  • Workspace Search & Browse: Navigate workspaces and resources in a tree view with built-in search for faster discovery.
  • Fabric Provisioning: Provision a SQL database in Fabric from the Deployments page and connect instantly in VS Code.
  • Cross-extension Flow: Launch connections from the Fabric extension or Portal using the ‘Open in MSSQL’ option.
  • Frictionless Development: Reduce context switching with a fully in-editor workflow for connecting, provisioning, and querying Fabric databases.
Figure: Fabric browser experience in the MSSQL extension for VS Code 

Watch the SQL database in Fabric connectivity and provisioning demos in MSSQL for VS Code, and check the MSSQL Extension for VS Code: Schema Compare, Schema Designer, Local SQL Server Container GA documentation to learn more.

Point-in-Time Restore (PITR): Precision Recovery

PITR allows users to restore a database to any specific moment within the configured retention window. Whether recovering from a user error, application bug, or security incident, PITR provides granular control to rewind your database to a known good state.

Until recently, PITR in Fabric SQL DB supported a 7-day retention window. With the latest update, this can now be extended to 35 days, offering significantly more flexibility for operational recovery and compliance scenarios.

To learn more about this feature, refer to the Restore a database from a backup documentation.

Git integration: system object references & shared queries

Fabric SQL’s Git integration is now rolling out, including the following features:

  • Validation of system object references (e.g., tables/views in the [sys] schema) during local development.
  • Tracking of shared queries, allowing teams to monitor changes over time and maintain version control across collaborative environments.

To learn more about this feature, refer to the Get started with Git integration documentation.

Database Definition Import/Export via REST API

Fabric SQL empowers developers with REST APIs to:

  • Export database object definitions as portable DACPAC files.
  • Import compiled definitions (DACPAC) to update existing databases, with automatic diff detection and application of changes.

To learn more about this feature, refer to the Create a SQL database with the REST API documentation.

Performance Dashboard Improvements

The performance dashboard now provides memory consumption metrics. This new feature offers real-time insights into memory usage by individual database queries, enabling better resource management and optimization. The dashboard now includes memory consumption alongside CPU usage, user connections, requests per second, blocked queries, database size, automatic index info, and query performance metrics for comprehensive monitoring.


To learn more about the performance dashboard, refer to the Performance Dashboard for SQL database documentation.

Real-Time Intelligence

Introducing Maps in Fabric

Geospatial insights are now available in the Real-Time Intelligence workload with Maps in Microsoft Fabric. Users can ingest location data from a Lakehouse or Eventhouse, visualize it instantly, and build map-centric applications without specialized knowledge or writing code. Whether you are tracking mobile assets, analyzing campaign performance, or monitoring infrastructure, Maps adds context, reveals patterns, and helps you tell the story behind the numbers within minutes.


Refer to the Start with Fabric Maps documentation to learn more, connect your data, choose your layers, and start exploring. Spatial doesn’t have to be special!

Azure Monitor Logs Integration in Fabric via Eventstream

Azure Monitor Diagnostic Logs now integrate directly with Microsoft Fabric via Eventstream, enabling real-time ingestion of metrics and logs from Azure resources into Fabric-native analytics workflows for immediate transformation and analysis—eliminating the need for manual data pipelines.

Once ingested into Eventstream, data can be cleaned, shaped, and enriched with native tools—all without writing a single line of code—streamlining the path from raw telemetry to actionable insight.

This integration provides unified observability across Azure resources and business data, empowering teams to respond quickly and make informed decisions. Logs can be routed to Data Activator for instant alerts or to Eventhouse for deeper analysis, supporting both immediate operational needs and long-term strategic goals.

You can find the ‘Azure Diagnostics’ connector card on the Data Sources page in the Real-Time Hub. Select it to browse all available Azure resources and click through the configuration process to choose the metrics and logs you want to bring into Eventstream.

A screenshot of a computer

AI-generated content may be incorrect.

Workspace Private Link for Eventstream’s Select Sources & Destinations

Introducing the integration of Workspace Private Link with Eventstream, enabling secure, private connectivity between your data sources and Microsoft Fabric—without exposure to the public internet.

A workspace-level private link maps a workspace to a specific virtual network using the Azure Private Link service. With this integration, Eventstream lets you restrict public internet access and enforce access only through approved virtual networks via managed private endpoints. This ensures that data streaming into Eventstream is tightly controlled and protected from unauthorized access.

The diagram demonstrates a typical Eventstream setup operating under Workspace Private Link.

  • SQL DB1 connects to Eventstream and streams CDC events securely via Workspace Private Link.
  • SQL DB2 is blocked from connecting because public access to the workspace is disabled.

For more details about Workspace Private Link, refer to the Overview of workspace-level private links documentation.

Activator Just Got 10x More Powerful (Preview)

We’re excited to announce a major performance upgrade to Activator, our no-code, low-latency event detection engine that helps power Real-Time Intelligence in Microsoft Fabric. Activator now supports up to 10,000 events per second (EPS), a tenfold increase from the previous 1,000 EPS limit.

This change unlocks new possibilities for customers working with high-frequency data. Whether you’re monitoring sensor data, tracking business operations, or responding to user behavior in real time, Activator can now keep pace with even the most demanding workloads.

By scaling Activator’s throughput, we’re reinforcing our commitment to delivering fast, flexible, and frictionless automation for modern data applications. If you’re already using Activator, there’s nothing you need to do. Just enjoy the 10x scale. And if you’re not yet using it, now’s a great time to start.

We welcome your feedback as we continue to evolve Activator to meet the needs of real-time data applications at scale.

Ready to learn more? Refer to the What is Fabric Activator? Transform data streams into automated actions documentation.

Anomaly detection in Real-time Intelligence (Preview)

In today’s fast-paced digital landscape, spotting anomalies in your data as they happen can make all the difference. Whether you’re a seasoned data professional or a business user keeping an eye on operations, the new anomaly detection feature in Real-Time Intelligence puts powerful insights right at your fingertips—no coding required.

To get started, select your Eventhouse and choose the ID fields and values to monitor for anomalies. The system analyzes your selection using a library of industry-standard models, automatically testing and recommending the best fit for your dataset, meaning you can easily spot outliers, trends, or unexpected behavior without needing statistical expertise.

Once anomaly detection is set up, you get a preview of detected anomalies right in the interface. Try out different models to see which one best highlights the patterns you’re interested in, allowing you to validate the findings before sharing or acting on them.


When you’re happy with the results, publish anomaly events to the Real-Time Hub and set up alerts so you’ll be notified instantly—via Teams messages or emails—whenever an anomaly appears.


With these new anomaly detection capabilities, everyone from analysts to business users can act quickly on real-time insights. The no-code interface, automatic model selection, and flexible alerts make tracking changes and unexpected events easier than ever.

Data Factory

Dataflow Gen2

Accelerate Dataflow Gen2 Authoring with Preview-Only Steps (Preview)

Dataflow Gen2 empowers users with a visual, intuitive experience for shaping and transforming data directly within Microsoft Fabric. At the heart of this experience is the data preview pane, which evaluates each transformation step independently to ensure accurate previews tailored to your scenario.

However, rendering the full schema and data values can sometimes take several seconds — or even minutes — depending on factors like data source latency, query complexity, and evaluation overhead. In many authoring scenarios, you may not need to preview the entire dataset to validate your logic.

To streamline this experience, we’re introducing preview-only steps — a new capability that lets you designate specific transformation steps to run exclusively during authoring. These steps are excluded from refresh operations, allowing you to work with sample data without impacting production performance.

You can enable this feature by simply clicking any step in your query and selecting Enable only in preview. This is especially useful when paired with filtering steps, enabling you to isolate a subset of your data for faster iteration and validation.

Whether you’re building complex transformations or exploring new data sources, preview-only steps help you stay focused, responsive, and efficient—without compromising refresh integrity. A screenshot of a computer

Description automatically generated

As an accelerator, when connecting to a source that provides a file system view (such as Azure Data Lake Storage Gen2, SharePoint files, or a local folder), you will now see a small gear icon in the top-right corner that lets you quickly add new preview-only steps.


If you select the combine files option from this dialog, you will see another gear icon, this time to add a preview-only step that filters the sample transformation file.


This new feature will help you accelerate your work during authoring time without sacrificing any runtime evaluations.

We hope that you give this new feature a try and share your feedback in our Data Factory community forum.

To learn more about this feature, refer to the Preview only step in Dataflow Gen2 documentation.

Modern Evaluator (NetCore-based) for Dataflow Gen2 with CI/CD (Preview)

The Modern Query Evaluation Engine for Dataflow Gen2 (CI/CD) is a powerful enhancement that can substantially improve the performance of query evaluation in your dataflows. This new engine is designed to deliver faster and more efficient execution, helping you scale your data transformation workflows with greater speed and reliability.

The Modern Evaluator can be enabled directly from the Options dialog (Scale tab) in both existing and new Dataflow Gen2 (CI/CD) items.


While support is currently limited to a subset of connectors, it already includes many commonly used sources such as Azure Blob Storage, Azure Data Lake Storage, Fabric Lakehouse, Fabric Warehouse, OData, Power Platform Dataflows, SharePoint, and Web.

To learn more about this feature, refer to the Modern Evaluator for Dataflow Gen2 with CI/CD (Preview) documentation.

Partitioned compute for Dataflow Gen2 (Preview)

Microsoft Fabric now offers Partitioned Compute in Dataflow Gen2, enabling parallel execution of dataflow logic to significantly reduce evaluation time. This feature is perfect for handling large file sets, like those in Azure Data Lake Storage Gen2, where operations can be done simultaneously.


What it Delivers

  • Parallel Processing: Automatically partitions data sources and evaluates each partition concurrently.
  • Supported Connectors: Azure Data Lake Storage Gen2, Fabric Lakehouse, Folder, and Azure Blob Storage.
  • Combine Files Experience: Automatically generates partition keys to optimize performance.

In internal benchmarks, we’ve seen the feature reduce the run times of some dataflows. For example, using the New York City green taxi data from 2023 (12 Parquet files), it cut processing time from 1.5 hours to under 25 minutes when loading to a Fabric Warehouse.

While results may vary, we welcome you to give this new feature a try and share your feedback with us.

To learn more about this feature, refer to the Use partitioned compute in Dataflow Gen2 (Preview) documentation.

Fabric Variable libraries in Dataflow Gen2 with CI/CD (Preview)

Microsoft Fabric now supports Variable libraries in Dataflow Gen2 with CI/CD. This new capability introduces dynamic configuration management across environments, helping teams streamline CI/CD workflows and improve reusability.

With this integration, dataflows can reference centralized variables using the Variable.Value and Variable.ValueOrDefault functions. This enables dynamic substitution of values, such as workspace or lakehouse IDs, based on environment-specific settings, eliminating the need for hardcoded parameters.

Key Benefits

  • Centralized configuration: Manage variables in one place across Fabric workloads.
  • Environment-aware dataflows: Seamlessly switch between dev, test, and prod environments.
  • Improved CI/CD support: Simplify deployment pipelines with this new variable integration with dataflows.

This capability is designed to help teams build more maintainable and scalable data solutions in Microsoft Fabric. Try it out and share your feedback with us!

To learn more, refer to the Use Fabric Variable libraries in Dataflow Gen2 (Preview) documentation.

Parameterized Dataflow Gen2 using public parameters mode (Generally Available)

This powerful feature enables dynamic and flexible dataflow refresh by allowing parameter values to be passed externally—via the Fabric REST API or native Fabric experiences—without modifying the dataflow itself.
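For teams automating refreshes, the sketch below shows roughly what passing parameter values through the REST API can look like from a Python script. It is a minimal illustration only: the job endpoint, the jobType value, and the shape of the executionData payload are assumptions, and the workspace ID, dataflow ID, and token values are hypothetical placeholders. Consult the Fabric REST API reference for the exact contract.

```python
# Minimal sketch: start a Dataflow Gen2 refresh with externally supplied parameter
# values via the Fabric REST API. The job endpoint and the shape of the
# "executionData" payload are illustrative assumptions; confirm both against the
# Fabric REST API reference before relying on them.
import requests

FABRIC_API = "https://api.fabric.microsoft.com/v1"
WORKSPACE_ID = "<workspace-guid>"   # hypothetical placeholder
DATAFLOW_ID = "<dataflow-guid>"     # hypothetical placeholder
TOKEN = "<bearer-token>"            # acquired separately, e.g. via MSAL

def refresh_dataflow_with_parameters(parameters: list[dict]) -> str:
    """Start an on-demand refresh, passing parameter values without editing the dataflow."""
    url = (
        f"{FABRIC_API}/workspaces/{WORKSPACE_ID}/items/{DATAFLOW_ID}"
        "/jobs/instances?jobType=Refresh"   # assumed jobType value
    )
    body = {"executionData": {"parameters": parameters}}  # assumed payload shape
    response = requests.post(
        url, json=body, headers={"Authorization": f"Bearer {TOKEN}"}, timeout=30
    )
    response.raise_for_status()
    # The job-instance URL for polling status is typically returned in the Location header.
    return response.headers.get("Location", "")

# Example: override an environment-specific parameter at refresh time.
job_url = refresh_dataflow_with_parameters(
    [{"parameterName": "SourceFolder", "type": "String", "value": "prod/sales"}]
)
```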

Updates and Improvements

Over the past few months in preview, we’ve introduced several enhancements based on user feedback:

  • New Parameters Section in Recent Runs: easily view the exact parameters and values used during each Dataflow refresh.
  • Improved Error Messaging: clearer diagnostics when refreshes fail due to missing or mismatched parameters.
  • Expanded Data Type Support: use the following additional types in Public Parameters mode:
    • Date
    • DateTime
    • DateTimeZone
    • Time
    • Duration

This release marks a major step forward in making Dataflows more dynamic, reusable, and CI/CD-friendly.

To learn more about this feature, refer to the Use public parameters in Dataflow Gen2 (Preview) documentation.

New Discover Dataflow Gen2 Parameters API (Preview)

The new Discover Dataflow Gen2 Parameters API in Microsoft Fabric empowers developers and data professionals to programmatically retrieve all parameters defined within a Dataflow Gen2 with CI/CD that has the public parameters mode enabled. This capability is a key part of the broader public parameters initiative, which aims to make Dataflows more transparent, reusable, and automation friendly.

With a simple GET request to the endpoint, users can access metadata about each parameter—its name, type (e.g., String, Boolean, DateTime), default value, and whether it’s required.
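As a rough illustration, a lightweight Python client for this discovery call might look like the sketch below. The request path and response fields shown are assumptions for illustration, and the workspace and dataflow IDs are placeholders; the published Fabric REST API reference defines the actual contract.

```python
# Minimal sketch: call the Discover Dataflow Gen2 Parameters API and inspect the
# metadata it returns. The path and response fields are assumed for illustration.
import requests

FABRIC_API = "https://api.fabric.microsoft.com/v1"

def list_dataflow_parameters(workspace_id: str, dataflow_id: str, token: str) -> list[dict]:
    """Return parameter metadata (name, type, default, required) for a Dataflow Gen2."""
    url = f"{FABRIC_API}/workspaces/{workspace_id}/dataflows/{dataflow_id}/parameters"  # assumed path
    response = requests.get(url, headers={"Authorization": f"Bearer {token}"}, timeout=30)
    response.raise_for_status()
    return response.json().get("value", [])

# Example: validate that an orchestration step supplies every required parameter
# before triggering a refresh. "SourceFolder" is a hypothetical parameter name.
supplied = {"SourceFolder"}
params = list_dataflow_parameters("<workspace-guid>", "<dataflow-guid>", "<bearer-token>")
missing = [p["name"] for p in params if p.get("required") and p["name"] not in supplied]
if missing:
    raise ValueError(f"Missing required dataflow parameters: {missing}")
```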

Why it matters
This API unlocks new possibilities for automation, governance, and integration. Developers can dynamically inspect parameter configurations before executing a Dataflow, ensuring compatibility and reducing runtime errors. It also simplifies building custom tooling and dashboards that visualize or validate parameter usage across workspaces.

How it fits into the bigger picture
The public parameters feature in Dataflow Gen2 is designed to make parameter definitions accessible and manageable across environments. By exposing parameters via REST, Microsoft Fabric enables seamless integration with CI/CD pipelines, monitoring tools, and external orchestration systems.

Whether you’re building enterprise-grade data solutions or lightweight automation scripts, the Discover Dataflow Parameters API is a foundational step toward more intelligent and scalable data operations.

Dataflow Gen2 Support for Incremental refresh to Lakehouse as a data destination (Generally Available)

With this update, you can now leverage incremental refresh to efficiently manage and update large datasets within your Lakehouse environments. This unlocks faster processing times and optimized resource utilization. This new capability empowers organizations to keep their Lakehouse data fresh while minimizing overhead and maintaining robust performance at scale. We look forward to seeing how this enhancement helps you streamline operations and deliver up-to-date insights with ease.

Screenshot of the dropdown menu in Dataflow Gen2.

Learn more about incremental refresh in Dataflow Gen2 in the Incremental refresh in Dataflow Gen2 documentation.

New Dataflow Gen2 data destinations (Generally Available)

New data destinations for Dataflow Gen2 are being introduced. Users can now write data directly to Lakehouse files (CSV), further expanding integration options for diverse analytics needs. Additionally, we’re offering an early sneak peek at Snowflake as a destination while it enters preview. This glimpse gives you an opportunity to explore what’s ahead and plan how best to leverage Snowflake within your dataflows when it becomes generally available.

Whether you’re working with teams on Fabric or on the M365 platform, Dataflow Gen2 allows you to collaborate with everyone. This milestone reflects our commitment to supporting enterprise-grade data management for a wide range of use cases.

Finally, Incremental refresh support for Lakehouse tables in Dataflow Gen2 is now also generally available. This enables users to efficiently manage and update large datasets with faster processing and optimized resource utilization, ensuring your Lakehouse data remains fresh with minimal overhead. We look forward to seeing how these enhancements help you streamline operations and deliver timely insights across your data estate.

Learn more about the new destinations in the Dataflow Gen2 data destinations and managed settings documentation.

Schema Support in Dataflow Gen2 Destinations: Lakehouse, Fabric SQL and Warehouse (Preview)

This powerful new capability makes it even easier to organize and manage your data within complex analytics environments. To take advantage of schema support, simply enable ‘Navigate using full hierarchy’ under Advanced settings when configuring the connection for your chosen data destination. With this enhancement, you can seamlessly map your data to the right schema, ensuring greater flexibility and alignment with your organizational standards. Explore this feature and let us know how it helps you optimize your data workflows.

Learn more about schema support in the advanced section of the Dataflow Gen2 data destinations and managed settings documentation.

Dataflow Gen2 – Natural language to custom column with Copilot (Generally Available)

Earlier this year we released a new Copilot-driven experience that helps you create custom columns using just natural language. This experience is available within the ‘Custom column’ dialog, which you can find on the ‘Add column’ tab of the ribbon. With it, you can describe in a simple prompt what you want your new column to calculate, and Copilot will create the formula for you.

Give it a try today within Dataflow Gen2 and let us know your experiences with it.

To learn more about this feature, refer to the Add a custom column documentation.

Dataflow Gen2 – Explain query and query steps using Copilot (Generally Available)

This new capability brings the power of AI directly into your data transformation workflows by helping you interpret Mashup (Power Query M) code in natural language. Whether you’re reviewing a full query or a specific step, Copilot makes it easier than ever to understand and debug your dataflows.

Key Features

  • Explain this query: Triggered via the Copilot pane or by right-clicking in the Queries pane.
  • Explain this step: Available by right-clicking any step in the applied steps section of a query.

This feature is designed to empower data professionals of all levels to work more confidently and efficiently with Power Query M code.

To learn more about this feature, refer to the Copilot explainer skill in Dataflow Gen2 documentation.

Copilot in Modern Get Data (MGD) for Dataflow Gen2 (Preview)

With this new Copilot in MGD experience in Fabric Dataflow Gen2, you can ingest and transform data effortlessly with natural language commands. Discover a faster, smarter way to get the data you need.

In Fabric Dataflow Gen2, select Get data to begin. In the Get data wizard, select the Copilot tab, then you can start with the list of recently used tables.

After loading a recently used table, you can chat with Copilot to find the data you want. For step-by-step exploration, we first group the data by customers’ titles to check the results. Then, depending on the range of the counts, we can decide which ranges to include.

When selecting table columns, use @ to quickly view the available columns. Typing a letter then filters the list to matching columns.

If you already know all the operations you want to perform, you can describe them in a single sentence, and Copilot will understand the request and return the filtered results.

To return to a previous step, select the ‘Restore’ button next to it and your data will revert to that point. You can also copy the preview data to confirm it with your colleagues before saving it into Dataflow Gen2.

Copilot in MGD offers transformation functions similar to those in Dataflow Gen2 Copilot.

For details, refer to the Dataflow Gen2 Copilot documentation.

Dataflow Gen2 – PostgreSQL Connector adds support for Microsoft Entra ID Authentication

You can utilize Microsoft Entra ID authentication to connect to PostgreSQL databases in Dataflow Gen2, as an alternative to traditional username and password authentication.

For more information, refer to the PostgreSQL connector documentation.

2-Tier Pricing Model for Dataflow Gen2 (CI/CD)

A new 2-tier pricing model for Dataflow Gen2 has been introduced, with the aim of making query evaluation more affordable and transparent for users. This update is part of our ongoing commitment to respond directly to customer feedback and deliver more cost-effective solutions for diverse workloads.

  • First 10 Minutes: Query evaluation is now billed at just 12 CU—a 25% reduction from previous rates.
  • Beyond 10 Minutes: For longer-running queries, costs drop dramatically to 1.5 CU—a 90% reduction, making extended operations significantly more budget-friendly.

This pricing model is effective immediately for Dataflow Gen2 (CI/CD) operations. To take advantage of the new rates, users should upgrade any non-CI/CD items by using the ‘Save as Dataflow Gen2 (CI/CD)’ feature.
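To make the tiering concrete, here is a minimal sketch of a consumption estimate. It assumes the 12 CU and 1.5 CU figures are consumption rates applied per unit of evaluation time and pro-rated by duration (the 25% reduction quoted above implies a previous flat rate of 16 CU); check the Dataflow Gen2 pricing documentation for the exact billing units.

```python
# Minimal sketch: estimate Dataflow Gen2 (CI/CD) query-evaluation consumption under
# the 2-tier model. Assumes the published figures are CU rates pro-rated by
# evaluation time; confirm the exact billing units in the pricing documentation.
FIRST_TIER_MINUTES = 10
FIRST_TIER_RATE_CU = 12.0   # rate applied to the first 10 minutes
LONG_RUN_RATE_CU = 1.5      # rate applied beyond 10 minutes

def estimate_cu_minutes(evaluation_minutes: float) -> float:
    """Return the estimated CU-minutes for a single query evaluation of the given duration."""
    first = min(evaluation_minutes, FIRST_TIER_MINUTES) * FIRST_TIER_RATE_CU
    rest = max(evaluation_minutes - FIRST_TIER_MINUTES, 0) * LONG_RUN_RATE_CU
    return first + rest

# Example: a 30-minute evaluation works out to 10 * 12 + 20 * 1.5 = 150 CU-minutes,
# versus 30 * 16 = 480 CU-minutes under the previous flat rate.
print(estimate_cu_minutes(30))   # 150.0
```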

For further details on this update, please refer to the Dataflow Gen2 Pricing documentation article.

Pipelines

Microsoft Fabric Data Factory Data pipelines are now ‘pipelines’

Why the Change? We’re making Fabric Data Factory pipelines more inclusive and adaptable by simplifying language and broadening our mission. ‘Data pipelines’ are now simply ‘pipelines’, reflecting our commitment to extending Fabric Data Factory’s capabilities to fit more diverse and extended use cases. This update isn’t just semantic—it signals a broadened horizon where pipelines can orchestrate not just data, but also services, applications, and business processes.

By embracing a more unified terminology, we invite our community to imagine new possibilities, integrating data engineering with broader business and workflow automation in Fabric Data Factory.

Locations in the UI reflecting the change

  • Workspace artifact type
  • New item creation
  • Workloads in Data Factory
  • OneLake catalog
  • Lineage view

New Email and Teams Activities (Generally Available)

These updates introduce a modern UX and enhanced functionality—making it easier than ever to integrate communication seamlessly into your pipelines.

What’s New?

The new activities are designed to be more intuitive, flexible, and future-ready. Whether you’re sending notifications, triggering actions, or collaborating across Teams, these activities streamline how you connect communication with orchestration.

  • Modern UX: A refreshed interface that aligns with Fabric’s design principles.
  • DMTS Support: Seamless integration with Data Movement and Transformation Services.
  • Improved Scenarios: Enhanced support for common use cases like approvals, alerts, and status updates.

When using user authentication, if you deploy a pipeline containing the Email or Teams activity to another workspace and you are not the user who created the activity in the source workspace, the activity will be set to inactive in the target workspace until you create a new user-auth connection there.

Legacy Activities Get a Facelift

We’ve also updated the UX for legacy activities to help users distinguish between old and new experiences. You’ll notice clearer labels and visual cues:

  • Office 365 Outlook (Legacy)
  • Teams (Legacy)
  • Microsoft Teams
  • Office 365 Email

These legacy activities will remain available for a limited time, but we strongly encourage users to begin transitioning to the new versions.

To learn more, check out our documentation on the Office 365 Outlook activity and the Teams activity.

Dataflow activity in pipelines: New parameters experience

Introducing a major usability upgrade to the Dataflow activity in Microsoft Fabric! When using a Dataflow Gen2 with public parameters mode enabled, you’ll now benefit from a streamlined experience that makes working with parameters faster and more intuitive.

Thanks to the new Discover Dataflow Gen2 Parameters API, the Dataflow activity can now automatically detect and display all available parameters for the selected Dataflow, including:

  • Data types
  • Default values

No more manual entry or guesswork—just select your Dataflow and immediately see what’s available. This enhancement helps you configure your pipelines more efficiently and with greater confidence.

Try it out and experience the productivity boost firsthand!

To learn more about this feature, refer to the Dataflow activity documentation.

Debug your pipeline Expression with the Evaluate Expression experience

The Expression Builder Evaluate experience is now available in Microsoft Fabric pipelines! This new capability is designed to make working with dynamic pipeline content easier, faster, and more intuitive.

What is it?

We’ve heard from many of you that writing and debugging expressions can be frustrating. The evaluate expression experience helps alleviate some of these problems by:

  • Parsing expressions and showing how they resolve.
  • Auto-populating default values for parameters and variables.
  • Allowing manual input for runtime-specific values.
  • Providing schema previews.

The evaluate expression feature introduces a simple button that lets you test your pipeline expressions instantly. This tool helps you understand exactly how your expression will behave without needing to run the entire pipeline.

To learn more, check out the documentation for Evaluate pipeline.

Functions Activity with User Data Functions (Generally Available)

This milestone unlocks powerful new capabilities for building reusable, secure, and parameterized logic within your pipelines.

Easily invoke custom functions across multiple pipelines to streamline complex workflows. You’ll also be able to pass secure inputs and outputs, configure retry logic, and manage parameters with full control. With the integrated experience, you can select your workspace, function item, and parameters directly from the pipeline canvas.

To learn more about setting up a Fabric User Data Function, check out the Create a Fabric User data functions item article or the blog post Utilize User Data Functions in Data pipelines with the Functions activity.

Add up to 20 schedules for your pipeline!

You now have the capability to add multiple schedules to your pipelines in Fabric Data Factory!

With this new update, you can add up to 20 schedules to your pipeline, giving you greater flexibility to automate runs at different times. This enhancement allows teams to better align their data processing with organizational needs, ensuring timely data availability and improved operational efficiency.

Screenshot showing how to edit schedules.

On-premises and VNet data gateway support for Invoke pipeline and Semantic model refresh activities

Expanded secure connectivity options are now available in Microsoft Fabric Data Factory!
With support for On-premises Data Gateway (OPDG) and Virtual Network (VNet) Data Gateway, you can orchestrate two of the most popular activities—Invoke pipeline and Semantic Model Refresh—while keeping your data secure inside your corporate network.

Why This Matters

  • Cross-Workspace pipeline Orchestration: From your Fabric pipeline, you can now invoke pipelines hosted in other workspaces—whether in Fabric, Azure Data Factory (ADF), or Synapse—even when those workspaces are secured with Managed VNets. This enables secure connectivity to on-premises or network-isolated resources without exposing any public endpoints.
  • Semantic Model Refresh: Leverage the Semantic Refresh activity to refresh models within Fabric workspaces that are secured with Managed VNets, ensuring secure and compliant data access.
  • Parity with Azure Data Factory: Enjoy familiar patterns and security best practices, now available in Fabric.

Prerequisites

To get started, you’ll need:

  • A Fabric workspace with permissions to create and use connections.
  • An OPDG or VNet data gateway installed and configured by your tenant admins or networking team, and visible to your workspace.

Setup Resources

How to Use Gateway Connections

1. Invoke pipeline Activity

In Fabric, open your pipeline and add the Invoke Pipeline activity. You can invoke another Fabric pipeline, Azure Data Factory pipeline, or Synapse pipeline.

Create Invoke Pipeline activity

Choose Your Data Gateway: Under the Connection settings, select your OPDG or VNet gateway, connect, and run!

Choose gateway connection for invoking Fabric data pipeline

Choose gateway connection for invoking Azure Data Factory pipeline
Choose gateway connection for invoking Synapse pipeline

2. Semantic model refresh

Similarly, when refreshing a semantic model, you can select your OPDG or VNet gateway connection to securely access private data sources.

Choose gateway connection for Semantic model refresh

Ready to try it?
Set up your gateway, configure your connections, and start orchestrating secure data activities in Fabric today!

Copy job Activity (Preview)

This new orchestration activity simplifies data movement by bringing the familiar Copy job item directly into Microsoft Fabric Data Factory pipelines. You can now manage data transfers alongside transformations, notifications, and more—all within a single, unified experience.

The Copy job activity includes a monitoring link that gives you real-time visibility into your Copy job progress and status. You can track execution outcomes, monitor performance, and quickly identify issues.

To learn more about the new Copy job activity, check out our documentation on the Copy job activity in Data Factory pipelines.

Invoke pipeline activity (Generally Available)

This release marks a major milestone, empowering users to seamlessly trigger and orchestrate pipelines within their Fabric Data Factory workflows, making it easier than ever to build robust and flexible data solutions.

For more information, refer to the Invoke pipeline activity documentation.

Support for Workspace Identity

Introducing support for Workspace Identity across key Microsoft Fabric Data Factory activities! With this update, you can now leverage workspace identity to securely and seamlessly execute Invoke Pipeline, Semantic Model, and Scope Activity operations. This enhancement simplifies authentication, strengthens security, and streamlines management by allowing you to use a unified identity when orchestrating data workflows.

Workspace identity support reduces the need for manual credential management, making it easier to build and maintain robust pipelines that span multiple Fabric services. Whether you’re invoking complex pipelines, accessing semantic models, or setting up scope activities, Workspace Identity ensures consistent access control and governance throughout your environment.

Variable Library integration with pipelines (Generally Available)

With this release, users can now seamlessly manage and reuse variables across multiple pipeline activities, simplifying workflow design and enhancing flexibility. The integrated Variable Library empowers teams to standardize variable usage, reduce errors, and streamline pipeline configuration, making it easier than ever to build scalable, maintainable data solutions.

Azure Databricks Jobs activity (Generally Available)

Databricks Jobs allow you to schedule and orchestrate one or more tasks in a workflow in your Databricks workspace. Since any operation in Databricks can be a task, you can now run anything in Databricks via Fabric Data Factory, such as serverless jobs, SQL tasks, Delta Live Tables, and more.

You can find the Job type in your Azure Databricks activity under the Settings tab.

To learn more, check out the Azure Databricks activity documentation or the Orchestrate your Databricks Jobs with Fabric Data pipelines! demo.

Apache Airflow Job

New Fabric Airflow features make it easy to build DAGs

Fabric Data Factory orchestration capabilities focus on more than just scaled-out low-code pipelines, which we’ve offered for many years in ADF & Fabric Data Factory. We also encourage code-first Python-based DAG orchestration by using Apache Airflow Job in Fabric Data Factory.

Fabric Notebooks now seamlessly integrate with your Airflow DAGs, enabling enhanced collaboration, exploration, and automation.

With just a few clicks, users can now easily embed Python code to call Fabric Notebooks directly within their Airflow workflows, leveraging rich data exploration and transformation capabilities right where orchestration happens.

Start by selecting the ‘Add Connection’ helper button on the toolbar inside our built-in DAG editor to add your SPN connection to your Notebooks.

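Once that connection is in place, a DAG that triggers a Fabric notebook run can be as small as the sketch below. This is a simplified illustration only: the job endpoint and jobType value are assumptions, the workspace and notebook IDs are hypothetical placeholders, and token acquisition from the SPN connection is stubbed out rather than shown.

```python
# Minimal sketch of an Airflow DAG that triggers a Fabric notebook run from a task.
# The Fabric job endpoint and jobType value are illustrative assumptions; in practice
# the 'Add Connection' helper stores the SPN credentials as an Airflow connection.
from datetime import datetime

import requests
from airflow import DAG
from airflow.operators.python import PythonOperator

WORKSPACE_ID = "<workspace-guid>"   # hypothetical placeholder
NOTEBOOK_ID = "<notebook-guid>"     # hypothetical placeholder

def run_fabric_notebook(**_context):
    token = "<bearer-token-from-spn>"  # resolve from the Airflow connection in practice
    url = (
        "https://api.fabric.microsoft.com/v1/workspaces/"
        f"{WORKSPACE_ID}/items/{NOTEBOOK_ID}/jobs/instances?jobType=RunNotebook"  # assumed jobType
    )
    response = requests.post(url, headers={"Authorization": f"Bearer {token}"}, timeout=30)
    response.raise_for_status()

with DAG(
    dag_id="fabric_notebook_orchestration",
    start_date=datetime(2025, 9, 1),
    schedule=None,      # trigger manually or from another orchestrator
    catchup=False,
) as dag:
    PythonOperator(task_id="run_fabric_notebook", python_callable=run_fabric_notebook)
```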

Fabric Apache Airflow Job now supports CI/CD

This new capability empowers data engineers and developers to seamlessly automate deployment and management of Airflow workflows, integrating best DevOps practices directly within the Fabric environment.

In Fabric, you have two tools to support CI/CD: Git integration and deployment pipelines. Git integration lets you connect to your own repositories in Azure DevOps or GitHub. Deployment pipelines help you move updates between environments, so you only update what’s needed.

With CI/CD support, you can enable faster iteration, maintain version control, and ensure reproducibility for your data orchestration pipelines—all while reducing manual effort and minimizing deployment risks.

To learn more about this feature, refer to the CI/CD for Apache Airflow in Data Factory in Microsoft Fabric documentation.

Copy job

Data Replication from Fabric Lakehouse with Delta Change Data Feed (Preview)

Copy job now supports the Fabric Lakehouse Table connector with native CDC support. This connector enables efficient, automated replication of changed data—including inserts, updates, and deletes—from a Fabric Lakehouse via Delta Change Data Feed (CDF) to supported destinations. With this enhancement, your destination data stays continuously up to date with no manual refreshes and no extra effort, making your data integration workflows more efficient and reliable.
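For readers curious what the change feed exposes underneath this experience, here is a minimal PySpark sketch of reading a Delta table’s Change Data Feed. The table name and starting version are hypothetical, and this illustrates only the CDF mechanism, not the Copy job connector’s internal implementation.

```python
# Minimal PySpark sketch of reading a Delta Change Data Feed (CDF): this shows the
# kind of change records (inserts, updates, deletes) that change-aware replication
# picks up. The table name and starting version are assumptions for the example.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

changes = (
    spark.read.format("delta")
    .option("readChangeFeed", "true")   # requires delta.enableChangeDataFeed = true on the table
    .option("startingVersion", 1)       # read changes committed since table version 1
    .table("lakehouse.dbo_orders")      # hypothetical Lakehouse table
)

# Each row carries the change metadata columns _change_type, _commit_version,
# and _commit_timestamp alongside the table's own columns.
changes.select("_change_type", "_commit_version", "_commit_timestamp").show()
```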

What this means

  • Replicate data changes seamlessly from your Lakehouse after processing is completed in OneLake.
  • Distribute updated data to supported destinations outside Fabric, such as SQL or Snowflake.
  • Save time and reduce complexity with automated, change-aware data movement.

This new CDC connector gives you the flexibility to keep downstream systems in multi-cloud environments in sync—ensuring your data is always accurate, timely, and ready for action.

To learn more, refer to the Change data capture (CDC) in Copy job documentation.

Connection Parameterization with Variables library for CI/CD (Preview)

Copy job now supports connection parameterization via the variable library! This powerful capability helps automate your CI/CD processes by externalizing connection values. With it, you can deploy the same Copy job across multiple environments while relying on the variable library to inject the correct connection for each stage, meaning you can seamlessly use different data stores for development, testing, and production—without modifying your Copy job each time.

Capabilities

  • Parameterize connection using variables stored in the variable library in Fabric.
  • Promote Copy job seamlessly across environments—for example, from Dev to Test to Production—without hardcoding or manually editing data store connections.
  • Centralize configuration management, reducing duplication with a unified approach that makes it easier to manage configurations consistently across different environments.

To learn more, refer to the CI/CD for Copy job in Data Factory documentation.

Merge data into Snowflake

You can now choose to merge changed data—including inserts, updates, and deletions—into Snowflake when the data originates from any CDC source connector, such as Azure SQL DB, SQL Server, SQL MI, or Fabric Lakehouse tables.

What’s more, with Storage Integration support in the Snowflake connector for Copy job, you gain enhanced security through a Snowflake-assigned role. This eliminates the need to expose sensitive credentials and allows you to implement more secure authentication methods when connecting to Azure Blob Storage.

For more information, check out the following resources: CREATE STORAGE INTEGRATION or Change data capture (CDC) in Copy job.

More Connectors, More Possibilities

More source and destination connections are now available, giving you greater flexibility for data ingestion with Copy job. We’re not stopping here—even more connectors are coming soon!

Newly supported connectors

  • Folder
  • REST
  • SAP Table
  • SAP BW Open Hub
  • Amazon RDS for Oracle
  • Cassandra
  • Greenplum
  • Informix
  • Microsoft Access database
  • Presto

Incremental copy now supported for more connectors

  • SAP HANA
  • MariaDB
  • MySQL
  • SFTP
  • FTP
  • Oracle cloud storage
  • Amazon S3 Compatible

Learn more from the What is Copy job in Data Factory documentation.

Simplified Copy Assistant, Powered by Copy job

Access the full power of the Copy job by selecting ‘Copy Assistant’ from a pipeline.
This streamlined experience makes it easier to configure and manage data movement within your workflows, eliminating the need for the parameterized ForEach loops and copy activities previously required for simple data copying. It also empowers you to benefit from all Copy job capabilities, including native incremental copy and change data capture (CDC).

Learn more from the What is Copy job in Data Factory documentation.

Connectivity

New connectors available in Data Factory

The addition of a wide range of new connectors in Fabric Data Factory expands the options available for various data integration scenarios.

These new connectors are now supported across Copy job, Copy activity, and Lookup activity in pipelines, giving you even greater flexibility to connect with diverse data sources.

Generally Available

  • Amazon RDS for Oracle (Bring your own driver)
  • Cassandra
  • Greenplum
  • HDFS
  • Informix
  • Microsoft Access database
  • Presto

With these additions, Dataflow Gen2 continues to grow as a powerful data preparation and transformation engine, designed to meet the needs of modern data integration. By expanding the connector portfolio, we’re giving you the flexibility to seamlessly connect to the tools and platforms your business depends on, while ensuring performance, reliability and security.

Generally Available

  • Snowflake 2.0

Preview

  • Google BigQuery 2.0
  • Impala 2.0
  • Netezza (Bring your own driver)
  • Vertica (Bring your own driver)
  • Oracle (built-in driver) – OPDG only

Learn more about the connector availability across Data Factory in the Connector overview documentation.

Salesforce & Salesforce Service Cloud connector: Partition auto detection

We are dedicated to enabling organizations to realize the complete potential of their data by providing integration solutions that are seamless, dependable, and scalable. To keep pace, we continuously ship connector innovations to simplify the developer’s experience and improve productivity.

Both the Salesforce and Salesforce Service Cloud connectors now support reading data with partitions. This enables users to pull data from Salesforce tables using multiple threads for significantly improved performance. Best of all, there’s no need to manually configure partition details. The connector intelligently detects and applies the optimal partitioning strategy.

This update greatly simplifies the integration experience while delivering faster throughput. The capability is available as an advanced setting in the Salesforce connector for the copy activity, and we highly recommend leveraging it for long-running copy tasks that can benefit from multi-threaded reads.

Learn more details about this feature in the Partition option settings for Salesforce Connector documentation.

Support upsert to delta table with Lakehouse connector (Preview)

The Fabric Lakehouse connector is one of the core connectors that empowers enterprises to bring data into Fabric OneLake. We’re continuously delivering innovations to simplify data ingestion.

The latest enhancement adds upsert support to the Lakehouse connector, allowing you to write directly to Delta tables. With upsert, you can efficiently manage incremental data loads and maintain consistency—without relying on complex workarounds.
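Conceptually, an upsert into a Delta table is a merge keyed on the columns you choose. The sketch below expresses that same semantics directly with the Delta Lake Python API, purely to illustrate what an upsert table action does to the destination table; the table, path, and key column names are hypothetical, and the connector performs the equivalent operation for you without any code.

```python
# Minimal sketch of upsert (merge) semantics on a Delta table, expressed with the
# Delta Lake Python API. Table, path, and column names are hypothetical.
from delta.tables import DeltaTable
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

incoming = spark.read.parquet("Files/incremental/orders")    # hypothetical staged increment
target = DeltaTable.forName(spark, "lakehouse.dbo_orders")   # hypothetical destination table

(
    target.alias("t")
    .merge(incoming.alias("s"), "t.order_id = s.order_id")   # key column chosen for the upsert
    .whenMatchedUpdateAll()                                   # update rows that already exist
    .whenNotMatchedInsertAll()                                # insert rows that are new
    .execute()
)
```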

This powerful capability is available in both Copy job and Copy activity within Pipeline.

Learn more about this feature in the Table action settings from Lakehouse Connector documentation.

Support delta column mapping and deletion vector with Lakehouse connector

The Lakehouse connector now supports delta column mapping and deletion vectors, further strengthening its ability to work seamlessly with delta tables in Fabric OneLake. With column mapping, you gain flexibility to handle schema evolution and align columns across different systems without manual adjustments.

Deletion vector support ensures that data operations remain accurate and consistent, even when rows are deleted or updated. These enhancements reflect our ongoing commitment to evolve the Lakehouse connector, with more delta table capabilities to help you manage and analyze your data even more effectively.

Learn more details about this feature in the Lakehouse connector documentation.

Support varchar(max) in table creation with Data Warehouse connector (Preview)

The Fabric Data Warehouse connector now supports the varchar(max) data type during table creation, giving you greater flexibility to handle large text data without truncation. Users can specify this data type in column mapping when creating tables, whether using Copy job or Copy activity in a pipeline, making it easier to ingest and store extensive text fields such as detailed logs, descriptions, or notes directly into your warehouse and streamlining data integration for real-world scenarios.

Learn more details about this feature in the Data warehouse connector documentation.

DB2 connector: support for specifying a package collection

The DB2 connector now allows users to specify a package collection directly within the pipeline, providing greater control and ensuring better alignment with DB2 database configurations. This capability is available as an advanced setting in copy activity, enabling more precise and efficient data integration.

Screenshot showing additional connection properties for source.

Learn more details about this feature in the DB2 connector documentation.

Snowflake connector: support for specifying a role

The Snowflake connector now supports specifying a role directly within your pipeline. This enables users to run the copy with the right level of access, helping them stay aligned with their organization’s security and governance practices without extra configuration steps. This is now available as an advanced setting in the copy activity.

Screenshot showing additional connection properties for source.

Learn more details about this feature in the Snowflake connector documentation.

Gateways

Virtual Network Data Gateway supports Fabric pipeline and Copy job (Generally Available)

You can now use Pipeline and Copy job with the Virtual Network (VNet) Data Gateway in Microsoft Fabric, enabling a secure, high-performance way to move data between private networks and Fabric without exposing it to the public internet. This integration is especially valuable for industries with strict compliance requirements—such as finance, healthcare, and government. It ensures data remains within private network boundaries, eliminating the complexity of VPNs and reducing the risks associated with public endpoints.

With end-to-end security, faster transfers via private endpoints or ExpressRoute, and a simple setup that works seamlessly with existing Pipeline and Copy job configurations, you can move your data with confidence, compliance, and speed.

To learn more, refer to the Use virtual network data gateway with pipeline in Fabric documentation.

Mirroring

Mirroring for Google BigQuery (Preview)

This new capability allows customers to continuously replicate BigQuery data into OneLake—Fabric’s unified data lake—with zero ETL.

With near real-time replication and native integration across the Fabric experience, Mirroring makes it seamless to bring BigQuery data into Fabric.

This unlocks the full power of Microsoft’s integrated suite for analytics, processing, and reporting—enabling you to derive insights faster and more efficiently than ever before.

To learn more and to get started, please reference the Mirroring for Google BigQuery (Preview) documentation.

Mirroring for Oracle (Preview)

This new capability empowers customers to continuously replicate data from their Oracle databases—including on-premises, Oracle OCI, and Exadata—directly into OneLake, Fabric’s unified data lake, with near real-time performance and zero ETL.

Mirroring for Oracle is natively integrated across the Fabric experience, making it seamless to bring Oracle data into Fabric. With rapid replication and instant availability in the data warehouse view, you can unlock the full power of Microsoft’s analytics, processing, and reporting suite—enabling faster, more efficient insights from your Oracle data.

Supported environments include Oracle on-premises, Oracle OCI, and Exadata. The setup is straightforward, with guidance available for enabling necessary database configurations and permissions.

To learn more and to get started, please reference our blog post on Mirroring for Oracle (Preview).

Mirroring for Azure SQL Managed Instance (Generally Available)

This milestone marks a significant step forward in our mission to provide seamless, near-real-time data replication capabilities, empowering you to derive maximum value from your SQL data with Microsoft’s unified data platform.

Mirroring for Azure SQL Managed Instance offers continuous data replication into OneLake, ensuring that your data remains current and readily accessible for advanced analytics and reporting needs without complex ETL processes.

To learn more and to get started, reference Mirroring Azure SQL Managed Instance (Preview) documentation.

Mirroring support for sources behind a firewall (VNET and OPDG)

With this release, organizations can now securely and efficiently replicate data from key sources – with general availability for Snowflake, Azure SQL Database, and Azure SQL Managed Instance – using either the On-Premises Data Gateway (OPDG) or the VNET Data Gateway. This ensures seamless data movement into OneLake, while maintaining robust security and compliance for your most critical workloads.

Whether your databases reside on-premises or within a virtual network, Mirroring in Fabric provides flexible connectivity options to enable encrypted, high-throughput, and low-latency connections – without exposing your data sources directly to the internet. This unlocks real-time analytics, reporting, and AI across your entire data estate.

And this is just the beginning: every new source supported by Database Mirroring will also support these gateway options, so you can expect even broader coverage in upcoming releases.

To learn more and to get started, reference the Mirroring Azure SQL Managed Instance (Preview) documentation.

Azure SQL Database mirroring now supports Workspace Identity authentication

You can now use Fabric Workspace Identity authentication to mirror your Azure SQL Database, in addition to the basic (username and password), organization account and service principal authentication options.

Workspace Identity authentication in connections leverages Microsoft Entra ID to provide seamless, secure access to data sources using your Fabric workspace’s managed identity. This modern authentication approach eliminates the need for storing credentials while providing fine-grained access control and comprehensive audit capabilities.

To learn more, refer to the Workspace identity and Tutorial: Configure Microsoft Fabric Mirrored Databases from Azure SQL Database documentation.

Developer tooling

Fabric VS Code extension (Generally Available)

We’ve heard your feedback and have added several new features and improvements to the Fabric VS Code extension. The extension offers enhanced capabilities for managing Fabric items, multi-workspace support, and direct integration with Fabric SQL databases—all within Visual Studio Code.

Key features

  • Programmatic Management: Manage Fabric items programmatically using item definitions, enabling scripting and file-based workflows. This makes it easier to edit in VS Code and publish your changes.
  • Git integration: You can clone your Git enabled workspace using the extension and use VS Code’s source control experience to work with your items and push the changes to the repository. Note that only Azure DevOps is supported for now.
  • Multi-Workspace Support: View and filter multiple Fabric workspaces simultaneously within the extension. You can easily switch tenants to work on workspaces and items across tenants.
  • SQL Database Integration: Directly open and work with Fabric SQL databases in VS Code with the SQL Server extension.

Figure: View and manage your workspaces and items in VS Code

Check out the documentation for What’s new in Microsoft Fabric extension (Generally Available) to learn more.

Extensibility

Lucid Data Hub’s Fabric Workload (Preview)

Lucid Data Hub has launched Agent Mart Studio as a workload in Microsoft Fabric, empowering business users to build and deploy AI agents directly on enterprise data in OneLake. This integration allows organizations to automate complex business processes with AI agents that leverage unique industry knowledge and are protected by contextual guardrails.

The Lucid Data Hub + Microsoft Fabric: Empowering Business Users with AI Agents blog post highlights how Lucid’s no-code environment enables non-technical users to create and customize agents for real-time insights and workflow automation, such as retail out-of-stock detection.

Check out the short Agent Mart Studio demo.

 

Statsig’s Experimentation Analytics on the Fabric Workload Hub (Preview)

Statsig Experimentation Analytics on Microsoft Fabric offers an integrated solution that empowers product teams to innovate and accelerate data-driven decision making by unifying experimentation, feature rollout, and impact analysis within the Microsoft Fabric ecosystem. It provides a frictionless, secure way to store and analyze experimentation data through a warehouse-native experience, with the ability to define custom metrics and to run rigorous statistical testing, analysis, and visualization of product and behavioral data stored in OneLake, without data movement or complex ETL pipelines.

Start using the workload from Statsig’s product page.

Closing

We hope that you enjoy the update! Be sure to join the conversation in the Fabric Community and check out the Fabric documentation to get deeper into the technical details.

As always, keep voting on Ideas to help us determine what to build next. We are looking forward to hearing from you!
