Microsoft Fabric Updates Blog

If you’ve been trying to keep up with everything shipping in Microsoft Fabric, this January 2026 round-up is for you—covering the biggest updates across the platform, from new AI-powered catalog experiences and OneLake governance improvements to enhancements in Data Engineering, Data Warehouse, Real-Time Intelligence, and Data Factory. If you haven’t already, make sure FabCon Atlanta is on your calendar (March 16–20, 2026)—it’s shaping up to be an epic week for the Fabric community.

Contents

Events & Announcements

Upgrade your skills and certifications before FabCon

Join live sessions with your favorite MVPs and other Fabric experts, and request a voucher to take the Fabric Analytics Engineer (DP-600) exam or the Fabric Data Engineer (DP-700) exam. Voucher supplies are limited.

Register for a live session.

Submit your request for a Fabric exam voucher.

Have you registered for FabCon Atlanta yet?

Join us from March 16-20, 2026, in Atlanta, GA! For the first time ever, SQLCon will be co-located with FabCon, bringing together the entire data community in one place.

Two Conferences. One pass. One Epic Week. Join us for the ultimate Microsoft Fabric, SQL, Power BI, Real-Time Intelligence, AI, and Databases community-led event.

Master SQL Server 2025 internals in the morning, dive into Fabric innovations in the afternoon, attend Power Hour before dinner and network with peers from both communities. The sessions you choose are totally up to you.

Register with code FABCOMM to save $200. Prices go up on February 14th.

General

Microsoft acquires Osmos to extend Fabric with agentic data engineering

Microsoft has acquired Osmos, an agentic AI data engineering platform designed to help simplify complex and time-consuming data workflows. Together, we’re accelerating the future of autonomous data engineering directly in Microsoft Fabric, helping customers turn data in OneLake into analytics- and AI-ready assets faster.

To learn more, read the acquisition announcement.

Fabric Platform

AI Auto-Summary for semantic models (Preview)

The Auto-Summary for semantic models is an AI-generated high-level summary that helps you quickly understand an item’s purpose and main characteristics without opening the item or reviewing its full metadata. It makes it easier to understand unfamiliar items and compare them directly in the OneLake catalog Explorer.

The summary is created based on the item’s metadata and structure. Users with the appropriate Copilot capacity and permission can generate the summary from the quick actions in the main explore tab or directly from the semantic model’s item details page. Each time you return to the Catalog, a new summary can be generated so that you always see the most up-to-date version.

After a summary is generated, you can generate another version, copy the text for use elsewhere, or provide feedback on the quality.

Parent-Child hierarchy in OneLake catalog

OneLake catalog now includes a clear Parent–Child item structure that makes it easier to understand how your data items relate to each other. Instead of showing everything in one flat list, the catalog now groups connected items together and displays them in the appropriate hierarchy.

For example, a Lakehouse appears with its autogenerated SQL Analytics Endpoint, and an Eventhouse appears with its related KQL DBs. You can easily view or hide these related items using the expand and collapse feature, giving you a cleaner view when you need it and more detail when you want it. This helps you quickly find what you need, reduces confusion between similarly named items, and makes it clearer which item you should connect to for each task.

Learn more from the OneLake catalog documentation.

Item Reference Variable Type (Preview)

The Variable Library in Microsoft Fabric continues to evolve, and one of the most impactful additions arriving with the upcoming release is the Item Reference variable type, a new way to reference Fabric items in a structured, resilient, and more secure way.

This feature simplifies managing configuration across different environments by allowing you to reference Fabric items directly instead of hard‑coding values as strings. We will also validate permissions to the referenced item, ensuring stricter governance on changing the values being used within variables.

Why this matters
The new Item Reference variable addresses these issues:

  • Stronger security: Variables can only reference items that the user is authorized to access.
  • Better clarity: Instead of obscure GUIDs in the UI, references clearly surface the item’s name, type, and location. For different value sets, simply choose the item in the appropriate workspace.
  • Improved reliability: Structured metadata eliminates issues caused by renaming items or manually editing IDs.
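To make the contrast concrete, here is a hypothetical Python sketch; the actual Variable Library schema is not shown in this post, so every field name and value below is purely illustrative:

```python
# Hypothetical illustration only: the real Variable Library schema may differ.

# Before: a hard-coded value stored as an opaque string.
old_variable = {
    "name": "SourceLakehouse",
    "type": "String",
    "value": "4f2a9c1e-0000-0000-0000-000000000000",  # which item is this?
}

# After: a structured Item Reference carrying meaningful metadata.
new_variable = {
    "name": "SourceLakehouse",
    "type": "ItemReference",
    "value": {
        "workspaceName": "Sales-Dev",   # location is explicit
        "itemName": "SalesLakehouse",   # human-readable name
        "itemType": "Lakehouse",        # type can be validated
    },
}

def describe(variable):
    """Render a human-readable label for a variable value."""
    value = variable["value"]
    if variable["type"] == "ItemReference":
        return f'{value["itemType"]} "{value["itemName"]}" in {value["workspaceName"]}'
    return str(value)

print(describe(new_variable))  # Lakehouse "SalesLakehouse" in Sales-Dev
```

The structured form is what makes permission validation and clear UI display possible: the reference carries enough metadata to identify the item without a reader having to resolve a GUID by hand.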

Better experiences across Fabric
The Fabric ‘Select variable’ dialog lets you choose a reference from only the items you’re permitted to access and surfaces helpful details such as item name, type, and location, so you can confidently select the right one. In addition, the experience of configuring it across Fabric, in UDFs, Lakehouse shortcuts, and other Fabric items, is unified through a single dialog where you can easily filter and search your variables.

Key points

  • You must have at least READ permission on an item to set it as a value.
  • When deploying or updating, Fabric validates that all referenced items in the target stage’s active value set exist, and that you have permission to access them.
  • Early support will be available in Lakehouse shortcuts, UDF code, and Notebook code (not supported in %%configure).
  • Upcoming items can be tracked in the documentation.

Looking ahead
Item reference is the first step toward unified, scalable configuration in Fabric. Next, we will bring Connection reference, a new variable type for managing external connections (AWS, Blob Storage, and more) with the same secure and consistent experience. Stay tuned!

Ready to get started?
The Item Reference variable type is more than “just another variable”; it is a foundational step toward a more maintainable, predictable, and scalable Fabric ecosystem. With its structured design, CI/CD-friendly behavior, and shared resolution model, it removes many long-standing pain points around configuration, item linking, and environment consistency. Explore the Variable Library in your Fabric workspace and try creating your first Item Reference variable today!

Git integration – enhanced support for GitHub Enterprise Cloud with data residency

Fabric workspaces can now connect to GitHub Enterprise Cloud instances with data residency (ghe.com), allowing regulated customers to use Microsoft Fabric – Git Integration.


To learn more, refer to the Microsoft Fabric and GitHub Enterprise Cloud with data residency support.

Git integration – Commit to standalone branch

We have introduced a highly requested flexibility feature that allows users to create a new branch from their last synchronization point and commit current changes to it in a single action.

In practice, this means:

  • No need to commit to the connected branch
  • Branch off on the fly without switching context
  • Commit to another branch even if there are incoming updates

This new functionality can be useful for scenarios like:

  • Handling conflict situations
  • Working in isolation
  • Backing up your current work on a specific branch
  • Sharing your work with others

To learn more, refer to the Microsoft Fabric – Get started with Git Integration – commit to standalone branch.

Python SDK for Microsoft Fabric REST API (Preview)

The Python SDK is now available on PyPI as microsoft-fabric-api, and you can get started today.

The Microsoft Fabric Python SDK is a client library that wraps the Fabric REST APIs, allowing developers to manage, automate, and interact with Microsoft Fabric resources directly from Python. Instead of hand-crafting HTTP requests and managing authentication manually, you can use idiomatic Python objects and methods to perform common tasks.

This SDK is currently in preview and is installed from PyPI as microsoft-fabric-api.

The Fabric REST APIs support automation of tasks such as:

  • Listing and managing workspaces
  • Automating deployment processes
  • Interacting with Fabric items programmatically
  • Integrating workflows into CI/CD and DevOps pipelines

The new SDK offers:

  • Built-in support for authentication via Azure identity libraries
  • Typed API clients for REST endpoints
  • Simplified serialization/deserialization
  • Higher-level helpers that make script automation easier

Install the SDK

You can install the SDK using pip:

Bash

pip install microsoft-fabric-api

This will grab the latest public preview release from PyPI.

Getting Started

The following is a quick example to help you start exploring Fabric resources.

Step 1: Authenticate

The SDK uses Azure identity libraries (such as DefaultAzureCredential) to get tokens automatically. Make sure you’re logged in via Azure CLI or have environment credentials set.

python

from azure.identity import DefaultAzureCredential
from microsoft_fabric_api import FabricClient

# Create your credential (Azure CLI login, managed identity, etc.)
credential = DefaultAzureCredential()

# Initialize the client
fabric_client = FabricClient(credential)

Step 2: List Workspaces

Once authenticated, you can call Fabric REST APIs through the SDK. For example, to list workspaces you have access to:

python

workspaces = fabric_client.core.workspaces.list_workspaces()

print(f"Found {len(workspaces)} workspaces:")

for ws in workspaces:
    print(f"- {ws.display_name} (Capacity ID: {ws.capacity_id})")

This example demonstrates how straightforward it is to work with Fabric resources using Python.

OneLake

Granular APIs for OneLake security (Preview)

Microsoft Fabric is introducing new granular REST APIs for security role management, giving developers finer control over how OneLake permissions are created, retrieved, and managed programmatically. In addition to the existing batch role API, Fabric now supports discrete Get, Create, and Delete role operations, enabling users to work with individual roles without submitting a full role collection.

These new APIs make it easier to build automation‑friendly security workflows and integrate Fabric security into CI/CD pipelines. By combining bulk and granular role management options, Fabric offers greater flexibility for organizations managing security at scale—reinforcing our commitment to open, developer‑first, and interoperable security in OneLake.

API swagger listing the GET dataAccessRoles/{roleName}, POST dataAccessRoles/, and DELETE dataAccessRoles/{roleName} endpoints.

Granular APIs to get, create, and delete a single role.

Check out the OneLake Data Access Security documentation.
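As a rough sketch of how the discrete operations compose, the helper below builds the method and URL for each role call. The base path follows the usual Fabric REST API layout but is an assumption here, as are the workspace and item IDs; in a live script you would send these requests with an HTTP client and a bearer token obtained via azure.identity.

```python
# Sketch only: the base path is assumed from the standard Fabric REST layout.
BASE = "https://api.fabric.microsoft.com/v1"

def role_request(workspace_id, item_id, role_name=None, action="get"):
    """Compose (method, url) for a single data access role operation."""
    root = f"{BASE}/workspaces/{workspace_id}/items/{item_id}/dataAccessRoles"
    if action == "create":
        return ("POST", root)                      # POST dataAccessRoles/
    if action == "delete":
        return ("DELETE", f"{root}/{role_name}")   # DELETE dataAccessRoles/{roleName}
    return ("GET", f"{root}/{role_name}")          # GET dataAccessRoles/{roleName}

method, url = role_request("ws-1", "lh-1", "Readers", action="delete")
print(method, url)
```

Because each operation targets a single role, automation no longer needs to fetch and resubmit the full role collection just to change one entry.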

OneLake security support for Mirrored item types

Microsoft Fabric now supports defining OneLake data access roles on all mirrored item types, extending granular, role-based security to data replicated into OneLake from transactional systems. With this update, customers can control access to mirrored data using table-, row-, or column-level security, ensuring permissions are enforced consistently at the OneLake layer regardless of how the data is consumed.

A user managing OneLake security roles for their mirrored database item.

Manage data access roles in your mirrored database.

By attaching security directly to mirrored data in OneLake, this release enables secure reuse across teams through shortcuts and downstream analytics experiences.

Organizations can mirror data once, apply fine-grained access controls at the source, and confidently share data without duplication—simplifying governance while scaling analytics across Fabric.

You can learn more about OneLake security in the documentation.

OneLake diagnostics immutable logs

OneLake diagnostic events can now be made immutable, which means that the JSON files that contain diagnostic events can’t be tampered with, or deleted, during the immutability retention period.

To learn more, please refer to OneLake diagnostics documentation.

Data Engineering

High Concurrency mode for Lakehouse operations

High Concurrency mode for Lakehouse operations in Microsoft Fabric is a new capability designed to dramatically optimize Spark resource utilization for common tasks like Load to Table and Preview.

Previously, when these operations fell back to Spark execution—especially in environments with Managed Virtual Networks—users often faced significant startup latency of three to five minutes per job or found that a single preview operation could hold a session for up to 20 minutes, blocking concurrency on smaller capacities.

High Concurrency mode solves this by allowing up to five independent Lakehouse jobs triggered by the same user within the same workspace to share a single, underlying Spark session.

This innovative approach delivers immediate improvements in both performance and cost-efficiency. By reusing existing sessions, subsequent table loads or previews can start in under five seconds, even with network security features enabled.

Furthermore, this mode offers a significant price-performance advantage: only the initiating Spark session that starts the shared application is billed, meaning subsequent operations sharing that session incur no additional compute costs.

These optimized sessions are automatically managed by Fabric and are easily traceable in the Monitoring hub via the HC_<lakehouse_name> naming convention, ensuring you have full visibility into your accelerated workflows.

To learn more, refer to the High concurrency mode for Lakehouse operations in Microsoft Fabric documentation.

Fabric connection inside Notebook (Preview)

With this update, notebooks now offer the familiar Get Data feature, making it simpler and safer for users to access data from frequently used sources like Azure Blob Storage, PostgreSQL, Azure Key Vault, and S3.

This update supports connections to cloud data sources. If you need to access on-premises data sources, please use the Managed Private Endpoint option.

Create a Fabric connection inside a notebook with the new “Connection” flow. Once the connection is ready, you can generate a code snippet to access the underlying data source, and apply the connection’s credential details to other data sources where applicable.

Create a Fabric connection with the built-in flow inside a notebook. Users choose the data source and provide the authentication details to create the connection. The connection is attached to the notebook after this flow.

Create Fabric connection inside Notebook.

Generate a Python code snippet from the connection. The credential details are retrieved from the connection and used to query the data source that the connection is set up for.

Generate code snippet to query data source.

The supported authentication types include:

  • Basic Authentication: Supported for Azure SQL Database and other databases that support basic authentication
  • Account Key Authentication: Supported for REST API data sources that require Account key authentication
  • Token Authentication: Supported for data sources that require token-based authentication
  • Workspace Identity Authentication: Supported for Fabric workspace identity authentication
  • Service Principal Authentication (SPN): Supported for data sources that require SPN-based authentication

You can also create the connection inside the Fabric data source management page, but you need to ensure the toggle named Allow Code-First Artifacts like Notebooks to access this connection (Preview) has been enabled. This toggle can only be set during the creation of the connection and can’t be modified later.

Enabling this toggle allows the connection to be used inside code-first items such as notebooks. The connection can be listed and used inside a notebook only when this toggle is selected.

Allow code-first artifact like Notebooks to access this connection.

After creating the connection, it appears under Global permissions, ready to link to the notebook. Select Connect in the context menu to link this connection to the notebook.

To learn more, refer to the Fabric connection inside Notebook documentation.

Open and edit workspace’s Notebook inside VS Code

Users can now directly open and edit Fabric notebooks within the VS Code editor. Previously, the Fabric Data Engineering VS Code extension enabled notebook development in VS Code only after downloading the notebook. With this update, the notebook can be accessed and edited from the selected remote workspace, and any changes saved in VS Code automatically update the content in the remote workspace.

To open the notebook, select the Open Notebook Folder icon located on the toolbar of the desired Notebook.

Open the notebook folder in VFS (Virtual File System) mode. With this view, the notebook content opens in the VS Code Explorer view without downloading it to the local desktop.

Direct Open Notebook

Once activated, the VS Code Explorer View opens the selected Notebook and shows the Fabric workspace with its notebooks, including the one currently open. With this new view, you can open multiple notebooks from the same Fabric workspace, and even open multiple different Fabric workspaces.

Multi-Notebook and Multi-workspace view

Open multiple Fabric workspaces and their notebooks inside the same VS Code window.

Create or replace semantics support for Materialized Lake View

Materialized Lake Views now support create or replace semantics, making it significantly easier to evolve data models as business requirements change. This enhancement allows users to update an existing Materialized Lake View—whether that involves adding or modifying columns, adjusting transformation logic, or updating metadata—without the need to drop and recreate the view. As a result, it becomes faster to iterate while avoiding the operational risks and disruptions that typically come with deleting and rebuilding core analytic objects.

For more information, refer to the Spark SQL Reference for Materialized Lake Views documentation.

Lineage enhancements in Materialized Lake views

Lineage for materialized lake views now clearly shows the Notebook Source, making it simple to trace each view to its origin. Deleted sources are also flagged for quicker troubleshooting and more reliable refresh scheduling.

Learn more about this feature in the materialized lake views documentation.

Data Warehouse

Proactive statistics refresh

Proactive statistics refresh for the Data Warehouse and Lakehouse SQL Analytics Endpoint is a built-in optimization that enriches the automatic management of vital column statistics. With this feature enabled, column statistics that were created during SELECT queries may now be updated by the engine proactively as their data changes. This reduces the likelihood of a query being prolonged by statistics maintenance during plan generation, thus reducing query execution time. Now more than ever, there are fewer reasons to maintain statistics manually.

See the statistics documentation for more information.

Incremental statistics refresh

Incremental statistics refresh is a performance enhancement in Data Warehouse and Lakehouse SQL Analytics Endpoint that improves the execution time of certain column statistics updates. Columns in large tables that have seen mostly INSERT or ADD operations since the last statistics refresh are eligible for incremental refresh. Because statistics operations can contribute to a SELECT query’s execution time, queries over these types of tables benefit most from this improvement.

A “before” visualization showing that the entire column segment is re-sampled for statistic refreshes, and an “after” visualization showing that now only the newly added rows are sampled for statistic refreshes.

Incremental statistics refresh is a quicker mode of automatically updating statistics, compared to before.

See the statistics documentation for more information.
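Conceptually, the difference is the amount of data scanned per refresh. The following Python sketch tracks only min/max/count purely for illustration; the engine’s real statistics are richer than this, but the scan-cost asymmetry is the same:

```python
# Conceptual sketch only: real warehouse statistics are not simple min/max/count.

def full_refresh(rows):
    """Re-scan the entire column (the 'before' behavior)."""
    return {"min": min(rows), "max": max(rows), "count": len(rows)}

def incremental_refresh(stats, new_rows):
    """Fold in only the rows appended since the last refresh."""
    return {
        "min": min(stats["min"], min(new_rows)),
        "max": max(stats["max"], max(new_rows)),
        "count": stats["count"] + len(new_rows),
    }

existing = list(range(1_000_000))      # already covered by current statistics
appended = [1_000_000, 1_000_001]      # rows INSERTed since the last refresh

stats = full_refresh(existing)               # scans 1,000,000 values once
stats = incremental_refresh(stats, appended) # scans only the 2 new values
print(stats)
```

An insert-mostly table therefore pays a refresh cost proportional to the new rows, not to the full table size.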

Result set caching (Generally Available)

In 2025, we introduced result set caching (Preview) for Data Warehouse and Lakehouse SQL Analytics Endpoint. This feature expedites repetitive queries by quickly returning cached results instead of recomputing original queries from scratch. This out-of-the-box performance booster is now generally available and enabled by default. No tuning or configuration is required. Enjoy the benefits of result set caching today!

View of the query editor in Microsoft Fabric, where a T-SQL query is being run with result set caching enabled. The query completes quickly in about 1.5 seconds, and the message “Result set cache was used” is visible in the Message Output, to indicate that result set caching was applied.

Result set caching is now enabled by default, boosting eligible queries whenever possible

To learn more about this feature, see the result set caching documentation.

MERGE Transact-SQL (Generally Available)

In September 2025, the MERGE statement was released for preview in Data Warehouse. This command provides a standardized approach to transforming your data by incorporating conditional logic and DML actions all within a single statement.

Graphical visualization of multiple Data Modification Language statements happening one by one: Deleting one row from a table, then Inserting two rows to a table, then Updating one row in a table. Then another visual below that showing all 3 operations happening at once with the MERGE command.

MERGE encapsulates INSERTs, UPDATEs, and DELETEs all within a single statement.

For more information, see MERGE (Transact-SQL) – SQL Server.
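To see what MERGE encapsulates, here is a conceptual Python sketch of the matched/not-matched logic. The real statement is T-SQL; the key and column names here are made up for illustration only:

```python
# Conceptual sketch of MERGE semantics; tables are modeled as {id: qty} dicts.

def merge(target, source):
    """Upsert source rows into target keyed by id; delete rows flagged in source."""
    for row in source:
        key = row["id"]
        if key in target:
            if row.get("deleted"):
                del target[key]           # WHEN MATCHED ... THEN DELETE
            else:
                target[key] = row["qty"]  # WHEN MATCHED THEN UPDATE
        elif not row.get("deleted"):
            target[key] = row["qty"]      # WHEN NOT MATCHED THEN INSERT
    return target

target = {1: 10, 2: 20}
source = [
    {"id": 1, "qty": 15},        # existing row: update
    {"id": 2, "deleted": True},  # existing row: delete
    {"id": 3, "qty": 30},        # new row: insert
]
print(merge(target, source))  # {1: 15, 3: 30}
```

In T-SQL, all three branches run as a single atomic statement against the target table, which is the key advantage over issuing separate DELETE, UPDATE, and INSERT statements.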

Real-Time Intelligence

MQTT v3 support for the Eventstream MQTT Connector

We now support MQTT v3 in the Eventstream MQTT Connector, making it compatible with widely used industry protocols.

With this update, the connector now supports MQTT v3.1 and v3.1.1, making it easier than ever to stream data from popular IoT platforms and MQTT brokers directly into Eventstream. Once ingested, the data can be immediately leveraged by Fabric Real-Time Intelligence for real-time analytics and alerting, enabling teams to detect patterns and act on IoT events as they happen.

By bridging MQTT v3 and Eventstream, teams can take full advantage of Eventstream’s scalability, reliability, and real-time processing capabilities—without modifying their existing broker setup. This update significantly lowers the barrier to adoption and empowers organizations to build robust, event-driven streaming pipelines with confidence.

For more details, visit Add MQTT source to an eventstream.

Real-Time Weather Connector for Eventstream (Generally Available)

Bring production-ready weather data streaming to your Fabric environment for real-time analytics. As part of this release, we’ve added several key enhancements based on early user feedback. You can now include a location name directly in the event payload, making processing, filtering, and analytics significantly easier. This removes the need for additional enrichment steps and simplifies stream processing logic.

We’ve also introduced a tenant-level control switch, allowing tenant admins to enable or disable the Weather Connector for the entire organization. This gives teams better governance, cost control, and operational clarity when managing Eventstreams with weather data.

These features are now fully supported and ready for mission-critical use cases, from real-time analytics to alerting and dashboards.

We look forward to seeing your streaming workflows. For more information, refer to Add a real-time weather source to an eventstream.

Eventhouse accelerated OneLake shortcuts now support acceleration based on date-time columns

Accelerated OneLake shortcuts in Eventhouse index and cache data in OneLake, allowing performant queries on top of Delta/Iceberg tables. By default, the system uses the modificationTime in the delta_log to determine the scope of data to accelerate.

We are adding a new property, HotDateTimeColumn, to the query acceleration policy to specify the name of the datetime column in the Delta table whose values determine hot-cache eligibility. When set, data files whose rows have values within the configured Hot period (and/or HotWindows) are selected for caching. You can override the default behavior by changing the query acceleration policy.

For more information, refer to the .alter-merge query acceleration policy command documentation.

Eventhouse accelerated OneLake shortcuts control data freshness latency

The MaxAge property in the query acceleration policy controls data freshness. The shortcut returns accelerated data if the last index refresh time is greater than @now – MaxAge; otherwise, the shortcut operates in non-accelerated mode. The default for MaxAge is 5 minutes.

You can now override the default MaxAge at query execution. Users can fine-tune acceleration scope dynamically, balancing freshness and performance without altering policy definitions.

Example: This example overrides MaxAge to 10 seconds at query time.

external_table(TableName, 10s)

If you use this override, the external table returns accelerated data if the last index refresh time is greater than @now – MaxAgeOverride. The minimum value is 1s.

For more information, refer to the .alter query acceleration policy command documentation.
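The freshness rule above can be sketched in Python for illustration (the real check happens inside the Eventhouse engine, not in client code):

```python
from datetime import datetime, timedelta, timezone

def use_accelerated(last_index_refresh, max_age, now=None):
    """Return True when the accelerated index is fresh enough to serve."""
    now = now or datetime.now(timezone.utc)
    return last_index_refresh > now - max_age

now = datetime(2026, 1, 15, 12, 0, 0, tzinfo=timezone.utc)
refreshed = now - timedelta(seconds=30)  # index last refreshed 30 seconds ago

# Default policy (MaxAge = 5 minutes): a 30-second-old index is fresh enough.
print(use_accelerated(refreshed, timedelta(minutes=5), now))   # True

# Query-time override (MaxAge = 10 seconds): falls back to non-accelerated mode.
print(use_accelerated(refreshed, timedelta(seconds=10), now))  # False
```

Tightening MaxAge at query time trades the performance of the accelerated path for stricter freshness guarantees, without editing the policy itself.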

Simplified KQL syntax for querying shortcuts in Eventhouse

Eventhouse shortcuts can now be queried just like Tables, eliminating the need for explicit external_table() syntax, making queries cleaner and more intuitive. This simplifies external data access by allowing direct querying of external tables using standard table names.

Example: sample 10 rows from shortcut T using the following syntax: T | take 10

Copilot support for querying shortcuts in Eventhouse

Copilot in KQL Queryset and Real-Time Dashboards can now generate KQL for shortcuts.

In Eventhouse, shortcuts are implemented as external tables. While querying shortcuts was not previously available with Copilot, this functionality is now supported, allowing users to query shortcuts in the same manner as native tables in Eventhouse.

Data Factory

More connectors supported for incremental copy in Copy job

We are expanding Copy job with broader multi‑cloud connectivity and stronger incremental copy support. Incremental copy in Copy job now supports additional connectors, including:

  • Google BigQuery
  • Google Cloud Storage
  • DB2
  • ODBC
  • Fabric Lakehouse table
  • Folder
  • Azure Files
  • SharePoint List
  • Amazon RDS for SQL Server
  • Amazon RDS for Oracle
  • Azure Data Explorer

To learn more, please refer to the Copy job in Data Factory documentation.

