Fabric June 2025 Feature Summary
Welcome to the June 2025 update.
The June 2025 Fabric update introduces several key enhancements across multiple areas. Power BI celebrates its 10th anniversary with a range of community events, contests, expert-led sessions, and special certification exam discounts. In Data Engineering, Fabric Notebooks now support integration with variable libraries in preview, empowering users to manage configuration values centrally for improved modularity and scalability.
Additional updates span Data Science, Data Warehouse, Real-Time Intelligence, and Data Factory, with new features such as upgraded AI functions, enhanced real-time data capabilities, and improvements to data ingestion and security. These updates collectively aim to streamline workflows, boost performance, and foster greater collaboration across teams.
Contents
- Notebooks integration with variable libraries (Preview)
- Notebook Copilot auto-completion (Preview)
- Notebook: T-SQL Notebook with monitoring improvement (Generally Available)
- Notebook: T-SQL and Python code against SQL DW (Preview)
- Notebook: Version history (Generally Available)
- Notebook creation dialog
- Materialized Lake Views
- Data Science
- Data Warehouse
- Real-Time Intelligence
- Ramp-up on real-time updates!
- Query Azure Monitor data in KQL Queryset
- Use Copilot to write queries for Real-Time Dashboards
- Eventstream SQL Operator
- Unlocking multi-schema support: Eventstream enhances data transformation flexibility
- Enhanced data ingestion: Eventstream’s streaming connector supports decoding data with Confluent Schema Registry
- Securely connect to Eventstream with Managed Private Endpoint (Generally Available)
- Data Factory
Events & Announcements
Power BI is turning 10! Come celebrate with us!
Over the past decade, Power BI has grown from an idea into a global community of millions, helping people everywhere turn data into action. We’re so grateful for your passion, ideas, and energy. You are what makes Power BI special.
Join us for the #PBI10 dataviz contest, expert-led sessions, discounted certification exam vouchers for DP-700, DP-600 and PL-300, and more to mark this milestone. View the full schedule of events.
The Microsoft Fabric Community Conference is back x2! Join us in Vienna and Atlanta!
The Microsoft Fabric Community Conference is back for its third year! We are excited to announce that #FabCon is happening again, in Atlanta, Georgia! Mark your calendars for March 16-20, 2026.
Don’t miss out. Register here and use code MSCATL for a $200 discount on top of current Super Early Bird pricing!
Don’t want to wait until March? Join us at FabCon Vienna from September 15-18, 2025.
Register and save €200 with code FABCOMM.
Data Engineering
Notebooks integration with variable libraries (Preview)
We are pleased to announce that Variable libraries have now been integrated with Fabric Notebooks and are available for preview. This new capability makes your Notebooks more modular, scalable, and environment-aware by letting you manage configuration values centrally—without modifying your code.
Why This Matters
Variable libraries let you avoid hardcoding values in your notebook code: the notebook references values stored in the library, and you update them there instead of editing the code. This approach simplifies the reuse of code across teams and projects through a centrally managed library.
Using Variable Libraries in Notebooks
Fabric offers two simple methods to access values in Variable libraries. Following are examples using the new APIs available in Preview:
Access your Variable Library in code using getLibrary():

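For example, a Python cell along the following lines reads values from a library (a minimal sketch, assuming the notebookutils.variableLibrary helper available in Fabric notebooks; the library and variable names are illustrative):

# Load the Variable Library as an object and read one of its variables.
config = notebookutils.variableLibrary.getLibrary("MyVariableLibrary")
print(config.storage_account)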
Example
Consider a scenario where you are reading a file from a Lakehouse, but the name of the Lakehouse differs between Development, Test, and Production environments. This is where Variable libraries can be beneficial:
Step 1: Define the variable in your Variable Library

Step 2: Retrieve the variable library from the Notebook
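A sketch of what this step can look like in Python, assuming the same notebookutils.variableLibrary helper (replace the placeholder with your library's name):

# Retrieve the Variable Library that holds the Lakehouse name for the current environment.
# The library is assumed to define a variable called Lakehouse_name (see Step 1).
VariableLib = notebookutils.variableLibrary.getLibrary("<VariableLibraryName>")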

Step 3: Build your file path dynamically using the variable
file_path = f"abfss://<WorkspaceName>@onelake.dfs.fabric.microsoft.com/{VariableLib.Lakehouse_name}.Lakehouse/Files/<FileName>.csv"
df = spark.read.format("csv").option("header","true").load(file_path)
display(df)

Step 4: Code deployment
When the Notebook is deployed across Development, Test, and Production, its code references the Lakehouse listed in the active value set of each environment.
Try Variable libraries in your Notebooks and share your feedback!
Notebook Copilot auto-completion (Preview)
Now in preview, Copilot Inline Code Completion is an AI feature that helps data scientists and engineers write Python code more quickly and easily.
Inline code completion makes your workflow smoother by helping you:
- Write code faster with smart, context-aware suggestions.
- Reduce errors and improve overall code quality.
- Learn new libraries without breaking the flow.
- Stay focused by cutting down on trips to the documentation.
How to use it
Before using inline suggestions, enable Copilot completions in your notebook, using the toggle switch at the bottom of the notebook.

- As you type, the system references relevant content from your Notebook.
- It analyzes the preceding Python code to generate relevant completions.
- Suggested completions appear in light gray text; press Tab to accept, or keep typing to modify or dismiss them.
- The AI continues to learn and improve based on user patterns and feedback.

Inline Code Completion in Fabric Notebooks
The Copilot Inline Code Completion feature for Python is now accessible in Fabric Notebooks. Suggestions are generated automatically from the content of preceding cells: enable the feature in your notebook and start coding, and completions will appear in light gray text as you type, ready to accept with Tab or modify as needed.
To learn more, check out the blog post on: Improving productivity in Fabric Notebooks with Inline Code Completion (Preview)
Notebook: T-SQL Notebook with monitoring improvement (Generally Available)
The T-SQL Notebook public preview launched last September, and we’ve seen a fantastic increase in usage. Today, we are excited to announce the general availability of this feature. Besides the capabilities released during the public preview, we have also enhanced the monitoring experience of the T-SQL notebook by updating the Recent run panel and adding a dedicated T-SQL panel that lists the query history.

For each query, the following details are provided:
| Column name | Description |
| --- | --- |
| Distributed statement Id | Unique ID for each query |
| Query text | Text of the executed query (up to 8,000 characters) |
| Submit time (UTC) | Timestamp when the request arrived |
| Duration | Time it took for the query to execute |
| Status | Query status (Running, Succeeded, Failed, or Canceled) |
| Submitter | Name of the user or system that sent the query |
| Session Id | ID linking the query to a specific user session |
| Default warehouse | Name of the warehouse that accepted the submitted query |
To learn more, explore the documentation.
Notebook: T-SQL and Python code against SQL DW (Preview)
Integrating T-SQL and Python in contemporary data workflows provides a robust and versatile methodology that leverages the advantages of both programming languages. We are excited to announce the public preview of the T-SQL magic command in Microsoft Fabric Python notebooks. This feature enables the execution of T-SQL code directly within Python notebooks, with full syntax highlighting and code completion. Users can write T-SQL code in a Python notebook, and it will be executed just like a T-SQL cell.
Data engineers can use this feature to combine T-SQL with Python Notebooks. It allows running DDL/DML against a Fabric Data Warehouse or SQL Database and read-only queries against a Lakehouse SQL analytics endpoint. The result set of the query can be converted into a pandas DataFrame, to which you can apply further transformations with Python code in the following code cell of the same notebook.
This functionality is available in Fabric Python notebooks, so the notebook’s language setting must be configured to Python.
To enable the T-SQL magic command in your Fabric notebook, place the `%%tsql` magic command at the beginning of your cell. This tells the notebook that the code in that cell should be treated as T-SQL code.

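The cell might look like the following (a hedged sketch built from the parameters described below; the warehouse, schema, and table names are illustrative):

%%tsql -artifact MyWarehouse -type Warehouse -bind df1
SELECT TOP 10 OrderId, OrderDate, TotalAmount
FROM dbo.SalesOrders
ORDER BY OrderDate DESC;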
In this example, we are using the T-SQL magic command to query a Fabric Data Warehouse. The `-artifact` parameter specifies the name of the Data Warehouse to use, the `-type` parameter specifies the type of the artifact (in this case a Warehouse), and the `-bind` parameter specifies the name of the variable to bind the results of the T-SQL query to. The results of the query are stored in a Python variable called `df1`; if you need to apply any transformation to `df1`, you can do so with Python code in the next cell. To use a warehouse from a different workspace, add the `-workspace` parameter; without it, the notebook uses the current workspace.
To see the full syntax, you can use the `%%tsql -?` command. This displays helpful information for the T-SQL magic command, including the available parameters and their descriptions.
Notebook: Version history (Generally Available)
The Fabric notebook history feature is now generally available. This feature is intended to enhance your experience in developing and managing notebooks by offering built-in version control capabilities. To begin, access the History control located in the upper right section above the Notebook ribbon:

Review your Notebook’s recorded history or add manual checkpoints.
Highlights
- Automatic Checkpoints – These checkpoints are created automatically every 5 minutes based on the editing time, ensuring that your work is consistently saved and versioned.
- Manual Checkpoints – You can manually create checkpoints to record your development milestones, providing flexibility in how you manage your notebook versions.
- Track History of Changes – Users can now view a list of previous notebook versions and see what changes were made, by whom, and when.
- Compare Different Versions – Easily compare different versions of a notebook through a diff view to understand the evolution of your work.
- Restore Previous Versions – If you make a mistake or want to explore a different approach, you can restore previous versions of your notebook or save a new copy of it.
For a more in-depth look at this feature, refer to the documentation on How to use Notebooks.
Notebook creation dialog
Creating a new notebook should be quick and easy, but also flexible. Instead of automatically generating a notebook with a default name in the selected workspace, we’ve made the process more user-friendly.
When you create a brand-new Notebook, you’ll now be asked to:
- Name your Notebook for easy identification later.
- Select the desired location for saving: You can designate the specific workspace and folder to store the Notebook.
After entering the name and selecting the location, the Notebook is created and available for use.

You can also pick a task in task flow for your new Notebook.

This small change helps keep your workspace tidy, makes collaboration smoother, and lets you build clear, organized Notebooks from the start.
Materialized Lake Views (Preview)
Materialized Lake Views, an advanced feature in Microsoft Fabric designed to streamline the deployment and management of the medallion architecture in Lakehouse, is now available in preview across all regions. This feature allows users to define SQL-based Materialized Lake Views over raw or intermediate staged data, which are maintained automatically by the system, thereby eliminating the need for manual orchestration and dependency management. Additionally, this feature includes intuitive monitoring capabilities with visual maps of data lineage, pipeline run monitoring, immediate issue detection, and tracking of data quality trends. Furthermore, it supports alerting based on the defined data quality conditions.
Built on enhanced Spark SQL syntax, they make it easier for data engineers and analysts to build scalable, maintainable data layers with minimal operational overhead.
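To give a flavor of what a definition can look like, here is a hedged Spark SQL sketch (the exact syntax and options are covered in the documentation linked below; the schema, view, and table names are illustrative):

CREATE MATERIALIZED LAKE VIEW IF NOT EXISTS silver.cleaned_orders
AS
SELECT order_id, customer_id, CAST(order_ts AS DATE) AS order_date, amount
FROM bronze.raw_orders
WHERE amount IS NOT NULL;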
To learn more, refer to the Overview of Materialized Lake Views documentation.
Data Science
Upgrades to AI functions for better performance and lower costs
Since the initial release of AI functions, we’ve continued iterating on the feature in response to your feedback. The latest round of updates makes AI functions easier to use, more powerful, and more cost-effective.
- We’ve upgraded the default model that powers AI functions to GPT-4o-mini, enhancing the feature’s intelligence and effectively reducing its price. With new optimizations to the system prompts, you can expect better performance and more accurate results from each of the eight AI functions, all while saving Fabric capacity.
- The library that includes AI functions is now preinstalled on the Fabric 1.3 runtime. That means you can get started even faster than before, skipping the previously required installation steps. If you’d like to preview the most current version of AI functions, we keep the code to install the very latest library in the Transform and enrich data seamlessly with AI functions (Preview) documentation.
- If you don’t need the weight of Spark, you can now use AI functions on pandas DataFrames in pure Python Notebooks. The functionality is the same, letting you harness GenAI for data preparation with even more flexibility. Ensure that the necessary OpenAI and SynapseML dependencies are installed; see the sketch after this list for what this can look like.
- The syntax for AI functions is so straightforward that you may not need our documentation for long. But Fabric Notebooks now include a simple interface that generates the code for you. Select a function, pick an active pandas or Spark DataFrame, enter the required inputs, and we’ll handle the rest.
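As a flavor of the pandas experience mentioned above, here is a minimal sketch; it assumes the ai accessor pattern from the AI functions documentation and that the AI functions library and its dependencies are available in your notebook, and the DataFrame contents and column names are purely illustrative:

import pandas as pd

# A small pandas DataFrame of free-text reviews.
df = pd.DataFrame({"reviews": [
    "The product arrived on time and works great.",
    "Terrible packaging, and the item was damaged.",
]})

# Score the sentiment of each review with an AI function and store the result.
df["sentiment"] = df["reviews"].ai.analyze_sentiment()
display(df)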

Data Warehouse
Result set caching (Preview)
Result set caching is in preview for Data Warehouse and Lakehouse SQL analytics endpoint. When enabled, this feature improves the performance of repetitive SELECT SQL queries by retrieving the cached result of the query’s first run, rather than re-processing the original query from scratch. If you have a reporting scenario that frequently issues the same SELECT queries to your SQL Endpoint or Data Warehouse, you’ll want to enable Result Set Caching and give this performance optimization a try!
To learn more, refer to the Result set caching (Preview) documentation.
Clarification: During preview, Result Set Caching is opt-in and must be enabled by the user on their artifact(s). Once enabled, it applies automatically whenever possible.
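For reference, enabling the feature amounts to a single T-SQL statement of the following shape (a hedged sketch only; confirm the exact syntax against the linked documentation, and note that the warehouse name is illustrative):

-- Enable result set caching on a Data Warehouse or SQL analytics endpoint.
ALTER DATABASE MyWarehouse SET RESULT_SET_CACHING ON;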
Real-Time Intelligence
Ramp-up on real-time updates!
Register for a free, hands-on instructor-led Real-Time Intelligence in a Day workshop covering the capabilities of Real-Time Intelligence. This is an intermediate-level training course designed for users across all roles.
Query Azure Monitor data in KQL Queryset
We’re excited to share a major usability improvement in KQL Queryset in Microsoft Fabric: it’s now simpler than ever to query Azure Monitor data.
While support for Azure Monitor resources has been available behind the scenes, it previously required manually configuring a connection string. With the latest update, we’ve made it easy and accessible to everyone.

You can now connect to Azure Monitor data directly from KQL Queryset using a built-in connection string builder. This means that there is no need to preconfigure a connection—just provide the Azure resource parameters (like subscription, resource group, and workspace), and we’ll handle the rest.

KQL Queryset continues to let you run queries in the context of any supported data source. Now, your observability data becomes immediately actionable.
It is also worth noting that we are introducing a new, friendly, and simplified UI for connecting the first data source to a newly created KQL Queryset:

To learn more, refer to the Query data in a KQL Queryset documentation.
Use Copilot to write queries for Real-Time Dashboards
We’ve introduced a new way to build insights in Real-Time Dashboards: you can now use Copilot to write KQL queries directly within them.
This new capability brings the Copilot assistant pane into the tile editing experience, allowing you to ask questions about your data in natural language—and receive a working Kusto Query Language (KQL) query in response.

Using Copilot in Real-Time Dashboards, you can:
- Ask questions in everyday language (e.g., ‘Show me the top 5 services by error rate in the last 30 minutes’); see the sketch after this list for the kind of query this can produce.
- Automatically generate KQL queries that power your dashboard tiles.
- Replace the KQL queries in the tile with the generated KQL.
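For instance, the first question above might translate into a query along these lines (a hedged sketch; the table and column names are illustrative and depend on your own data):

// Top 5 services by error rate over the last 30 minutes.
ServiceLogs
| where Timestamp > ago(30m)
| summarize ErrorRate = 100.0 * countif(Level == "Error") / count() by Service
| top 5 by ErrorRate desc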
To learn more, refer to the following documentation on how to Create a Real-Time Dashboard and Access the Real-Time Intelligence Copilot.
Eventstream SQL Operator
Eventstream in Microsoft Fabric features no-code data transformation options for real-time processing, including filter, manage fields, aggregate, group by, join, union, and expand. These operators enable robust transformations via a drag-and-drop interface.
Users can create simple and complex data processing rules with these operators. There are situations, however, where the built-in operators are not sufficient and finer control is needed to design custom data processing rules.
Introducing the SQL Operator in Fabric Eventstream!
The SQL operator integrated within Eventstream significantly enhances its existing transformation capabilities. It allows users to define custom data transformation rules, enabling the management of complex transformation scenarios with familiar SQL syntax. For intricate data processing tasks, the SQL operator can be added to your eventstream, consolidating all data transformation logic in one central location. With the SQL editor and test query functionalities, users have the freedom to define their own logic efficiently.
Example: To add the SQL operator to an Eventstream, go to the Transform events button and select SQL operator.
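The kind of statement you can then write in the operator looks like the following (a hedged sketch; the input stream name and columns are illustrative, and the exact dialect and stream naming may differ from what is shown here):

-- Keep only high-temperature readings and project the columns that downstream steps need.
SELECT DeviceId, Temperature, Humidity
FROM InputStream
WHERE Temperature > 75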
Eventstream SQL Operator Key Features
- Define the custom data transformation logic with familiar SQL syntax.
- Test query results against real-time data.
- Code IntelliSense and auto-completion.
- Output transformed data to any Real-Time Intelligence destination.
Unlocking multi-schema support: Eventstream enhances data transformation flexibility
Eventstream now includes support for multiple schemas. This feature allows you to infer multiple schemas from various sources and Eventstream itself, providing the ability to design different data transformation paths with flexibility. The support for multiple schemas in Eventstream enables the following scenarios, which aim to address customers’ needs:
- View and update the inferred schema(s): The inferred schema(s) within an Eventstream can be reviewed and verified in multiple locations. If any data types in specific fields are incorrectly inferred, this feature allows for necessary corrections.
- Leverage various inferred schemas for diverse transformation paths: When configuring the first operator node after the middle default stream, it is necessary to select one of the inferred schemas. This allows the transformation path to be designed with event columns from the chosen schema. Different transformation paths can use different schemas for data transformation within a single Eventstream, increasing flexibility in data transformation.
- Well-organized data preview and test results: Multiple schema support allows for a well-organized display of previewed data and test results. Previously, data with multiple schemas were shown with mixed columns during data previewing or test results, leading to confusion. Now, an inferred schema can be selected to filter the previewed or testing data, ensuring that only the data that matches the selected schema is displayed in the data preview or test results tab.
- Map schema to source: When inferring multiple schemas, Eventstream assists in mapping the schema to the source, ensuring that each schema is associated with a known source. If Eventstream cannot identify the source of data with the inferred schema, you will be prompted to manually map the schema to an appropriate source, ensuring that each schema has an associated source for transformation design. This provides visibility into where each schema originates.

To learn more, refer to the Eventstream Multiple Schema – overview documentation.
Enhanced data ingestion: Eventstream’s streaming connector supports decoding data with Confluent Schema Registry
Eventstream’s Confluent Cloud for Apache Kafka streaming connector is capable of decoding data produced with the Confluent serializer and its Schema Registry in Confluent Cloud. The Confluent Schema Registry serves as a centralized service for managing and validating schemas used in Kafka topics, ensuring that producers and consumers of Kafka messages adhere to a consistent data structure. Each message produced with the Confluent serializer and its Schema Registry is serialized in a specific format that requires retrieving the schema from the Confluent Schema Registry to be decoded. Consequently, without access to this schema, the data flowing into Eventstream cannot be previewed, processed, or routed to the desired destinations.
To use this feature, simply input a few parameters in the advanced settings of the Confluent Cloud Kafka source to establish a connection with your Confluent Schema Registry server. Once configured, the data fetched into Eventstream from Confluent Kafka will be decoded by the streaming connector, allowing it to be previewed, processed, and routed to the appropriate destinations according to your business requirements.

To learn more, refer to the Add Confluent Cloud for Apache Kafka source to an eventstream documentation.
Securely connect to Eventstream with Managed Private Endpoint (Generally Available)
Managed Private Endpoint (MPE) support in Fabric Eventstream is now generally available. With this release, it is now possible to securely connect Azure Event Hubs and Azure IoT Hub to Eventstream using managed private endpoints in production environments.
This feature enables Eventstream to pull data from Azure services that are behind a firewall or not publicly accessible, ensuring the data ingestion happens over a private network. It helps you meet strict security and compliance requirements by keeping your data streaming and processing within a trusted boundary.
Updates in GA:
- Production-ready – Managed Private Endpoints are now fully supported in Eventstream for secure and reliable streaming in enterprise scenarios.
- Expanded region availability – Managed private endpoints for Fabric Eventstream are now available in additional regions.

- Improved UI Indicators: Once an Azure source is securely connected via a managed private endpoint, Eventstream now displays an icon confirming the secure connection.

Get Started
Creating a managed private endpoint is easy—just go to Workspace settings, navigate to Network security, and set up an MPE to your Azure Event Hub or IoT Hub. Approve the Private endpoint connection in Azure, and you’re ready to stream data securely and privately into Eventstream.
For step-by-step instructions, refer to the Connect to Azure resources securely using managed private endpoints documentation.
Data Factory
Azure Data Factory item in Microsoft Fabric (Generally Available)
The Azure Data Factory (Mounting) feature in Microsoft Fabric is now generally available (GA). This feature allows customers to bring their existing Azure Data Factory (ADF) pipelines into Fabric workspaces seamlessly, without the need for manual rebuilding or migration.
What’s new with GA
- Support for Git-enabled ADF factories – It is now possible to mount pipelines from source-controlled ADF environments to Fabric.


- Mount your data factory from ADF UX – Mount your data factory to Fabric directly from ADF.

With the general availability of the Azure Data Factory item, you can mount your factory within seconds to manage ADF factories inside Microsoft Fabric workspaces.
To learn more, refer to the Bring Azure Data Factory to Fabric documentation.
Closing
We hope that you enjoy the update! Be sure to join the conversation in the Fabric Community and check out the Fabric documentation to get deeper into the technical details. As always, keep voting on Ideas to help us determine what to build next. We are looking forward to hearing from you!