Fabric April 2025 Feature Summary
Welcome to the Fabric April 2025 Feature Summary! This update brings exciting advancements across various workloads, including low-code AI tools to accelerate productivity in notebooks (Preview), session-scoped distributed #temp tables in Fabric Data Warehouse (Generally Available), and the Migration Assistant for Fabric Data Warehouse (Preview) to simplify your migration experience.
Contents
- Community & Events
- General
- Data Science
- Data Warehouse
- ALTER Table Drop Column and sp_rename column support in Fabric Warehouse (Generally Available)
- Session-scoped distributed #temp tables in Fabric Data Warehouse (Generally Available)
- Migration assistant for Fabric Data Warehouse (Preview)
- OPENROWSET function (Generally Available)
- BULK INSERT statement (Generally Available)
- Real-Time Intelligence
- Fabric special for Kusto Detective Agency: solve the Digibus real-time crisis
- Azure Monitor data sources are now fully integrated with KQL Queryset
- Improvements to Data Exploration (low-code) experience
- Eventhouse system and KQL Database overview: in-item monitoring enhancements
- Eventstream’s Real-time Weather Connector
- Databases
- Data Factory
- Closing
Community & Events
Get certified in Fabric – for FREE.
As part of the Microsoft AI Skills Fest, Microsoft is celebrating 50 years of innovation by giving away 50,000 FREE Microsoft Certification exam vouchers in weekly prize drawings. Enter the sweepstakes now for the best chance to win a free exam voucher for DP-600 or DP-700.
Free live learning sessions for Data Engineers
Whether you’re new to Microsoft Fabric or building on your existing skills, these sessions, hosted by Microsoft Fabric experts, give you the knowledge and confidence to get certified and take your data engineering career to the next level. Register now – live sessions in English start April 30th, with on-demand recordings available in Spanish and Portuguese.
General
Fabric Copilot and AI Capabilities available on all paid SKUs
We are thrilled to announce a major update in Microsoft Fabric! Starting today, the SKU requirement for Copilot and AI features is lowered to F2, making it much more accessible for you to explore, test, and utilize Fabric AI capabilities. This exciting change grants you full access to Copilot in Fabric, Fabric data agents, and Fabric AI Functions—all designed to enhance productivity, uncover insights swiftly, and seamlessly enrich your data.
With this update, more teams will have the opportunity to experiment with AI-driven workflows within their existing capacity. It’s worth noting that while the smallest SKUs provide full feature access, they will support a more limited number of AI requests due to their smaller capacity size. Nonetheless, this will allow more users to experience the benefits of AI capabilities and improve their workflows.
We look forward to seeing the creative ways your team will utilize these AI capabilities to boost your projects and productivity!
Data Science
Low-code AI tools to accelerate productivity in notebooks (Preview)
Fabric notebooks allow you to accelerate your productivity with native AI capabilities like Copilot and AI functions. A new notebook tab devoted to AI and ML tools now provides low-code shortcuts for transforming data with Data Wrangler, training custom models with AutoML, chatting with Copilot, and more.
Among the updates is a low-code interface to apply AI functions for seamless LLM-powered data enrichment. Just select one of the functions, choose an input pandas or Spark DataFrame and a target column to transform, and fill in any required parameters. Fabric will produce the code for you.
To learn more, refer to the Transform and enrich data seamlessly with AI functions documentation.
Low-code AI capabilities in Data Wrangler (Preview)
All Fabric notebook users have access to Data Wrangler, a low-code tool with an immersive interface for exploring and transforming pandas or Spark DataFrames. Data Wrangler provides a library of common data-cleaning operations that you can browse and apply seamlessly, with real-time previews and reusable generated code.
We have new AI-powered capabilities coming to Data Wrangler later this month:
- Automated suggestions with rule-based AI: A new set of automated suggestions will analyze your data and use rule-based AI from the Microsoft PROSE team to highlight the most relevant Data Wrangler operations for you.
- Convert natural language to code with Copilot: Need an operation that you don’t see in Data Wrangler? You can now use Copilot to generate custom code. As with any Data Wrangler operation, you’ll get a preview before applying or discarding it.
- Use AI to translate custom code from pandas to PySpark: Data Wrangler automatically converts Spark DataFrames to pandas for performance reasons, then translates your applied code back to PySpark when you export it. With GenAI in Data Wrangler, custom code operations will also be translated to PySpark—whether you type them in yourself or generate them with Copilot.
Data Warehouse
ALTER Table Drop Column and sp_rename column support in Fabric Warehouse (Generally Available)
We are happy to introduce two powerful new features in Fabric Warehouse: ALTER TABLE DROP COLUMN and SP_RENAME COLUMN.
- ALTER TABLE DROP COLUMN effortlessly removes unnecessary columns to streamline storage, boost performance, and improve query efficiency.
  - Cloning a table as of a point in time, or time traveling to a point in time before the column was dropped, is not supported.
  - Dropping columns from Lakehouse tables is not a supported scenario.
- SP_RENAME COLUMN easily renames columns without downtime, making schema adjustments faster and reducing the risk of errors.
  - Columns and tables are not renamable in the Lakehouse.
These new features make it easier to maintain a clean and efficient data model, allowing your team to quickly adapt to evolving business needs with minimal disruption.
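A minimal sketch of both operations; the table and column names are illustrative:

-- Drop a column that is no longer needed
ALTER TABLE dbo.Sales DROP COLUMN LegacyNotes;

-- Rename a column; the 'COLUMN' argument scopes the rename to a column
EXEC sp_rename 'dbo.Sales.CustName', 'CustomerName', 'COLUMN';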
Check the sp_rename and ALTER TABLE (Transact-SQL) articles on Microsoft Learn for additional details and syntax.
Session-scoped distributed #temp tables in Fabric Data Warehouse (Generally Available)
Struggling to manage intermediate query results efficiently in your batch jobs? Fabric Data Warehouse users can now create session-scoped #temp tables to handle these results seamlessly. These temp tables can be backed by either Parquet (distributed) or mdf (non-distributed), offering flexible options to cater to different needs.
Users can create two types of #temp tables:
- Non-Distributed Temp Tables (mdf-backed) – These are created using the same syntax as user tables in Fabric DW, with the key difference that the table name must be prefixed with ‘#’.
CREATE TABLE #table_name ( Col1 data_type1, Col2 data_type2 );
- Distributed Temp Tables (parquet-backed) – These tables are distributed and created using the following syntax:
CREATE TABLE #table_name ( Col1 data_type1, Col2 data_type2 ) WITH (DISTRIBUTION=ROUND_ROBIN);
Note: data_type1 and data_type2 are placeholders for data types supported in Fabric Data Warehouse; see the Data types documentation.
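For example, a distributed #temp table with concrete column types, populated from an illustrative dbo.Orders table:

-- Stage intermediate results in a distributed, Parquet-backed temp table
CREATE TABLE #StagedOrders ( OrderId INT, OrderDate DATE, Amount DECIMAL(18, 2) ) WITH (DISTRIBUTION = ROUND_ROBIN);

INSERT INTO #StagedOrders
SELECT OrderId, OrderDate, Amount
FROM dbo.Orders
WHERE OrderDate >= '2025-01-01';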
For additional details on why we offer two types of temp tables, the scenarios they support, and their limitations, refer to the Session-scoped distributed #temp tables in Fabric Data Warehouse documentation.
Start leveraging session-scoped temp tables in Fabric Warehouse to streamline your data processing tasks and enhance your workflow efficiency. Happy querying!
Migration assistant for Fabric Data Warehouse (Preview)
The Migration Assistant for Fabric Data Warehouse is now in preview. The migration experience is built natively into Fabric and enables Azure Synapse Analytics (Data Warehouse) customers to transition seamlessly to Microsoft Fabric. This new DW migration experience allows users to easily migrate both metadata and data from the source database, automatically converting the source schema to Fabric Data Warehouse, helping with data migration, and providing AI-powered assistance. With integrated assessment tools and guided support, this capability simplifies migration, enabling customers to leverage Fabric’s capabilities without the complexity of traditional migrations.
The Migration Assistant for Fabric Warehouse streamlines the migration process into four steps:
- Metadata migration
- Problem resolution
- Data copying
- Connection rerouting
Each of these steps is explored in detail in the Migration Assistant for Fabric Data Warehouse (Preview) blog post.
For a more comprehensive guide, you can also review the migration assistant how-to article for step-by-step instructions and the Fabric Migration Assistant documentation for more in-depth information.
OPENROWSET function (Generally Available)
The OPENROWSET function is now generally available in Fabric Data Warehouse and the Fabric SQL endpoint. The OPENROWSET function enables you to seamlessly read Parquet and CSV files stored in Azure Data Lake Storage and Azure Blob Storage, as shown in the following example:
SELECT TOP 10 * FROM OPENROWSET( BULK 'https://<storage>.blob.core.windows.net/container/file.parquet' )
With OPENROWSET, you can easily browse files before loading them into the Fabric Data Warehouse, allowing you to inspect the schema before creating the target table. This function provides several valuable features that significantly enhance the data ingestion experience:
- Referencing custom folder structures – the OPENROWSET function can reference URI patterns using * (wildcard) and /** (recursive wildcard) to match multiple source files with the same pattern or to return all files placed recursively under the URI.
- Reading partitioned data sets – the OPENROWSET function can retrieve partition values from folder names and return them in the result set, which is crucial when reading data from Hive-style partition structures.
- Reading Parquet complex types – the OPENROWSET function supports complex types such as struct, array, and map, returning them as JSON text for easier manipulation and analysis.
- Customizing the result set schema – the OPENROWSET function allows you to map the result set columns to the source columns and define the optimal column types for all columns using the WITH clause, providing flexibility in how data is presented and utilized.
- OPENROWSET supports most of the options available in SQL Server, Azure SQL, and Synapse, facilitating seamless migration and code reuse between these platforms.
- Ingesting data with CTAS or INSERT SELECT statements – the OPENROWSET function enables you to ingest data using Create Table As Select (CTAS) or INSERT SELECT statements, with OPENROWSET as the source, allowing you to modify source values at ingestion time. This is crucial in scenarios where the source data is not in the expected format, as sketched after this list.
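A hedged sketch combining several of these options; the storage URL, column names, and types are illustrative:

-- Browse partitioned files with wildcards before loading
SELECT TOP 10 * FROM OPENROWSET( BULK 'https://<storage>.blob.core.windows.net/container/year=*/month=*/*.parquet' ) AS files;

-- Ingest with CTAS, mapping and casting columns via the WITH clause
CREATE TABLE dbo.SalesImport AS
SELECT OrderId, CAST(Amount AS DECIMAL(18, 2)) AS Amount
FROM OPENROWSET( BULK 'https://<storage>.blob.core.windows.net/container/file.parquet' )
WITH ( OrderId INT, Amount VARCHAR(20) ) AS source;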
The OPENROWSET function will significantly improve your data ingestion experience by enabling you to browse files, transform data during ingestion, and facilitate easier migrations from Synapse, SQL Server, and Azure SQL Database to Fabric Data Warehouse.
This powerful functionality ensures that you can handle complex data types, manage partitioned data efficiently, and customize your result set schema to meet your specific requirements, all while maintaining compatibility with existing SQL options.
BULK INSERT statement (Generally Available)
The BULK INSERT statement in Fabric Data Warehouse is now generally available. It enables you to ingest data into a table from a specified file path:
BULK INSERT table_name FROM file_url_path
The BULK INSERT statement is very similar to the COPY INTO statement and enables you to load data from external storage. The key value of BULK INSERT is that it supports traditional SQL Server and Azure SQL syntax, thus facilitating an easy migration of SQL Server databases to the Fabric Data Warehouse without the need for code changes.
Additionally, BULK INSERT supports several traditional options used in SQL Server, such as the text/XML format files used by the bcp tool and importing non-Unicode files with custom code pages. This compatibility ensures that you can migrate your databases to the Fabric Data Warehouse with minimal changes to your ingestion code while retaining your existing ingestion logic without altering the input files.
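A minimal sketch, assuming a CSV file with a header row; the path and options are illustrative:

BULK INSERT dbo.SalesStage
FROM 'https://<storage>.blob.core.windows.net/container/sales.csv'
WITH (
    FORMAT = 'CSV',        -- parse the file as CSV
    FIRSTROW = 2,          -- skip the header row
    FIELDTERMINATOR = ',',
    ROWTERMINATOR = '\n'
);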
By leveraging the BULK INSERT statement, you can maintain your data ingestion workflows and schemas, ensuring a seamless transition to the Fabric Data Warehouse. This feature not only preserves the integrity and structure of the data but also enhances the efficiency of the migration process, reducing the potential for errors and downtime. As a result, businesses can continue to operate smoothly while taking advantage of the advanced capabilities and scalability offered by the Fabric Data Warehouse.
Real-Time Intelligence
OpenAI plugins for Eventhouse
Two new OpenAI plugins are available to generate embeddings and leverage the power of OpenAI models within the Eventhouse context. You can use the plugins to build Retrieval-Augmented Generation (RAG) applications or augment your data analysis with OpenAI models.
- ai_embed_text: Integrates OpenAI embedding models to generate embeddings within KQL.
- ai_chat_completion: Leverages the power of ChatGPT and other OpenAI models to augment data analysis within the Eventhouse context.
Fabric special for Kusto Detective Agency: solve the Digibus real-time crisis
This challenge is specifically optimized for onboarding to the RTI platform. You can use it to enhance customer onboarding and to create Capture the Flag (CTF) tournaments for hands-on learning. During this quest, you’ll learn KQL and utilize powerful tools including:
- Eventstream: Process and transform real-time data
- Eventhouse: Store and query massive datasets
- Real-Time Dashboard: Visualize critical metrics
- Activator: Trigger automated responses
Put your detective skills to work helping the Digibus Digitown transit company solve its crisis. Follow the clues, analyze the data, and uncover the solution with the power of Real-Time Intelligence.
Lucky finishers will win prizes! Don’t miss this opportunity to enhance your RTI skills while solving an engaging mystery.
Get started with Kusto Detective Agency.
Azure Monitor data sources are now fully integrated with KQL Queryset
KQL Queryset has always supported cross-service queries, but now we’re making Azure data sources even more accessible. Application Insights (AI) and Log Analytics (LA) are now first-class citizens, just like Eventhouse and ADX, providing a more seamless and intuitive experience.
With this update, you can:
- Run cross-service queries between Log Analytics, Application Insights, Eventhouses, and Azure Data Explorer (ADX) native clusters, all connected to the same KQL Queryset.
- Directly query your Log Analytics workspace or Application Insights resources from KQL Queryset.
This makes it easier than ever to explore and analyze data across services without extra configuration.
Improvements to Data Exploration (low-code) experience
We’re continuing to improve the low-code data exploration experience, making it even easier to analyze data in Real-Time Dashboard tiles and KQL Database tables (in Eventhouse and Real-Time Hub).
Here’s what’s new:
Hierarchical columns pane – an easy-to-use summary of key data characteristics to guide exploration and data manipulation, including:
- List of participating columns
- Data types
- Statistical info (avg, min, max, cardinality, etc.)
Focus mode – work more efficiently by focusing on either the data visualization or results grid.
Right-click actions on grid – quickly copy, export, and perform actions directly from the data grid.
Filter builder with OR conditions – create WHERE statements with more flexible filtering options.
Datetime picker control – select exact or relative dates effortlessly when filtering datetime columns.
Learn more about the data exploration experience in the Explore data in Real-Time Dashboard tiles documentation.
Eventhouse system and KQL Database overview: in-item monitoring enhancements
As part of our ongoing commitment to improving system visibility, performance monitoring, and user experience, we’ve introduced several enhancements across both the Eventhouse System Overview and KQL Database in-item monitoring pages.
Eventhouse system overview enhancements
New – Eventhouse ingested rows over time
One of the improvements to the Eventhouse System Overview is the ability to view ingested row metrics directly within the interface. Users can now see the number of rows ingested into each database, offering immediate visibility into data volume and ingestion activity. The feature includes time-based filtering, enabling users to analyze ingestion trends over specific periods.
New tab for Top ingested databases
Another significant update is the introduction of a dedicated tab for Top Ingested Databases. This new section offers a detailed view of ingestion metrics for each database, including the total number of ingested rows and any ingestion failures (currently only partial failures are reported). The addition of time-based filtering makes it easier to identify patterns and anomalies across specific timeframes.
Top 10 ingested databases – improvements
The new multi-tab interface displays both the most queried databases and the top ingested databases, improving navigability and offering a clearer view of different performance indicators. Users also benefit from cache miss rates, which have been added to the metrics for queried databases.
Eventhouse details moved to the menu bar
To streamline access and maintain design consistency, Eventhouse details have been relocated to the main menu bar. Selecting this option opens a side panel that mirrors the familiar layout used in the Database and Table overview panes.
To learn more, refer to the Manage and monitor an Eventhouse documentation.
Eventstream’s Real-time Weather Connector
Last month, we introduced several powerful new connectors for Eventstream in Fabric Real-Time Intelligence. Now, we’re taking it a step further with a hands-on video that shows you how to use one of them: the Real-time Weather connector.
In the demo video, we walk through how to easily add the Weather connector to an Eventstream and start streaming live weather data—like temperature, humidity, and wind speed—into Fabric. Whether you’re building real-time dashboards, alert systems, or enriching other streams, weather data adds valuable real-world context to your applications.
What you’ll see in the video:
- A quick overview of the new Eventstream connectors and where to find them
- Step-by-step demo on adding and configuring the Weather connector
- Live preview of streaming weather data in action
This video is a great starting point if you’re exploring how live data sources can enrich your streaming solutions in Fabric. The Weather connector is especially useful for industries like logistics, agriculture, and retail – anywhere environmental conditions influence operations. Watch the demo to see how easy it is to add a real-time weather feed into an Eventstream.
Can’t find your data source? Let us know! Send us an email at askeventstreams@microsoft.com or fill out our survey.
Databases
SQL database in Fabric
We have several new advances to share in SQL database in Fabric. Continuous innovation is at the heart of our development; several key enhancements are outlined below.
New regions supported
SQL database in Fabric workloads are now supported in the following additional regions:
- Australia Southeast
- Italy North
- Japan East
- Poland Central
- West US 3
Backup billing
SQL database in Microsoft Fabric offers automatic backups from the moment of database creation, ensuring data protection and recovery. The system makes full backups every week, differential backups every 12 hours and transaction log backups every 10 minutes, providing point-in-time restore capability up to 7 days. While compute and data storage are already included in the Fabric capacity billing model, starting April 1, 2025, backup storage will also be billed. Customers will only be billed for backup storage that exceeds the allocated database size.
To learn more, check out the Automatic backups in SQL database in Microsoft Fabric documentation.
Performance dashboard
The SQL in Fabric performance dashboard now shows the lead blocking query, allowing developers to quickly identify SQL queries that block and impact other queries and disrupt their operational workloads.
Terraform, REST API & CLI support
This capability enables customers to automate, scale, integrate, and govern their SQL databases within Microsoft Fabric using a declarative approach with Terraform. HashiCorp Terraform is an open-source tool that offers a secure, predictable, and consistent method for deploying and managing infrastructure across multiple cloud environments. This functionality extends the capabilities of Fabric through Infrastructure-as-Code (IaC).
To learn more about fabric-terraform-quickstart, refer to the documentation. Be sure to check out the blog post Terraform support for Fabric GA for more information.
Integrations
Fabric data pipelines will now support Fabric SQL database as a data source for Stored Procedure and Script activities, allowing users to just pick a database rather than having to enter connection information.
Graph database support
The query editor in SQL database in Fabric now has T-SQL support for graph databases. This feature enables the modeling of many-to-many relationships. You can create a graph database with nodes and edges and utilize the new MATCH clause to identify patterns and navigate through the graph.
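A minimal sketch of the node/edge model and a MATCH query, using illustrative names:

-- Nodes and edges are ordinary tables created with AS NODE / AS EDGE
CREATE TABLE dbo.Person ( ID INT PRIMARY KEY, Name NVARCHAR(100) ) AS NODE;
CREATE TABLE dbo.FriendOf AS EDGE;

INSERT INTO dbo.Person VALUES (1, 'Alice'), (2, 'Bob');
INSERT INTO dbo.FriendOf ($from_id, $to_id)
VALUES ( (SELECT $node_id FROM dbo.Person WHERE ID = 1), (SELECT $node_id FROM dbo.Person WHERE ID = 2) );

-- Find everyone Alice is a friend of
SELECT p2.Name
FROM dbo.Person AS p1, dbo.FriendOf AS f, dbo.Person AS p2
WHERE MATCH(p1-(f)->p2) AND p1.Name = 'Alice';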
Learn more about how to Create a graph database and run some pattern matching queries using T-SQL from our documentation.
Data Factory
Mirroring
Mirroring for Snowflake protected by a firewall (Preview)
You can now mirror Snowflake databases protected by a firewall, using either the VNet data gateway or the on-premises data gateway. The data gateway facilitates secure connections to your source databases through a private endpoint or from a specific private network.
Learn more about mirroring for Snowflake in the Microsoft Fabric mirrored databases from Snowflake documentation.
Closing
We hope that you enjoy the update! Be sure to join the conversation in the Fabric Community and check out the Fabric documentation to get deeper into the technical details. As always, keep voting on Ideas to help us determine what to build next. We are looking forward to hearing from you!