Microsoft Fabric Updates Blog

Today kicks off Microsoft Build, and we have a lot of new features in store for you. Some highlights are the Fabric Roadmap tool, a way to get a glimpse of what is coming soon to Fabric; Chat with your data, powerful AI capabilities that make Power BI even easier; and Cosmos DB in Fabric, which gives you the power of an AI-ready Cosmos DB.

To get a taste of the Build excitement, be sure to check out Arun Ulag’s announcement blog and Kim Manis’ announcement blog.

Events & Announcements

New Fabric Roadmap tool

We’ve heard from you that it’s critical to know when key Fabric features will land, especially those that directly impact your use cases or unblock your organization’s adoption. For example, if you’re waiting on Private Link support for Workspaces due to internal security requirements, you need a clear view of when that capability is planned and when it becomes available.

Until now, this information was spread across Release Plan documentation pages. Today, we’re making that experience better. The new Fabric Roadmap page brings it all together in one place, with a cleaner interface, real-time updates, and direct integration with the internal planning tool used by the Fabric team. Check it out at https://roadmap.fabric.microsoft.com and tell us what you think in the comments.

Power BI

Some of the highlights include Chat with your data, a revolutionary new way to use AI in Power BI, and Translytical task flows, which enable users to automate actions directly within a report, streamlining decision-making and operational follow-through.

To learn about all of the latest updates to Power BI, head over to the Power BI May 2025 Feature Summary.

Fabric Platform

Additional REST APIs for Fabric Deployment pipelines

An additional batch of Fabric public APIs for Deployment pipelines has been released, following our initial release of Deploy APIs a few months ago. With this new release, the full list of available Fabric APIs now matches the APIs available in Power BI, excluding Admin APIs, which will be added later. This marks a significant milestone in our ongoing efforts to enhance the Fabric platform and provide our users with powerful tools to manage their deployment processes more efficiently.

Overview of the new APIs

The new APIs offer a range of functionalities that streamline the deployment process, making it easier for teams to manage their content across different environments:

  • Pipeline management: Create, update, and delete deployment pipelines with the new APIs.
  • Stage management: Get and update the deployment pipeline stages.
  • Deployments management: List deployment pipeline operations and get details of specific deployment pipeline operations.
  • Workspace assignment management: Assign and unassign workspaces to and from stages.
  • Roles assignment management: List deployment pipeline role assignments and get or delete specific role assignments.
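
If you want to drive these operations from a script, here is a minimal sketch in Python. The endpoint shapes follow the public Deployment pipelines REST APIs, but treat all IDs and the request body as placeholders for illustration:

```python
# A minimal sketch (Python + requests) of driving Deployment pipelines from a
# script. Endpoint shapes follow the public Fabric REST API; all IDs and the
# request body are placeholders for illustration.
import requests

BASE = "https://api.fabric.microsoft.com/v1"
HEADERS = {"Authorization": "Bearer <access-token>"}  # token acquisition: see next section

# List the deployment pipelines you have access to.
pipelines = requests.get(f"{BASE}/deploymentPipelines", headers=HEADERS).json()
pipeline_id = pipelines["value"][0]["id"]

# Deploy content from one stage to the next (a long-running operation).
body = {
    "sourceStageId": "<source-stage-id>",
    "targetStageId": "<target-stage-id>",
    "note": "Automated deploy from CI",
}
resp = requests.post(f"{BASE}/deploymentPipelines/{pipeline_id}/deploy",
                     headers=HEADERS, json=body)
resp.raise_for_status()  # expect 202 Accepted; poll the operation for status
```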

Support for Service Principal (SPN)

All Fabric Deployment pipelines REST APIs now support Service Principals (SPN). This allows for more secure and automated deployments, enabling teams to integrate Fabric into their existing DevOps workflows seamlessly.
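
As a hedged sketch, here is how a service principal might acquire a token for the Fabric REST APIs using the client credentials flow with the msal package; the tenant ID, client ID, and secret are placeholders you supply:

```python
# A hedged sketch of the client credentials flow for a service principal using
# the msal package; tenant ID, client ID, and secret are placeholders.
import msal
import requests

app = msal.ConfidentialClientApplication(
    client_id="<app-client-id>",
    client_credential="<client-secret>",
    authority="https://login.microsoftonline.com/<tenant-id>",
)

# The .default scope requests the Fabric API permissions granted to the app.
token = app.acquire_token_for_client(
    scopes=["https://api.fabric.microsoft.com/.default"])["access_token"]

# Any Deployment pipelines call can now run unattended:
resp = requests.get("https://api.fabric.microsoft.com/v1/deploymentPipelines",
                    headers={"Authorization": f"Bearer {token}"})
print(resp.status_code)
```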

Getting started

To start using the new Fabric public APIs for Deployment pipelines, you can refer to the Automate your deployment pipeline with Fabric APIs documentation. This provides comprehensive guides and examples to help you integrate these APIs into your deployment processes effectively.

New capabilities for Fabric Git integration

Service Principal (SPN) support for Azure DevOps

A few weeks ago, we announced the capability to use a Service Principal with the Fabric Git APIs when your Git provider is GitHub. Soon, we will support Azure DevOps as your Git provider as well.
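
For illustration, here is a hedged sketch of connecting a workspace to an Azure DevOps repository through the Fabric Git REST API with a service-principal token; the request shape mirrors the documented Connect API, but treat all field values as placeholders:

```python
# A hedged sketch of connecting a workspace to Azure DevOps via the Fabric Git
# API with a service-principal token; all field values are placeholders.
import requests

BASE = "https://api.fabric.microsoft.com/v1"
HEADERS = {"Authorization": "Bearer <spn-access-token>"}

body = {
    "gitProviderDetails": {
        "gitProviderType": "AzureDevOps",
        "organizationName": "<ado-organization>",
        "projectName": "<ado-project>",
        "repositoryName": "<repository>",
        "branchName": "main",
        "directoryName": "/",
    }
}
resp = requests.post(f"{BASE}/workspaces/<workspace-id>/git/connect",
                     headers=HEADERS, json=body)
resp.raise_for_status()
```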

Cross-Tenant support for Azure DevOps

Previously, connecting your workspaces using your identity to an Azure DevOps repository required both Fabric and your Azure DevOps organization to reside within the same tenant. However, we’re thrilled to announce that this limitation will soon be a thing of the past. With our upcoming update, you’ll be able to connect to an Azure DevOps organization even if it belongs to a different tenant than your Fabric tenant.

Shortcut transformations (Preview)

The preview of shortcut transformations introduces the ability to transform data as it’s shortcut into Fabric, including converting the data format into Delta tables or applying AI transformations to unstructured data, such as summarizing text, translating content, or classifying documents.

Data Engineering

New regions supported in User Data Functions

After our preview launch, we have been working on increasing the number of regions where this feature is supported. We recently added the following 14 new regions:

  • Australia Southeast
  • Brazil South
  • Canada Central
  • Central India
  • France Central
  • Korea Central
  • North Central US
  • Norway East
  • South Africa North
  • South India
  • UAE North
  • UK West
  • West Europe
  • West US

You can find the entire list of supported regions in this article: Fabric Region Availability. This article will be frequently updated to reflect the latest region support.

SPN support for User data functions

Fabric User data functions now support Service Principals (SPN) to run functions. This feature allows organizations to ensure compatibility with enterprise identity and access management systems. By using SPNs, you can implement applications that call a user data function without requiring user credentials. This aligns with the zero-trust security model and provides a secure way to use user data functions as the glue between your application and your data in Fabric.

To learn more, refer to the SPN support for user data functions documentation.

SPN support for the Livy API

The Fabric Livy API for Data Engineering now supports Service Principals (SPN) to submit and execute Spark code. This added authentication method allows organizations to ensure compatibility with enterprise identity and access management systems. By using SPNs, you can implement applications that submit Spark jobs without requiring user credentials, in line with the zero-trust security model.
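
As a rough sketch, a client might drive the Livy API like this with an SPN-acquired token; the base URL pattern and API version segment are assumptions to verify against the documentation linked below, and the IDs are placeholders:

```python
# A hedged sketch of submitting Spark code through the Fabric Livy API with an
# SPN token. The base URL pattern and API version segment are assumptions to
# verify against the documentation; IDs are placeholders.
import time
import requests

LIVY = ("https://api.fabric.microsoft.com/v1/workspaces/<workspace-id>"
        "/lakehouses/<lakehouse-id>/livyApi/versions/2023-12-01")
HEADERS = {"Authorization": "Bearer <spn-access-token>"}

# Create a Spark session and wait until it is idle.
session_id = requests.post(f"{LIVY}/sessions", headers=HEADERS, json={}).json()["id"]
while requests.get(f"{LIVY}/sessions/{session_id}", headers=HEADERS).json()["state"] != "idle":
    time.sleep(5)

# Submit a statement (standard Livy statement payload).
stmt = requests.post(f"{LIVY}/sessions/{session_id}/statements", headers=HEADERS,
                     json={"code": "print(spark.range(10).count())", "kind": "pyspark"})
print(stmt.json()["id"])
```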

To learn more, refer to the Create and run Spark Session jobs using the Livy API documentation.

Private libraries support for User data functions

Private libraries are now supported in Fabric user data functions. Private libraries are code created by you or your organization. Data engineering can be challenging, especially around data quality and complex analytics, and private libraries help streamline work and let a team use proprietary code securely. Fabric User data functions now allow custom library uploads in .whl format, containing scripts or modules for internal business logic. This can improve developer productivity across your organization by letting you reuse these libraries to automate processes across different teams or departments.
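
For illustration, a private library can be as small as a single module exposing your business logic; everything below (the package, module, and function names) is hypothetical:

```python
# Hypothetical private library module: my_org_rules/discounts.py
# Package it as a wheel (e.g. `python -m build --wheel`) and upload the
# resulting .whl to your user data function's libraries.

def net_price(price: float, customer_tier: str) -> float:
    """Apply the organization's proprietary discount rules."""
    tier_discounts = {"gold": 0.15, "silver": 0.10}
    return round(price * (1 - tier_discounts.get(customer_tier, 0.0)), 2)
```

Inside a function item, you could then import it like any other dependency: `from my_org_rules.discounts import net_price`.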

To learn more, refer to the documentation on How to manage libraries for your Fabric User Data Functions.

Data Science

Copilot in Power BI now supports Fabric data agents

Fabric data agents can be used in the new chat with your data experience in Power BI to get answers to your questions and explore your data more effectively. This integration enables Copilot in Power BI to connect not only to Power BI semantic models, but also to a wider range of data sources in OneLake, such as lakehouses, warehouses, and KQL databases, retrieving insights seamlessly through Fabric data agents.

When you ask a question in the new full-screen Copilot in Power BI experience, Copilot first searches for relevant Fabric data agents you have access to. If you have the necessary permissions, it uses those data agents to retrieve answers based on your access rights. This helps you discover content, ask questions, perform quick analyses, and refine insights, all without switching tools or leaving Copilot. You can also manually add a data agent to your Copilot session and chat with it directly from Copilot in Power BI, enabling seamless access to your OneLake data.

Fabric Data Agent Integration with Microsoft Copilot Studio (Preview)

The Fabric data agent will be available in preview and can be added as an agent to your custom agents in Microsoft Copilot Studio. With this integration, your custom agent can access data stored in Microsoft OneLake, including lakehouses, warehouses, Power BI semantic models, and KQL databases, and retrieve insights seamlessly through the Fabric data agent.

Once you add the Fabric data agent to your custom agent, you can publish your custom agent to various consumption channels, including Microsoft Teams and Microsoft 365 Copilot, and share it with specific users or your entire organization.

When a user asks a question from the custom agent in any of these channels, the Fabric data agent is used to retrieve answers—provided the user has the necessary permissions.

Responses are always scoped to the user’s access rights, making it easier to discover relevant content, perform quick analyses, and refine insights within the same channel.

To extend functionality, you can define actions for your custom agent. Actions such as sending emails or initiating other tasks allow the agent to automate processes on behalf of users, helping streamline workflows and improve productivity without leaving the custom agent experience.

Data Warehouse

Warehouse Snapshots (Preview)

Ensuring data consistency during ETL (Extract, Transform, Load) processes has long been a challenge for data engineers. We are pleased to announce the preview of Warehouse Snapshots, a new feature in Microsoft Fabric designed to offer a stable, read-only view of your data warehouse at a specific point in time. This capability facilitates uninterrupted analytics and reporting.

A warehouse snapshot is a read-only representation of a data warehouse at a designated moment, retained for up to 30 days (until configurable retention is available). Warehouse snapshots can be seamlessly ‘rolled forward’ on demand, enabling consumers to connect to the same snapshot (or use a consistent warehouse connection string from third-party tools) to access a curated version of data. This ensures that data engineers can provide analytical users with a consistent dataset, even as real-time updates occur. Analysts can run SELECT queries based on the snapshot without any ETL interference.
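
To make this concrete, here is a hedged sketch of an analyst connection from Python; the server and database values are placeholders you would copy from the snapshot’s SQL connection string, and the table name is hypothetical:

```python
# A hedged sketch of an analyst connection to a warehouse snapshot. The server
# and database values are placeholders copied from the snapshot's SQL
# connection string; the table name is hypothetical.
import pyodbc

conn = pyodbc.connect(
    "Driver={ODBC Driver 18 for SQL Server};"
    "Server=<your-endpoint>.datawarehouse.fabric.microsoft.com;"
    "Database=<warehouse-snapshot-name>;"
    "Authentication=ActiveDirectoryInteractive;"
    "Encrypt=yes;"
)

# SELECT queries see the data exactly as it stood at the snapshot's point in
# time, regardless of ETL activity on the parent warehouse.
for row in conn.execute("SELECT TOP 5 * FROM dbo.FactSales;"):
    print(row)
```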

For more information on CRUD operations for warehouse snapshots and their considerations and limitations, please refer to Warehouse Snapshot in Microsoft Fabric (Preview).

Real-Time Intelligence

Call of the Cyber Duty: a new season of Kusto Detective Agency begins

Are you ready to put your sleuthing skills to the test? The Kusto Detective Agency is back – and this time, it’s bigger, bolder, and packed with adrenaline.

Introducing ‘Call of the Cyber Duty’, a brand-new season of the Kusto Detective Agency challenge designed for the sharpest minds in data. Whether you’re a seasoned Kusto veteran or a curious newcomer, this is your chance to dive into a thrilling online race where speed, smarts, and strategy collide.

  • Challenge begins June 8, 2025
  • Register by June 7, 2025

Why should you care?
Because this isn’t just a challenge – it’s a competition. And the stakes? Monumental.

  • $10,000 for 1st place
  • Bragging rights across the Fabric community
  • Team up or go solo – form a squad of up to six detectives or take on the mission alone.

Who should join?

If you’re using Microsoft Fabric Real-Time Intelligence and working with Eventhouse, this is your moment. The challenge is built to stretch your KQL muscles, sharpen your investigative instincts, and connect you with a vibrant community of data detectives.

How to get started:

This is more than a game. It’s a celebration of what’s possible with Kusto and Microsoft Fabric. So, gear up, detectives—the cyber world needs you.

Disclaimer: No Purchase Necessary. Must be 14+ to participate. Registration period closes on June 7th, 2025, end of day. Prizes are awarded as digital gift cards to the team leader.

Continuous Ingestion from Azure Storage to Eventhouse (Preview)

Get Data in Real-Time Intelligence Eventhouse offers a step-by-step process that guides you through importing data from multiple sources: inspecting the incoming data, creating or editing the destination table schema, and exploring the ingested results.

One of the sources from which users can bring data into an Eventhouse table using the Get Data wizard is Azure Storage, which allows users to ingest one or more blobs/files from a storage account. This capability is now enhanced with continuous ingestion: once the connection between the Azure Storage account and Eventhouse has been established, any new blob/file uploaded to the storage account is automatically ingested into the destination table.

Continuous ingestion from Azure Storage to Eventhouse utilizes Azure Events in Fabric to listen to Azure Storage account events. Based on the subscribed events, Eventhouse pulls the corresponding newly created or renamed file from the connected Azure Storage. This simplifies bringing in data from your Azure Storage account as it is generated and eliminates the need to create and maintain long, complicated ETL pipelines. It also removes the need to define time-based triggers for fetching new data from Azure Storage and makes ingestion into Eventhouse near real-time.

Continuous ingestion from Azure Storage to Eventhouse is now offered in preview in Microsoft Fabric. To learn more, refer to the Get data from Azure storage documentation.

Fabric Eventhouse now supports Eventstream Derived Streams in Direct Ingestion mode (Preview)

The Eventstreams feature in Microsoft Fabric Real-Time Intelligence allows you to bring real-time events into Fabric, transform them, and then route them to various destinations such as Eventhouse, without writing any code (no-code).

You can ingest data from an Eventstream to Eventhouse seamlessly, either from the Eventstream artifact or using the Eventhouse Get Data wizard. This capability is now being extended to support Eventstream derived streams in direct ingestion mode.

A derived stream is a specialized type of destination that you can create after adding stream operations, such as Filter or Manage Fields, to an Eventstream. The derived stream represents the transformed default stream following stream processing. You can route the derived stream to multiple destinations in Fabric and view the derived stream in the Real-Time hub.

Direct ingestion from a derived stream allows you to ingest your event data directly into the Eventhouse without any processing. This can be configured from Eventstream as well as from the Eventhouse Get Data wizard, including the Real-Time hub embedded in the Get Data wizard.

Please refer to Get data from Eventstream to learn more and get started today.

Get Data in Fabric Eventhouse from Lakehouse using OneLake Catalog

The OneLake catalog is the central hub for discovering and managing Fabric content. Among the items it surfaces is the Microsoft Fabric Lakehouse, a data architecture platform for storing, managing, and analyzing structured and unstructured data in a single location.

Get Data in Eventhouse now embeds the OneLake catalog, which provides an easy discovery and navigation experience for ingesting data from a Lakehouse into Eventhouse. Using the OneLake catalog, you can search for a Lakehouse across multiple workspaces and identify Lakehouses you recently used, marked as favorites, or that your organization has endorsed. Once you select a Lakehouse from the embedded OneLake catalog, you can seamlessly select and ingest a file from the Lakehouse, including files within subfolders.

To learn more, refer to the Get data from OneLake documentation and get started!

Eventhouse Accelerated OneLake Table Shortcuts (Generally Available)

Shortcuts are embedded references within OneLake that point to other storage locations without moving the original data.

Previously, you could create a shortcut to OneLake delta tables from Eventhouse and query the data, but performance lagged behind direct ingestion into Eventhouse because shortcut queries lacked Eventhouse’s powerful indexing and caching capabilities.

Accelerated shortcuts are powered by query acceleration, which indexes and caches data landing in OneLake on the fly, allowing customers to run performant queries on large volumes of data. Customers can use this capability to analyze real-time streams coming directly into Eventhouse and combine them with data landing in OneLake from mirrored databases, Warehouses, Lakehouses, or Spark.

Customers can expect significant query performance improvements by enabling this capability, in some cases 50x or more.
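
As a hedged sketch, querying an accelerated shortcut from Python with the azure-kusto-data package might look like the following; the cluster URI, database, shortcut, and column names are placeholders, and shortcuts surface in KQL through the external_table() function:

```python
# A hedged sketch of querying an accelerated OneLake shortcut with the
# azure-kusto-data package; cluster URI, database, shortcut, and column names
# are placeholders. Shortcuts surface in KQL via the external_table() function.
from azure.kusto.data import KustoClient, KustoConnectionStringBuilder

kcsb = KustoConnectionStringBuilder.with_interactive_login(
    "https://<your-eventhouse>.kusto.fabric.microsoft.com")
client = KustoClient(kcsb)

# With acceleration enabled, this query benefits from Eventhouse indexing and caching.
query = "external_table('MyShortcut') | summarize count() by bin(Timestamp, 1h)"
for row in client.execute("<database-name>", query).primary_results[0]:
    print(row)
```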

How to enable Query Acceleration?

You will now see an option to enable Acceleration while creating a new shortcut from Eventhouse.

To learn more, refer to the Query acceleration for OneLake shortcuts – overview (preview) documentation.

Databases

Introducing Cosmos DB in Microsoft Fabric (Preview)

Cosmos DB is now available in preview as a new addition to the databases workload in Microsoft Fabric.

Cosmos DB in Fabric is easy to set up, scales automatically, and is secure by default, enabling you to build AI applications with less overhead. You can store and retrieve semi-structured data within milliseconds, without having to manually tweak database settings. With Cosmos DB’s built-in vector indexing and AI-ready full-text and hybrid search capabilities, you can now seamlessly build GenAI applications.

Your existing or new applications can instantly benefit from deep integration with Fabric OneLake, bringing you databases, analytics, data science, real-time intelligence, and Copilot-powered BI in one place, rather than assembling them individually. You can seamlessly join Cosmos DB data with any other data in OneLake, such as SQL DB, truly unifying your data estate.

To get started, please join our preview program by filling in this opt-in form. For more information, refer to Announcing Cosmos DB (Preview).

Data pipelines

Native change data capture (CDC) support in Copy Job (Preview)

Change Data Capture (CDC) in Copy Job is a powerful capability in Data Factory that enables efficient and automated replication of changed data, including inserted, updated, and deleted records, from a source to a destination. This ensures your destination data stays up to date without manual effort, improving efficiency in data integration while reducing the load on your source system.

With CDC in Copy Job, you can enjoy the following benefits:

  • Zero Manual Intervention: Automatically captures incremental changes (inserts, updates, deletes) directly from the source.
  • Automatic Replication: Keeps destination data continuously synchronized with source changes.
  • Flexible Incremental Copy Options: Automatically detects CDC-enabled tables, allowing you to choose between CDC-based or watermark-based incremental copy at the table level.
  • Optimized Performance: Processes only changed data, reducing processing time and minimizing load on the source.

Learn more in the What is Copy job in Data Factory documentation.

Semantic Model Refresh Activity (Generally Available)

The Semantic model refresh activity for data pipelines is now generally available! With this activity, you can create connections to your Power BI semantic models and refresh them from your data pipeline.

To learn more, refer to the Semantic model refresh activity in Data Factory for Microsoft Fabric documentation.

Copilot for Data pipeline – boost your productivity in understanding and updating pipelines

Maintaining complex Data pipelines in your ETL project is not easy work, especially when you need to understand complicated Data pipelines created by others or you need to update some configurations for a set of pipelines.

Copilot for Data pipeline helps users quickly understand the purpose of a pipeline and the details of its activities. With this new release, it can also update descriptions for pipelines and activities based on its summary. After the update, users can hover over any activity to see a simple explanation of its function.

Copilot for Data pipeline also empowers users to update the settings of multiple activities within seconds, far faster than manual updates that can take hours. For example, users can update the timeout of more than ten activities in a pipeline from 12 hours to 1 hour.

To learn more, refer to the AI-powered development with Data pipeline documentation.

Mirroring

Mirroring for SQL Server On-Premises (Preview)

Mirroring for SQL Server in Fabric for on-premises versions of SQL Server 2016-2022 is now in Preview!

Mirroring in Fabric allows users to enjoy a highly integrated, end-to-end, and easy-to-use product that is designed to simplify your analytics needs. Built for openness and collaboration between Microsoft and technology solutions that can read the open-source Delta Lake table format, Mirroring is a low-cost, low-latency turnkey solution that lets you create a replica of your SQL Server data in OneLake for all your analytical needs.

By leveraging Change Data Capture (CDC) technology available in SQL Server, the mirroring service in Fabric uses the on-premises data gateway (OPDG) to connect to SQL Server and read the initial snapshot as well as subsequent changes to data at the source. OPDG then pulls the data into OneLake and converts it into an analytics-ready format in Fabric.

To learn more, refer to the Mirroring for SQL Server in Microsoft Fabric (Preview) blog post.

Mirroring for SQL Server 2025 (Preview)

With the announcement of Microsoft SQL Server 2025 at Build, customers can also leverage mirroring from this version in Fabric. The overall user experience is similar to mirroring from other SQL Server versions and Azure SQL. Mirroring for SQL Server 2025 uses change feed instead of Change Data Capture: SQL Server tracks and replicates the initial snapshot and subsequent changes to the landing zone in OneLake, which the mirroring engine then converts to an analytics-ready format.

To learn more, refer to the Mirroring for SQL Server in Microsoft Fabric (Preview) blog post.

New features for Mirroring for Azure SQL Managed Instance

We have made substantial updates to Mirroring for Azure SQL Managed Instance in Fabric. Based on user feedback, new features have been developed to address data replication needs:

  • Mirror Azure SQL Managed Instance via private endpoint: a VNet data gateway or on-premises data gateway can be used to connect to your Azure SQL Managed Instance database for mirroring, removing the need to open public access. The data gateway ensures secure connections to the source databases via private endpoint.
  • Mirror tables without primary keys: we’ve relaxed this limitation so you can mirror tables even if they don’t have a primary key (sometimes referred to as heap tables), offering increased flexibility.
  • Support for expanded Data Definition Language (DDL): in addition to altering, dropping, and renaming tables and columns, you can now truncate tables in the source databases while mirroring is active.

To learn more, refer to the Mirrored Databases from Azure SQL Managed Instance documentation.

Customize retention period for mirrored data

Mirroring in Fabric continuously replicates your existing data estate from various databases into OneLake in Delta Lake table format. To keep the mirrored data efficiently stored and always ready for analytics, mirroring automatically runs vacuum to remove old files no longer referenced by a Delta log.

We now offer you the flexibility to customize the retention setting according to your requirements. For instance, you may choose a shorter retention period to reduce mirroring storage consumption or extend the retention period to utilize Delta’s time travel capabilities for analytics. Currently, this value can be set via API.
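
As a heavily hedged sketch, an API call to adjust retention might look like the following; both the endpoint shape and the property name below are assumptions, so check the retention documentation linked below for the exact contract:

```python
# A heavily hedged sketch: both the endpoint and the property name below are
# assumptions; check the retention documentation for the exact contract.
import requests

url = ("https://api.fabric.microsoft.com/v1/workspaces/<workspace-id>"
       "/mirroredDatabases/<mirrored-database-id>")
HEADERS = {"Authorization": "Bearer <access-token>"}

# Hypothetical payload: extend retention (in days) for longer time travel.
resp = requests.patch(url, headers=HEADERS, json={"retentionPeriodInDays": 7})
resp.raise_for_status()
```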

To learn more, refer to the retention for mirrored data documentation.

Mirroring region expansion

Mirroring now supports all regions that are available for workloads in Microsoft Fabric. We have recently added support for West US 3 to meet growing customer demand. For detailed information about the Fabric regions that support mirroring, please refer to the supported regions documentation.

Mirroring for Azure PostgreSQL region expansion

Alongside the region expansion for all Mirroring in Fabric, Mirroring for Azure PostgreSQL is also expanding from its initial four regions (Canada Central, West Central US, East Asia, and North Europe) to all regions supported by Mirroring in Microsoft Fabric, ensuring that customers get the best performance when replicating data from Azure Database for PostgreSQL flexible server.

To learn more, refer to the blog post Simplify Your Data Strategy: Mirroring for Azure Database for PostgreSQL in Microsoft Fabric for Effortless Analytics on Transactional Data.

Fabric Mirroring for Azure Cosmos DB: public preview refresh live with new features

We’re thrilled to announce the latest refresh of Fabric Mirroring for Azure Cosmos DB! This update introduces key enhancements like Microsoft Entra ID authentication, container selection, support for special characters in column names, and even vector search compatibility for AI workloads. With features like auto schema inference and full CRUD API support, this release makes it easier than ever to build secure, scalable, and real-time analytics pipelines with Cosmos DB data in OneLake.

To learn more, refer to the Fabric Mirroring for Azure Cosmos DB with new features blog post.

Dataflow Gen2

Dataflow Gen2 (CI/CD) (Generally Available)

With this new set of features now generally available, you can seamlessly integrate your Dataflow with your existing CI/CD pipelines and version control of your workspace in Fabric. This integration allows for better collaboration, versioning, and automation of your deployment process across dev, test, and production environments.

New Dataflow Gen2 item experience with the option to enable Git integration, deployment pipelines and Public API scenarios.

Key benefits

  • Automated deployments: streamline your deployment process by integrating Dataflow with your CI/CD processes in Fabric.
  • Version control: use Git to manage and version your Dataflow Gen2 items, ensuring you have a history of changes and can easily roll back if needed.
  • Collaboration: enhance team collaboration by leveraging Git’s branching and merging capabilities.
  • Multitasking support: you can now have multiple Dataflows open at the same time as other Microsoft Fabric experiences.

These new features will significantly improve your workflow and productivity when working with Dataflows Gen2 in Fabric. We look forward to hearing your feedback and suggestions as we continue to enhance this feature.

To learn more, refer to the Dataflow Gen2 with CI/CD and Git integration support (Preview) documentation.

Dataflow Gen2 Public APIs (Preview)

Data Factory in Fabric now provides a robust set of APIs that enable users to automate and manage their dataflows efficiently. These APIs allow for seamless integration with various data sources and services, enabling users to create, update, and monitor their data workflows programmatically. The APIs support a wide range of operations — including dataflows CRUD (Create, Read, Update, and Delete), scheduling, and monitoring — making it easier for users to manage their data integration processes.

The APIs for dataflows in Fabric Data Factory can be used in various scenarios:

  • Automated deployment: Automate the deployment of dataflows across different environments (development, testing, production) using CI/CD practices.
  • Monitoring and alerts: Set up automated monitoring and alerting systems to track the status of dataflows and receive notifications in case of failures or performance issues.
  • Data integration: Integrate data from multiple sources, such as databases, data lakes, and cloud services, into a unified dataflow for processing and analysis.
  • Error handling: Implement custom error handling and retry mechanisms to ensure dataflows run smoothly and recover from failures.
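
Here is a minimal, hedged sketch of the list-and-refresh pattern in Python; listing uses the core Fabric items API, while the refresh uses the on-demand job API (treat the jobType value as an assumption to verify in the documentation):

```python
# A hedged sketch: listing uses the core Fabric items API; the refresh uses
# the on-demand job API. Treat the jobType value as an assumption to verify.
import requests

BASE = "https://api.fabric.microsoft.com/v1/workspaces/<workspace-id>"
HEADERS = {"Authorization": "Bearer <access-token>"}

# List Dataflow Gen2 items in the workspace.
items = requests.get(f"{BASE}/items?type=Dataflow", headers=HEADERS).json()
for item in items["value"]:
    print(item["id"], item["displayName"])

# Kick off an on-demand refresh of one dataflow (a long-running job).
dataflow_id = items["value"][0]["id"]
resp = requests.post(f"{BASE}/items/{dataflow_id}/jobs/instances?jobType=Refresh",
                     headers=HEADERS)
print(resp.status_code)  # expect 202 Accepted; poll the Location header
```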

Learn more about Dataflow APIs in the documentation.

Dataflow Gen2 parameterization (Preview)

Parameters in Dataflow Gen2 enhance flexibility by allowing dynamic adjustments without altering the dataflow itself. They simplify organization, reduce redundancy, and centralize control, making workflows more efficient and adaptable to varying inputs and scenarios.

Leveraging query parameters while authoring Dataflows Gen2 has been possible for a long time; however, it was not possible to override the parameter values when refreshing the dataflow. The ability to pass values from a pipeline into a Dataflow parameter for refresh has been one of the top ideas in the Fabric ideas portal since Dataflow Gen2 was released.

Screenshot of the idea for "Enable to pass variables from pipelines as parameters into a Dataflow Gen2" from the Fabric ideas portal

We are happy to announce the preview of the public parameters capability for Dataflow Gen2 with CI/CD support as well as the support for this new mode within the Dataflow refresh activity in Data pipelines.

Public parameters in Dataflow Gen2 with CI/CD support allow users to refresh their Dataflows by passing parameter values outside of the Power Query editor through the Fabric REST API or native Fabric experiences. This enables a dynamic experience with Dataflows, where each refresh can be run with different parameters that affect how the Dataflow is refreshed.
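
As a hedged sketch, a refresh request that overrides parameter values might look like this; the executionData shape and parameter fields are assumptions to confirm in the preview documentation, and the parameter names are placeholders:

```python
# A hedged sketch of a refresh that overrides public parameter values. The
# executionData shape and parameter fields are assumptions to confirm in the
# preview documentation; parameter names are placeholders.
import requests

url = ("https://api.fabric.microsoft.com/v1/workspaces/<workspace-id>"
       "/items/<dataflow-id>/jobs/instances?jobType=Refresh")
HEADERS = {"Authorization": "Bearer <access-token>"}

body = {
    "executionData": {
        "parameters": [
            {"parameterName": "Region", "type": "Text", "value": "EMEA"},
            {"parameterName": "LookbackDays", "type": "Number", "value": 30},
        ]
    }
}
resp = requests.post(url, headers=HEADERS, json=body)
resp.raise_for_status()
```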

To learn more about this new feature, refer to the Use public parameters in Dataflow Gen2 (Preview) documentation.

Lakehouse as an incremental refresh destination in Dataflow Gen2 (Preview)

Incremental refresh for Lakehouse destinations in Dataflow Gen2 is now in preview! This feature introduces a powerful way to optimize performance and ETL pipelines for one of the most popular data destinations.

With incremental refresh, users can ensure faster refresh cycles, improve system efficiency, and reduce resource consumption, making it an ideal solution for large-scale analytics and operational data scenarios. This functionality is particularly valuable for businesses leveraging Lakehouse-centric solutions to consolidate structured and unstructured data into a unified data model.

To use this capability, configure your Dataflow Gen2 with a Lakehouse destination and enable the incremental refresh settings within the dataflow editor as usual. Make sure to check out the documentation to learn more about the considerations when using Lakehouse as a destination.

To learn more, refer to the Incremental refresh in Dataflow Gen2 documentation.

SharePoint files as a destination in Dataflow Gen2 (Preview)

The SharePoint data destination in Dataflow Gen2 is now in preview! This innovative feature empowers users to seamlessly write CSV files directly into their designated SharePoint sites, streamlining data integration and enhancing team collaboration within Office 365.

Using this new capability, users can effortlessly configure their dataflow queries to output data into specific folders within SharePoint, facilitating smoother workflows and ensuring that your data remains accessible and actionable in your operational processes.

We encourage you to explore the possibilities of this feature and provide valuable feedback to help us refine and expand its functionality. Stay tuned for more updates and improvements as we continue to evolve data destinations for Dataflows Gen2!

To start using SharePoint data destinations in Dataflows Gen2, follow these simple steps:

  • Create a Dataflow Gen2.
  • Get data from one of your data sources.
  • In the destination settings, choose SharePoint as your output location.
  • Provide the URL of the specific SharePoint site where you want your CSV files to be saved.
  • Make sure you select the correct authentication method.
  • Execute the dataflow to generate and store your CSV files in the selected SharePoint destination.

To learn more, refer to the Dataflow Gen2 data destinations and managed settings documentation.

Natural language to custom column

Copilot is now available within the Custom column dialog of Dataflow Gen2.

In this new experience, you can have Copilot write a custom column formula based on a prompt that you provide.

For example, for a table that has the fields OrderID, Quantity, Category, and Total, you can pass a prompt like the following:

If the total order is more than 2000 and the category is B, then provide a discount of 10%. If the total is more than 200 and the category is A, then provide a discount of 25% but only if the quantity is more than 10 otherwise just provide a 10% discount.

After you submit the prompt, Copilot will process it and write the custom column formula for you, adding a name and a data type if necessary.
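
For reference, the branching logic that prompt describes looks like this when sketched in Python (the actual Copilot output is a Power Query M formula for the custom column):

```python
# The branching the prompt describes, sketched in Python for reference; the
# actual Copilot output is a Power Query M formula for the custom column.
def discount(total: float, category: str, quantity: int) -> float:
    if total > 2000 and category == "B":
        return 0.10
    if total > 200 and category == "A":
        return 0.25 if quantity > 10 else 0.10
    return 0.0
```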

Be sure to give this new Copilot experience inside Dataflow Gen2 a try and share your feedback with us.

Community

Power Designer – unleash your inner report wizard (Generally Available)

PowerBI.tips, in collaboration with Microsoft Fabric, is thrilled to announce that Power Designer is now Generally Available! This is a real time-saving application you won’t want to miss! Transition from generic, standard reports to sophisticated and highly customized presentations.

What is Power Designer about?

Power Designer is sleek, intuitive, and fun, making designing reports feel less like work and more like unleashing your inner artist.

  • Craft themes like a pro: create detailed theme files for your Power BI reports with ease. Customize colors, fonts, and styles to match your brand.

  • Real-Time visuals: watch your Power BI visuals update live as you build your style.

  • Multipage mastery: add background images to each page with a snap, transforming your reports into polished, magazine-worthy layouts.
  • AI-Powered: let AI take the wheel with auto-placement of visuals in your multipage templates.
  • Preview: test your new theme on reports already published in your workspaces with the preview feature.

Now that Power Designer has been released, it’s time to jump in and start creating. Head to your Fabric Workspaces, fire up Power Designer, and let your imagination run wild.

To learn more, refer to PowerBI.tips Designer Now in Fabric – Power Designer.

Check out the YouTube Video: Introducing Power Designer: Unleash Your Inner Report Wizard!

Closing

We hope that you enjoy the update! Be sure to join the conversation in the Fabric Community and check out the Fabric documentation to get deeper into the technical details. As always, keep voting on Ideas to help us determine what to build next. We are looking forward to hearing from you!

Have you ever found yourself frustrated by inconsistent item creation? Maybe you’ve struggled to select the right workspace or folder when creating a new item or ended up with a cluttered workspace due to accidental item creation. We hear you—and we’re excited to introduce the new item creation experience in Fabric! This update is designed … Continue reading “Introducing new item creation experience in Fabric”