Microsoft Fabric Updates Blog

Fabric July 2025 Feature Summary

Welcome to the July 2025 Fabric Feature Summary! This month’s update covers major events like the return of the Microsoft Fabric Community Conference in Vienna, and the 10th anniversary of Power BI. Key platform enhancements include new domain tags and updates to the default category in the OneLake catalog. You’ll also find highlights on data science developments, such as Fabric data agent integration with Microsoft Copilot Studio. Explore the innovations shaping the future of Fabric in this month’s edition.

Contents

Events and Announcements

The Microsoft Fabric Community Conference is back x2!

Join us in Vienna! With 10 full-day tutorials, a Partner pre-day, more than 120 sessions from product teams and the community, AND our swag-filled Power Hour, FabCon Europe will be landing in Vienna, Austria, September 15–18, 2025.

Register with code FABCOMM to save €200! Early bird pricing ends July 31st!

Can’t make it to Europe this year? FabCon is happening again in the United States in Atlanta. Mark your calendars for March 16-20, 2026.

Register here and use code MSCATL for a $200 discount on top of current Super Early Bird pricing!

Power BI Turns 10!

On July 24th, the Fabric Community gathered to celebrate Power BI’s 10th birthday. If you missed it, there’s still time to join in on the fun.

Fabric Platform

Domain tags

Microsoft Fabric's data mesh architecture supports organizing data into domains and subdomains, helping admins manage and govern data per business context with various delegated settings. The domain and subdomain structure enables data consumers to filter and discover content from the area most relevant to them.

We have taken the data mesh architecture a step further by allowing each domain to have its own list of tags tailored to their specific business context and needs.

Tenant and domain administrators can now create a list of tags within each domain. These tags are available for users to apply to items associated with the domain. Consumers can filter and search for items by tag, making specific content easier to find and improving overall discoverability.
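Conceptually, tag-based discovery is a filter over catalog items scoped to a domain. A minimal sketch of the idea (hypothetical item records, not a Fabric API):

```python
# Hypothetical catalog items, each carrying domain-scoped tags.
items = [
    {"name": "SalesLakehouse", "domain": "Sales", "tags": ["gold", "certified"]},
    {"name": "DraftReport", "domain": "Sales", "tags": ["draft"]},
    {"name": "HRDataset", "domain": "HR", "tags": ["certified"]},
]

def filter_by_tag(items, domain, tag):
    """Return the names of items in a domain that carry the given tag."""
    return [i["name"] for i in items if i["domain"] == domain and tag in i["tags"]]

print(filter_by_tag(items, "Sales", "certified"))  # ['SalesLakehouse']
```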

Refer to the documentation to learn more about Fabric domains, and Tags in Microsoft Fabric.

Updated Default Category in the OneLake catalog

To streamline discovery, the OneLake catalog now applies smarter default category selections based on your entry point. Power BI users are presented with the Insights category by default, which includes tools designed to facilitate data analysis, visualization, and reporting to support data-driven decision making. Fabric users continue to see the Data category by default, which includes raw data sources, structured databases, and other foundational assets. If you select a different category, that choice is saved and maintained across sessions.

This change reduces friction by automatically showing the most relevant content, so you no longer need to manually switch categories when first entering the catalog.

Learn more about category behavior in the OneLake catalog documentation.

Data Science

Fabric data agent integration with Microsoft Copilot Studio

This integration brings a powerful new capability for agents to operate effectively across tools, redefining how organizations build, deploy, and scale intelligent agents across their enterprise data ecosystems. Data Agents in Fabric are AI-powered assistants that can synthesize enterprise data, understand data schemas, enforce governance policies, and interpret business contexts to surface timely, relevant, and actionable insights.

By embedding these agents into Copilot Studio, organizations can build intelligent agents deeply informed by their most trusted data sources, enabling agent-to-agent collaboration and leveraging model context protocols for richer, more complete answers. This integration empowers business users to ask questions and get data-driven answers within their chat windows, accelerating innovation and ensuring data consistency across departments.

Video titled: Enrich custom agents in Microsoft Copilot Studio with insights from Fabric data agents

To learn more about this integration, check out the Fabric Data Agents + Microsoft Copilot Studio: A New Era of Multi-Agent Orchestration (Preview) blog post, or refer to the Consume a Fabric Data Agent in Microsoft Copilot Studio (Preview) documentation.

Data Source Instructions for the Fabric Data Agent

The Data Source Instructions feature in Fabric’s Data Agent empowers users to deliver more accurate and relevant answers from their structured data. By allowing instructions to be scoped at the data source level, users can guide the AI on how to query specific tables, apply filters, interpret column values, or join datasets correctly.

This targeted configuration ensures the agent has the right context when interacting with each data source, reducing ambiguity and improving the quality of responses—especially in environments with multiple schemas or complex business logic. Whether you’re working with a Lakehouse or an Eventhouse KQL database, data source instructions help the AI act with greater precision and reliability.
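For illustration only, source-scoped instructions can be thought of as a mapping from each data source to guidance text the agent applies when querying it. The shape and names below are assumptions, not the Fabric configuration schema (the real setup happens in the Data Agent UI):

```python
# Hypothetical per-source instructions; illustrative only.
data_source_instructions = {
    "sales_lakehouse": (
        "Use the dim_date table for fiscal calendar lookups. "
        "Exclude rows where is_deleted = 1."
    ),
    "telemetry_eventhouse": (
        "Timestamps are UTC; aggregate with bin() at 1h granularity "
        "unless the user asks for finer detail."
    ),
}

def instructions_for(source: str) -> str:
    """Look up the guidance the agent should apply for a given source."""
    return data_source_instructions.get(source, "")

print(instructions_for("sales_lakehouse"))
```

The point of the scoping is that guidance about one source's tables and filters never leaks into queries against another source.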

To learn more about how to use the new data source instructions, refer to the Data Agent configurations documentation.

Get answers faster with streaming results in the Fabric Data Agent

The Data Agent now supports streaming results, allowing users to see live, incremental updates as their query is processed. Instead of waiting for the final answer, users can watch each step—like data source selection, query generation, and execution—unfold in real time. This provides faster access to partial results, greater transparency into how answers are generated, and improved troubleshooting for long-running or complex queries.
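The user-visible effect resembles consuming a stream of step updates. A hedged sketch (hypothetical step names and generator, not the Data Agent API):

```python
def run_query_streaming(question: str):
    """Yield incremental status updates as a query is processed.
    The step names mirror the stages described above; the
    implementation is illustrative only."""
    yield ("data_source_selection", "Selected: sales_lakehouse")
    yield ("query_generation", "SELECT region, SUM(amount) FROM sales GROUP BY region")
    yield ("execution", "42 rows returned")
    yield ("answer", f"Answer to: {question}")

# Each update is shown as soon as it is produced, rather than
# waiting for the final answer.
for step, detail in run_query_streaming("total sales by region"):
    print(f"[{step}] {detail}")
```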


To learn more about how to use the Data Agent refer to the Data Agent tutorial.

Improved run execution flow for Data Agent

We’ve rolled out an enhanced run experience that makes it easier to follow how your query is processed from start to finish. This update includes improved naming for each run step, a cleaner visual layout, and streamlined backend execution. These changes make it simpler to understand the agent’s reasoning, while behind-the-scenes optimizations help deliver results faster than before.

To learn more about how to use the data agent, refer to the data agent tutorial.

Real-Time Intelligence

Simplified Rule and Object Creation Experience

This intuitive and streamlined experience allows you to create rules at the desired level of data tracking. Whether you wish to monitor data at the event level or object level, you now have the flexibility to do so as you build your rule.

One of the major improvements includes the automatic display of the rule definition pane once you connect to a stream of events. You can immediately start defining rules without the extra step of manually opening the pane.

To address your feedback on the object creation experience, object creation has also been integrated directly into the rule definition process. This removes the complexity of needing to create an object before defining a rule. Like the onramp experiences, you now have the option to group data by specific fields during rule creation.

By addressing these key areas of feedback, we aim to make rule creation not only faster but also more user-friendly, ensuring that the native user experience is as efficient and enjoyable as possible.

To learn more about creating rules in Activator, refer to our Create a rule in Fabric Activator documentation.

Send Alerts to Teams Group and Channel

With Activator, you can automate your business process like sending email or Teams alerts, running a notebook or pipeline, and executing Power Automate flows when certain data conditions are met.

What’s new?

One of the most popular actions is sending Teams messages. We are now expanding this functionality to enable the sending of alerts not only to individuals, but also to group chats and channels. With this improvement, you can better incorporate automatic alerts in your business process by sending alerts to existing chats or channels where relevant topics are discussed.

How it works

If you are creating an Activator rule from one of the embedded experiences, like Power BI, Real-Time Hub, or KQL query set, you may not see Teams group chat and channel as available action types. To send Teams alerts to groups and channels, first create the rule with any action type. Once the rule is successfully created, select ‘Open’ to open the core Activator experience and choose Teams group or channel as the preferred action.

To learn more, please refer to the Allowed chats and channel for Teams notifications documentation.

Embedded experience in Power BI

Embedded experience in Real-Time Hub and KQL query set

Try it out and share your feedback!

To try this feature now, head over to Fabric. We look forward to hearing from you. If you have any feedback or ideas, join the discussion in the Activator community.

Pass Parameter Values to Fabric Items (Preview)

Activator enables you to automatically activate Fabric items like pipelines and notebooks whenever certain data conditions are met.

What’s new?

Today, you can not only activate and execute Fabric items but also pass values to the parameters defined in them. You can pass hardcoded values or dynamic values from the data source. With this improvement, you can run pipelines and notebooks using the details of the event that triggered the activation, unblocking advanced use cases and increasing scalability.
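As a sketch of the idea (hypothetical event payload and parameter names, not the Activator schema), mixing hardcoded and event-derived parameter values looks like:

```python
# Hypothetical event that triggered the Activator rule.
event = {"deviceId": "sensor-17", "temperature": 81.4, "site": "Vienna"}

# Parameter values passed to the activated pipeline or notebook:
# some hardcoded, others pulled dynamically from the triggering event.
parameters = {
    "alert_level": "high",            # hardcoded value
    "device_id": event["deviceId"],   # dynamic, from the event
    "reading": event["temperature"],  # dynamic, from the event
}

print(parameters)
```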

How it works

When selecting Fabric item as an action to your Activator rule, you will now see a section called ‘Parameters’ where you can select ‘Edit action’ and define the parameter values.

If you are creating an Activator rule from one of the embedded experiences, like Power BI, Real-Time Hub, or KQL query set, you can first create the rule with any action type. Once the rule is successfully created, select ‘Open’ to open the core Activator experience and configure the action with parameters.

Refer to the Pass parameter values to Fabric items documentation for more details.

Embedded experience in Power BI

Embedded experience in Real-Time Hub and KQL query set

Try it out and share your feedback!

To try this feature now, head over to Fabric. We look forward to hearing from you. If you have any feedback or ideas, join the discussion in the Activator community.

Data Factory

Incremental Copy in Copy job (Generally Available)

Incremental copy is one of the most popular features in Copy job, significantly improving efficiency by transferring only new or changed data—saving time and resources with minimal manual effort. When you select incremental copy, the first run performs a full data copy, and subsequent runs move only the changes.

  • For databases, this means only new or updated rows are transferred. If CDC is enabled, inserted, updated, and deleted rows are included.
  • For storage sources, only files with a newer LastModifiedTime are copied.
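For storage sources, the selection logic amounts to keeping a watermark and copying only files modified after it. A minimal sketch (hypothetical file records, not the Copy job implementation):

```python
from datetime import datetime

# Hypothetical file listing from the storage source.
files = [
    {"path": "orders_2025_06.parquet", "last_modified": datetime(2025, 6, 30)},
    {"path": "orders_2025_07.parquet", "last_modified": datetime(2025, 7, 15)},
]

def files_to_copy(files, watermark):
    """Select only files with a LastModifiedTime newer than the last run."""
    return [f["path"] for f in files if f["last_modified"] > watermark]

# First run: a very old watermark selects everything (full copy);
# subsequent runs: only the changes since the previous watermark.
print(files_to_copy(files, datetime(2025, 7, 1)))  # ['orders_2025_07.parquet']
```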

With the general availability of incremental copy in Copy job, you can use it with confidence in production environments. Along with this release, a new meter, Data Movement – Incremental Copy, takes effect with a consumption rate of 3 CU; it applies while only delta data is moved, using a more efficient approach that largely reduces processing time. The full/batch copy functionality continues to emit usage on the existing Data Movement meter at a consumption rate of 1.5 CU.
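To see why the higher meter rate can still cost less overall, compare consumption under assumed runtimes. The durations below are illustrative, not benchmarks; the arithmetic simply multiplies rate by duration:

```python
# Meter rates from the announcement; durations are hypothetical.
full_copy_rate, incremental_rate = 1.5, 3.0          # CU
full_copy_seconds, incremental_seconds = 3600, 600   # assumed runtimes

full_cost = full_copy_rate * full_copy_seconds             # 5400.0 CU-seconds
incremental_cost = incremental_rate * incremental_seconds  # 1800.0 CU-seconds

print(f"Full copy: {full_cost} CU-s, incremental: {incremental_cost} CU-s")
```

Under these assumptions, the incremental run bills at double the rate but for a sixth of the time, so it consumes a third of the capacity.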

You can find more details in Pricing for Copy job.

Upsert Data to Fabric Lakehouse table and other data stores in Copy job

You can now choose to merge data directly into more destination stores, including Fabric Lakehouse tables, Salesforce, Salesforce Service Cloud, Dataverse, Dynamics 365, Dynamics CRM, and Azure Cosmos DB for NoSQL. This gives you further flexibility to tailor data ingestion to your specific needs.

For more information, refer to the Supported connectors documentation.

More connectors, more possibilities in Copy job

More source and destination connections are now available, giving you greater flexibility for data ingestion with Copy job. We’re not stopping here—even more connectors are coming soon!

  • SFTP
  • FTP
  • IBM Db2 database
  • Oracle Cloud Storage
  • Dataverse
  • HTTP
  • Dynamics 365
  • Dynamics CRM
  • Azure Cosmos DB for NoSQL
  • Azure Files
  • Azure Tables
  • ServiceNow
  • Vertica
  • MariaDB
  • Azure Cosmos DB for MongoDB
  • MongoDB Atlas
  • MongoDB
  • OData
  • SharePoint Online list
  • Dynamics AX
  • Azure AI Search

For more information, refer to the Supported connectors in Copy job.

Now supported in Copy job – Copy data into Snowflake and Fabric Data Warehouse from On-Premises

Previously, when trying to copy data from on-premises data stores into data warehouses like Snowflake or Fabric Data Warehouse, the Copy job UI would indicate that this scenario was not yet supported. That limitation has now been removed — it now works!

The improvement comes from native support for staging copy. Behind the scenes, data is first copied from the on-premises source (via Data Gateway) to staging storage in Fabric OneLake, where it is automatically shaped to meet the format requirements of the COPY statement used by Snowflake and Fabric Data Warehouse. Then, the COPY statement is invoked to load the data from staging into the target warehouse — delivering a seamless, end-to-end data movement experience.
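The two-phase pattern can be sketched as follows. The helper name is hypothetical and the generated SQL is a simplified stand-in for the COPY statement Fabric actually emits:

```python
def build_copy_statement(target_table: str, staging_path: str,
                         file_format: str = "PARQUET") -> str:
    """Phase 2: load staged files into the warehouse via a COPY statement.
    Simplified generic syntax, for illustration only."""
    return (
        f"COPY INTO {target_table} "
        f"FROM '{staging_path}' "
        f"WITH (FILE_TYPE = '{file_format}')"
    )

# Phase 1 (not shown): the gateway copies on-premises data into OneLake
# staging storage and shapes it to the format the COPY statement requires.
sql = build_copy_statement("dbo.Orders", "https://onelake/staging/orders/")
print(sql)
```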

With this enhancement, previously unsupported scenarios—such as copying data from on-premises sources to warehouse destinations like Snowflake or Fabric Data Warehouse—are now fully supported, with no manual intervention required.

Manual Control of Auto-Refresh in pipelines

Manual control of the auto-refresh feature in Data pipeline activities is now available. Previously, when you ran your pipeline, the Output and Monitoring activity lists would refresh automatically for 5 minutes, as demonstrated in the screenshot.

What’s new?

When executing a pipeline, the Output and Monitoring tabs continue to refresh the activity list in real time by default. However, users now have the option to disable auto-refresh if desired.

Why turn off auto-refresh?

If you’re scrolling through a pipeline with many activities, auto-refresh can be disruptive – especially if the list keeps jumping as new items load. That’s why we’ve added a ‘Turn off auto-refresh’ button right in the activity list banner.

You’ll be able to find this feature in both the Output tab and in the Monitoring Hub for pipelines.

How it works

You can access the manual refresh feature from either the Output tab when you run a pipeline or when you’re monitoring your active pipelines in the monitoring hub experience.

Output tab:

  1. When a pipeline starts running, a banner appears indicating that auto-refresh is active.
  2. Select ‘Turn off auto-refresh’ to stop the automatic updates.
  3. The banner updates to reflect that auto-refresh is disabled.
  4. You can still manually refresh the activity list at any time.
  5. The banner will automatically close when the pipeline is completed or cancelled.

Monitoring Hub experience:

  1. Navigate to your running pipeline within Monitoring Hub, and you’ll find the same banner that indicates auto-refresh is active.
  2. The same controls apply – select the ‘Turn off auto-refresh’ button if you want to explore the activity list without interruptions.

To learn more about monitoring your pipeline runs, please visit the documentation on How to monitor pipeline runs in Monitoring hub.

Mirroring

Mirroring for Azure SQL Database protected by a firewall (Generally Available)

Mirroring for Azure SQL Database with Virtual Network (VNet) Data Gateway and On-Premises Data Gateway (OPDG) are now generally available!

Network security and secure data transmission are crucial for enterprise customers handling sensitive information. Using the VNet gateway or OPDG, you can mirror Azure SQL Database protected by a firewall with secure connections established to your source databases through a private endpoint or from a specific private network.

To learn more about Mirroring for Azure SQL Database, refer to the Mirrored Databases from Azure SQL Database documentation.

We are actively working to improve and expand the data gateway support for other mirrored sources in Fabric, stay tuned.

Resume mirrored database when Fabric capacity restores

We’ve heard your feedback about the friction that occurs when pausing and resuming Fabric capacity: the mirrored database displayed an inaccurate running status and required a workaround to verify its actual state and resume its functionality. We are pleased to share that this experience has now been improved.

Once Fabric capacity is resumed, the mirrored database shows a ‘Paused’ status, and you can click the ‘Resume replication’ button to continue. Mirroring proceeds from the point at which it was previously paused. Note that if the capacity remains paused for a long time, mirroring may not resume where it left off and will instead reseed the data from the beginning. This can happen, for example, if the database transaction log becomes full.


Learn more from Monitor Mirrored Database and Fabric capacity change behaviors.

Customize retention period for mirrored data in Fabric portal

In May 2025, we announced that you can configure the retention period for mirrored data via public API according to your needs. Now the setting is available on the UI as well. To check or update the retention setting, in the Fabric portal, navigate to your mirrored database -> Settings -> Maintenance tab as shown below, and specify the retention threshold.

To learn more, refer to the retention for mirrored data documentation.

Databases

Cosmos DB (NoSQL) in Fabric (Preview)

We’re excited to announce the preview of Azure Cosmos DB (NoSQL) in Microsoft Fabric, bringing globally distributed, low-latency, high-throughput data to the Fabric ecosystem. This integration allows developers and data teams to connect Cosmos DB directly to Fabric workloads like Lakehouses, Notebooks, and Power BI—unlocking powerful, real-time analytics on operational data without complex ETL. With this preview, customers can seamlessly analyze NoSQL data alongside other enterprise sources, and accelerate the development of modern, AI-driven applications on a unified platform.

To learn more about Cosmos DB, check out the Announcing Cosmos DB in Microsoft Fabric Featuring New Capabilities! (Preview) blog post, or refer to the What is Cosmos DB in Microsoft Fabric (preview)? documentation.

Video titled: Build Intelligent Apps with Cosmos DB in Microsoft Fabric

Additional authors – Madhu Bhowal, Ashit Gosalia, Aniket Adnaik, Kevin Cheung, Sarah Battersby, Michael Park Esri is recognized as the global market leader in geographic information system (GIS) technology, location intelligence, and mapping, primarily through its flagship software, ArcGIS. Esri empowers businesses, governments, and communities to tackle the world’s most pressing challenges through spatial analysis. … Continue reading “ArcGIS GeoAnalytics for Microsoft Fabric Spark (Generally Available)”