Microsoft Fabric Updates Blog

The August 2025 Fabric Feature Summary showcases several exciting updates designed to streamline workflows and enhance platform capabilities. Notably, users will benefit from the new flat list view in Deployment pipelines, making navigation and management more intuitive. In addition, expanded support for service principals and cross-tenant integration with Azure DevOps reflects Microsoft’s commitment to versatile and secure enterprise solutions.

Last chance to join us in Vienna – FabCon is almost sold out!

FabCon Europe is landing in Vienna from September 15–18, and the conference organizers are offering a last-minute discount on tickets.

Come connect with other data enthusiasts! FabCon features 10+ full-day tutorials, a Partner pre-day, 120+ sessions from Microsoft product teams and the community, the Power BI Dataviz World Championships, our swag-packed Power Hour, and an announcement-filled keynote.

Email info@espc.tech by Friday August 29 to score the discount and secure your spot.

Can’t make it to Europe this year? FabCon is happening again in the United States in Atlanta. Mark your calendars for March 16-20, 2026.

Register here and use code MSCATL for a $200 discount on top of current Super Early Bird pricing!

Fabric Platform

New in UI – Flat list view in Deployment pipelines

We’ve rolled out a new ‘Flat list’ view for stage content in Deployment pipelines! This update enables selecting items across workspace folders and offers greater clarity when using the ‘Select related’ button during deployment.

To switch between views, use the new toggle located in the top-right corner of the stage content area:

Once enabled, the new view provides:

  • A single list displaying all workspace items.
  • A new ‘Location’ column showing each item’s full path.

Your selected view remains active even when switching between stages.

To learn more, refer to the Deploy content using Deployment pipelines documentation.

Microsoft Fabric APIs Specification

I’m excited to share that we’ve successfully published the Microsoft Fabric APIs Specification in the microsoft/fabric-rest-api-specs GitHub repository!

This new repository serves as the official source for REST API specifications for Microsoft Fabric. It’s designed to provide developers with a comprehensive, well-organized, and easily accessible collection of public API specifications.

Whether you’re building custom solutions, integrating with Microsoft Fabric, or exploring its capabilities, this resource will help you efficiently understand and leverage the available APIs.

What’s Inside?

  • OpenAPI specifications for Microsoft Fabric public APIs
  • Up-to-date documentation to support development and integration
  • A central hub for feedback, collaboration, and contributions

We welcome you to explore the repository and start building with Microsoft Fabric APIs!
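If you want to explore a spec programmatically, a minimal Python sketch along the following lines can load one of the published OpenAPI files and list the operations it describes (the file path is purely illustrative; the actual folder layout is defined by the repository itself):

import json

# Path to one of the OpenAPI spec files inside a local clone of microsoft/fabric-rest-api-specs
# (illustrative path; browse the repository for the real layout).
spec_path = "fabric-rest-api-specs/specification/example/openapi.json"

with open(spec_path) as f:
    spec = json.load(f)

# Walk the OpenAPI 'paths' object and print each HTTP operation it describes.
for path, methods in spec.get("paths", {}).items():
    for method, operation in methods.items():
        if not isinstance(operation, dict):
            continue  # skip non-operation entries such as shared 'parameters'
        print(f"{method.upper():7} {path} -> {operation.get('operationId', '<no operationId>')}")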

Service Principal and Cross-Tenant Support for Azure DevOps (Preview)

Support for service principals and cross-tenant integration with Azure DevOps is scheduled to launch in mid-September 2025.

This highly anticipated feature enables a comprehensive set of automation processes for Fabric customers. For example, users can now automate workspace setup using tools like the Fabric CLI and the Terraform provider, and connect the workspace to Azure DevOps repositories, even across tenants, via service principals.
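For illustration only, here is a hedged Python sketch of what such automation might look like once the feature is available: a service principal token is used to call the existing Git Connect REST API and attach a workspace to an Azure DevOps repository. The payload field names follow the current Git REST API documentation and should be confirmed there; all IDs and names below are placeholders.

import requests
from azure.identity import ClientSecretCredential

# Service principal credentials (placeholders); with cross-tenant support, the SPN can
# live in a different tenant than the Azure DevOps organization.
credential = ClientSecretCredential(tenant_id="<tenant-id>",
                                    client_id="<app-id>",
                                    client_secret="<secret>")
token = credential.get_token("https://api.fabric.microsoft.com/.default").token

workspace_id = "<workspaceId>"
body = {
    "gitProviderDetails": {           # field names per the Git - Connect REST API docs
        "gitProviderType": "AzureDevOps",
        "organizationName": "contoso-org",
        "projectName": "Analytics",
        "repositoryName": "fabric-items",
        "branchName": "main",
        "directoryName": "/",
    }
}

resp = requests.post(
    f"https://api.fabric.microsoft.com/v1/workspaces/{workspace_id}/git/connect",
    json=body,
    headers={"Authorization": f"Bearer {token}"},
)
resp.raise_for_status()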

For more in-depth information, refer to the documentation.

Data Engineering

Autoscale Billing for Spark (Generally Available)

Microsoft Fabric has introduced Autoscale Billing for Apache Spark. This is a serverless, pay-as-you-go billing model designed to offer flexibility, transparency, and cost efficiency for Spark workloads.

With Autoscale Billing, Spark jobs run independently of your Fabric capacity and are billed only for execution time, giving teams the freedom to scale compute without impacting shared workloads.

Why it matters:

  • Cost-efficient – Pay only for job runtime, no idle costs.
  • Dedicated Spark compute – No contention with other Fabric workloads.
  • Quota-aware controls – Manage and monitor Spark usage with Azure Quota Management.
  • Subscription-level visibility – Track CU consumption and request more when needed.

Autoscale Billing is now available in all regions that support Fabric Data Engineering workloads. Enable it today from the Fabric Capacity Settings page and start scaling Spark on your terms!

Refer to the documentation to learn how to configure Autoscale Billing.

Job Bursting Control for Fabric Data Engineering Workloads

Microsoft Fabric Data Engineering now gives capacity admins more control over job bursting. By default, Fabric allows Spark jobs to burst up to 3× their base CU allocation, improving throughput for heavy workloads.
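For a concrete sense of the numbers, here is a small worked example, assuming the documented mapping of one capacity unit (CU) to two Spark VCores; the capacity size is illustrative:

# Illustrative figures for an F64 capacity (assumption: 1 CU = 2 Spark VCores).
capacity_cus = 64
base_vcores = capacity_cus * 2      # 128 Spark VCores available at base capacity
burst_factor = 3                    # default job-level burst limit
print(base_vcores * burst_factor)   # 384 VCores: maximum a single job can consume with bursting enabled
print(base_vcores)                  # 128 VCores: per-job cap once job-level bursting is disabled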

With the new ‘Disable Job-level Bursting’ switch (Admin Portal → Capacity Settings → Spark Compute), admins can now choose how Spark capacity is consumed:

How it works

  • Enabled (default): A single Spark job can use the full burst limit (up to 3X the CUs of the base capacity).
  • Disabled: Jobs are capped at base capacity, preserving concurrency and preventing one job from monopolizing resources.
  • Autoscale Billing: Bursting is not applicable, since compute runs purely on demand.

When to use it

  • Keep bursting enabled → For large ETL pipelines or heavy batch jobs needing maximum throughput.
  • Disable bursting → For interactive, multi-user notebook environments where fair sharing is more important.

This new control helps admins balance performance vs. concurrency — tailoring Spark behavior to their organization’s workload patterns.

Refer to the Admin control: Job-level bursting switch documentation to learn more about this setting.

JobInsight Diagnostics Library (Preview)

JobInsight is a Java-based diagnostic library that enables developers and data engineers to analyze completed Spark applications directly within Microsoft Fabric Notebooks.

JobInsight provides two core capabilities:

  • Interactive Spark Job Analysis
    Offers structured APIs that return execution data – such as Spark queries, jobs, stages, tasks, and executors – as Spark Datasets for deep-dive analysis.
  • Spark Event Log Access
    Allows users to copy event logs to a OneLake or ADLS Gen2 directory for long-term storage or custom offline diagnostics.

With its structured APIs, JobInsight makes it easy to investigate performance bottlenecks, debug execution issues, and extract actionable insights programmatically. Users can also save metrics to Lakehouse tables and copy logs for persistent storage. Additionally, JobInsight supports reusing past analyses and provides configuration options for handling large or deeply nested Spark event logs, making it a powerful and flexible observability tool for Spark workloads.

To learn more, refer to the Gain Deeper Insights into Spark Jobs with JobInsight in Microsoft Fabric blog post.

Enhanced Monitoring for Spark High Concurrency

Enhancements to the Spark application detail view now improve monitoring for Notebooks running in high concurrency mode, whether triggered manually or via pipeline. These changes offer better visibility into Spark applications, support efficient debugging, and aid performance tuning across multiple Notebooks.

Jobs Tab: Enhanced Job-Level Insights

The Jobs tab now offers more granular visibility into individual Spark jobs within a high concurrency session:

  • Notebook Context: When multiple Notebooks are running within the same application, the corresponding Notebook name is now displayed alongside each job.
  • Code Snippet View: Click the code icon to view and copy the code snippet associated with each job.
  • Filtering: Filter Spark jobs by Notebook to focus on one or more Notebooks within the session.

Logs Tab: Notebook-Aware Logging

To simplify debugging in shared high concurrency Spark sessions:

  • REPL ID Prefixing: Each log entry now includes a REPL ID prefix to help link logs to the appropriate Notebook.
  • Notebook Filtering: Filter logs by Notebook to inspect output more precisely – ideal for collaborative or parallel workflows.

Item Snapshots Tab: Hierarchical Notebook View

The Item Snapshots tab now provides a tree view of all Notebooks participating in a shared Spark session:

  • Browse All Notebooks: View snapshots of both completed and in-progress Notebook runs.
  • Snapshot Details: For each Notebook, you can access:
    • Code at the time of submission
    • Execution status per cell
    • Output for each cell
    • Input parameters
  • Pipeline Integration: If the application runs within a pipeline, the related pipeline and Spark activity are also displayed for better traceability.

Refer to the Enhanced Monitoring for Spark High Concurrency Workloads blog post for more information.

Use Fabric User Data Functions with Pandas DataFrames and Series in Notebooks

A major upgrade to Notebook integration with Fabric User Data Functions (UDFs) is now available:

Pandas DataFrames and Series can now be used as input and output types—thanks to native integration with Apache Arrow!

This update brings higher performance, improved efficiency, and greater scalability to your Fabric Notebooks—enabling seamless function reuse for large-scale data processing across Python, PySpark, Scala, and R.

With this release, Pandas DataFrames and Series are now supported as first-class input and output types for UDFs, enabled by deep integration with Apache Arrow, a highly efficient columnar memory format optimized for analytics workloads.

Benefits of Arrow Integration:

  • High-performance serialization: Eliminates costly JSON encoding and decoding.
  • Zero-copy data sharing: Reduces overhead during UDF execution.
  • Scalability: Easily handle millions of rows in memory.
  • Seamless compatibility: Works with your existing Pandas logic.

Instead of manually converting large datasets to JSON, developers can now natively pass Pandas DataFrames to UDFs, operate on them with minimal overhead, and return results efficiently – all while enjoying faster execution and reduced memory usage.
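For illustration, here is a minimal, hedged sketch of a function that accepts and returns a Pandas DataFrame, following the decorator pattern used in the fabric-user-data-functions samples; the column names and logic are invented for the example:

import fabric.functions as fn
import pandas as pd

udf = fn.UserDataFunctions()

@udf.function()
def add_total_price(orders: pd.DataFrame) -> pd.DataFrame:
    # Ordinary Pandas logic; Apache Arrow handles serialization between the
    # Notebook and the function runtime, so no JSON conversion is needed.
    orders["total_price"] = orders["quantity"] * orders["unit_price"]
    return orders

From a Notebook, you would then invoke the published function with a DataFrame argument and receive a DataFrame back; see the blog post below for the exact invocation steps.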

Refer to the Use Fabric User Data Functions with Pandas DataFrames and Series in Notebooks blog post for more information.

Notebook snapshot for running Notebooks

Notebook snapshots are now available for running Notebooks, regardless of the method used to trigger them. Whether executed through pipeline activities in a high-concurrency shared session or a standard session, via a direct Notebook schedule, or initiated using NotebookUtils.run() or NotebookUtils.runMultiple(), you can now view the Notebook snapshot in near real time.

This snapshot allows you to:

  • View the Notebook code as it existed at the time of submission.
  • Monitor cell-level execution duration, status, and output.
  • See any cell-level errors in near real-time.
  • Inspect the input parameters used for the specific run, if applicable.

This enhancement gives you deeper visibility into Notebook execution, making it much easier to understand what’s happening behind the scenes. It is especially valuable when diagnosing long-running scheduled or pipeline-triggered Notebook runs.

New capabilities

  • Pinpoint bottlenecks in execution.
  • Identify long-running cells.
  • View real-time errors alongside the corresponding code and surrounding context—even while the Notebook is still running.

This new capability is designed to help optimize performance and simplify troubleshooting for Notebook workloads.

OpenAPI spec generation in Fabric User Data Functions (Preview)

The Functions portal includes a Generate invocation code feature that allows automatic generation of an OpenAPI specification for Fabric User Data Functions.

The OpenAPI Specification, formerly known as the Swagger Specification, is a widely used, language-agnostic description format for REST APIs. It allows humans and computers alike to discover and understand the capabilities of a service in a standardized way, which is critical for creating integrations with external systems, AI agents, and code generators.

To access this feature, update to the latest version of the fabric-user-data-functions library within the Library Management experience.

Refer to the blog post on OpenAPI specification code generation now available in Fabric User Data Functions for more information.

New test capability for Fabric User Data Functions (Preview)

A new Test capability is available for Fabric User Data Functions. This feature enables users to test and validate functions in real-time prior to publishing. With the Test capability, you can execute your functions in a dedicated Python runtime and get immediate feedback for all your code changes including libraries and connections.

To get started, open the Functions portal and locate the mode switcher on the top right corner to switch to Develop mode. In this mode, the controls in the Functions explorer will switch from using the Run capability to the Test capability. After opening the Test panel, you can execute your functions and get their outputs, logs and errors. Once you’ve completed your functions tests, you can publish your functions for other Fabric items and users to run them.

To learn more, refer to the Test your User Data Functions in the Fabric portal (preview) documentation.

Data Science

Serve real-time predictions seamlessly with ML model endpoints (Preview)

Fabric now supports real-time inferencing with ML models via secure, scalable, and easy-to-use online endpoints. These endpoints are available as built-in properties of most Fabric models—and they require minimal setup to kick off fully managed deployments.

Users can activate and customize model endpoints with a public-facing REST API or directly from the Fabric interface. Endpoints support one-click deployment, auto-scaling out of the box, and other settings to support your custom solutions. A low-code interface enables you to test predictions easily before going live, making it simpler to integrate machine learning into real-time applications.
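To give a feel for the client side, the following is a hedged sketch of calling an activated endpoint over REST. The endpoint URL, token scope, and payload shape are assumptions made for illustration: copy the real URL from the model item in Fabric and match the payload to your model's input signature.

import requests
from azure.identity import DefaultAzureCredential

# Assumption: an Entra ID bearer token is accepted; the scope below is illustrative.
token = DefaultAzureCredential().get_token("https://api.fabric.microsoft.com/.default").token

endpoint_url = "<endpoint URL copied from the ML model item>"   # placeholder
payload = {"inputs": [{"feature_1": 3.2, "feature_2": "blue"}]}  # illustrative input shape

resp = requests.post(endpoint_url, json=payload,
                     headers={"Authorization": f"Bearer {token}"})
resp.raise_for_status()
print(resp.json())  # predictions returned by the endpoint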

To learn more about this integration, check out the Serve real-time predictions seamlessly with ML model endpoints blog post or refer to the Serve real-time predictions with ML model endpoints (Preview) documentation.

Expanded Data Agent Support for Large Data Sources (Preview)

Data Agent is officially lifting restrictions on adding Data Sources with larger schema sizes. Users can now add Kusto, Semantic Model, Lakehouse, and Warehouse Data sources that contain over 100 Columns + Measures and more than 1000 Tables to the Data Agent. This change allows users to bring larger-scale databases and semantic models into Fabric’s Data Agent, unlocking deeper insights and enhanced capabilities.

For example, a table named patient_medical_records with 103 columns exceeds the limit that the Fabric data agent previously supported for data sources. Despite the larger schema size, this table can now be added to a Fabric data agent. When selecting it as an input, users may see a warning that the accuracy of results can vary with larger schema sizes, but the table can still be selected.

Please refer to the Expanded Data Agent Support for Large Data Sources blog post for more details.

Data Warehouse

Refresh SQL analytics endpoint Metadata REST API (Generally Available)

In July 2025, the SQL analytics endpoint Metadata Sync REST API became generally available. With this API, you can programmatically trigger a refresh of your SQL analytics endpoint to keep tables in sync with any changes made in your lakehouse and in native and mirrored databases, ensuring your data stays up to date as needed.

To use this feature, simply pass the workspace ID and SQL analytics endpoint ID; the API returns detailed synchronization status for each table, including start and end times, status, last successful sync time, and any error messages if applicable.

Example: How to refresh a specified SQL analytics endpoint in a workspace.

POST https://api.fabric.microsoft.com/v1/workspaces/{workspaceId}/sqlEndpoints/{sqlEndpointId}/refreshMetadata
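For illustration, a minimal Python sketch of the same call, assuming the requests and azure-identity packages and appropriate workspace permissions (IDs are placeholders):

import requests
from azure.identity import DefaultAzureCredential

token = DefaultAzureCredential().get_token("https://api.fabric.microsoft.com/.default").token

workspace_id = "<workspaceId>"        # placeholder
sql_endpoint_id = "<sqlEndpointId>"   # placeholder
url = (f"https://api.fabric.microsoft.com/v1/workspaces/{workspace_id}"
       f"/sqlEndpoints/{sql_endpoint_id}/refreshMetadata")

resp = requests.post(url, headers={"Authorization": f"Bearer {token}"})
resp.raise_for_status()
# The response carries per-table sync details (start/end times, status, last successful
# sync time, and any errors); long-running requests may return 202 with a status location.
print(resp.status_code, resp.text)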

To learn more about the REST API, check out the Refresh SQL analytics endpoint Metadata REST API (Generally Available) blog post, the Fabric REST APIs docs, and the GitHub page for a code sample.

COPY INTO and OPENROWSET from OneLake (Preview)

The Preview of COPY INTO and OPENROWSET from OneLake in Microsoft Fabric Data Warehouse has been announced, providing secure, workspace-governed data ingestion and file querying without dependence on external storage services.

This new capability allows users to ingest and query files stored in Lakehouse Files folders using familiar SQL syntax — with no need for Spark, pipelines, staging storage, SAS tokens, or complex IAM configuration. With this release, Fabric takes another step toward delivering a fully SaaS-native, secure, and unified analytics platform.

Key highlights:

  • Load data from OneLake using COPY INTO directly into Warehouse tables (supports CSV and Parquet).
  • Query files using OPENROWSET for fast, ad hoc exploration — no load required.
  • Leverages Fabric workspace permissions (secured by Entra ID) — no external storage IAM or firewall configuration needed.
  • Supports cross-workspace scenarios and automation via service principals (SPNs).

Whether you’re automating ingestion pipelines, running SQL-only scenarios, or working within Private Link–enabled environments, these capabilities simplify onboarding and accelerate time-to-insight.
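As a hedged sketch of what this can look like from client code, assuming pyodbc with ODBC Driver 18, the Warehouse SQL connection string from the Fabric portal, and an illustrative OneLake path (see the announcement post below for the authoritative syntax):

import pyodbc

# Connection details are placeholders; copy the SQL connection string from the Fabric portal.
conn = pyodbc.connect(
    "Driver={ODBC Driver 18 for SQL Server};"
    "Server=<endpoint>.datawarehouse.fabric.microsoft.com;"
    "Database=MyWarehouse;"
    "Authentication=ActiveDirectoryInteractive;"
)

# Ingest Parquet files from a Lakehouse Files folder into a Warehouse table.
conn.execute("""
    COPY INTO dbo.sales
    FROM 'https://onelake.dfs.fabric.microsoft.com/MyWorkspace/MyLakehouse.Lakehouse/Files/sales/*.parquet'
    WITH (FILE_TYPE = 'PARQUET');
""")
conn.commit()

# Or explore the same files ad hoc with OPENROWSET, no load required.
for row in conn.execute("""
    SELECT TOP 10 *
    FROM OPENROWSET(
        BULK 'https://onelake.dfs.fabric.microsoft.com/MyWorkspace/MyLakehouse.Lakehouse/Files/sales/*.parquet'
    ) AS sales_files;
"""):
    print(row)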

For example syntax, supported scenarios, and what’s coming next, check out the announcement blog post OneLake as a Source for COPY INTO and OPENROWSET (Preview).

Read JSON Lines format with OPENROWSET(BULK) (Preview)

You can now use OPENROWSET(BULK) in Microsoft Fabric Data Warehouse to read JSON Lines (JSONL) files directly. JSONL is a widely used format for logs, streaming, and machine learning data. With this enhancement, the OPENROWSET function reads JSONL files natively, eliminating the need to first import files as plain text and then manually apply T-SQL JSON functions to parse the data.

You can provide the URL of a JSONL file stored in Azure Data Lake or Fabric OneLake and read its content as a set of rows using the OPENROWSET(BULK) function.

This new capability allows you to directly query JSONL data as if you were querying a regular table, using familiar T-SQL syntax. There’s no need to first import or manually parse the file—the OPENROWSET(BULK) function handles the JSON format natively. This enhanced functionality facilitates analytics workflows by enabling more efficient and straightforward ingestion and querying of JSONL data through standard T-SQL syntax.
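A hedged sketch of such a query, reusing the pyodbc connection pattern from the COPY INTO example above (the file URL and the FORMAT value are illustrative; check the linked blog post for the exact preview syntax):

# `conn` is a pyodbc connection to the Warehouse SQL endpoint, as in the earlier sketch.
jsonl_query = """
    SELECT TOP 10 *
    FROM OPENROWSET(
        BULK 'https://onelake.dfs.fabric.microsoft.com/MyWorkspace/MyLakehouse.Lakehouse/Files/logs/events.jsonl',
        FORMAT = 'jsonl'
    ) AS log_lines;
"""
for row in conn.execute(jsonl_query):
    print(row)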

For more details about OPENROWSET(BULK) support for the JSONL format, please refer to the JSON Lines Support in OPENROWSET for Fabric Data Warehouse and Lakehouse SQL Analytics Endpoint (Preview) blog post.

Visual Experience for SQL Audit Logs in Microsoft Fabric Data Warehouse (Preview)

In April 2025, we introduced the Preview of SQL Audit Logs in Microsoft Fabric Data Warehouse—empowering organizations to capture critical audit events for improved transparency, governance, and control.

A major enhancement is now available: a new visual interface for configuring and managing audit logs is accessible within the Fabric Warehouse UI.

This update simplifies how customers manage auditing policies, with no scripts or advanced setup required. Whether you’re a security administrator, data platform engineer, or compliance lead, this intuitive interface makes configuring audit logging faster, clearer, and more aligned with your organization’s needs.

Key Highlights

  • Effortless Setup: Easily enable audit logging with a simple toggle and unified configuration pane.
  • Granular Event Selection: Choose from categorized event types (authentication, data access, admin actions), or fine-tune by individual action groups.
  • Flexible Retention: Configure log retention in OneLake for up to 9 years—validated directly in the UI.

This release reflects direct customer feedback and brings clarity and control to a critical governance capability.

To explore the new experience and learn how to get started, refer to the Experience the New Visual SQL Audit Logs Configuration in Fabric Warehouse blog post.

Audit Log (CRUD Operations) Naming Simplification

Microsoft Fabric will consolidate audit log operation names beginning after July 7, 2025, to facilitate governance processes.

Actions like Create Datamart or Delete Warehouse will now appear as standardized entries—such as CreateArtifact, DeleteArtifact, UpdateArtifact, etc. This change aligns with Fabric’s unified platform model and reduces noise in audit logs.

There are no changes to functionality; logging has simply been improved for greater clarity and consistency.

If you use audit logs for automation or monitoring, review and update any queries or tools using old operation names.

For details, check out the blog post: Standardizing Audit Operations for Warehouse, DataMarts and SQL Analytics Endpoint.

SHOWPLAN_XML set statement (Generally Available)

SHOWPLAN_XML is now Generally Available for the Fabric Data Warehouse and Lakehouse SQL analytics endpoint! The SHOWPLAN_XML set statement and its ‘Display estimated execution plan’ counterpart in SQL Server Management Studio are staple tools for investigating the details of a T-SQL query plan, providing information on planned data movements, resource estimates, operator choices, and more.
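As a small illustration, the sketch below captures the estimated plan from client code, assuming conn is a pyodbc connection to a Warehouse or SQL analytics endpoint and that a dbo.sales table exists; the query itself is illustrative:

cursor = conn.cursor()
cursor.execute("SET SHOWPLAN_XML ON;")

# With SHOWPLAN_XML ON, the next query is compiled but not executed; the estimated
# plan comes back as a single XML document instead of the query's result set.
plan_xml = cursor.execute(
    "SELECT region, SUM(amount) AS total FROM dbo.sales GROUP BY region;"
).fetchval()

cursor.execute("SET SHOWPLAN_XML OFF;")

# Save as .sqlplan to open the graphical plan in SQL Server Management Studio.
with open("estimated_plan.sqlplan", "w", encoding="utf-8") as f:
    f.write(plan_xml)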

View of query output after running SET SHOWPLAN_XML ON.

View of graphical plan in SQL Server Management Studio after opening showplan XML (as .sqlplan) or clicking ‘Display Estimated Execution Plan’ button.

For additional information, refer to the SET SHOWPLAN_XML (Transact-SQL) documentation and the SHOWPLAN_XML in Fabric Data Warehouse (Preview) blog post.

Real-Time Intelligence

Additional analytics on Activator activations

With Activator, users can now view additional analytics and KPIs related to their activations. These analytics can be accessed from the History tab in the rules section of Activator.

Selecting ‘View details’ in your Teams messages and emails takes you to the History tab of your alert in Activator, where you’ll see those analytics listed.

When you select ‘View details’, the History tab is filtered to the specific activation you are viewing, showing information for the selected ID and the relevant time range.

This feature is available for both rules that were created on attributes and rules that were created on streams.

To learn more, refer to the Create a rule in Fabric Activator documentation.

Tailor Your Data Schema Tree in Queryset

Exploring your data sources in Queryset just got easier. The data tree now comes with a refreshed UI that gives you more control over how your connections are displayed – helping you focus faster and write with context.

You can now toggle between two views:

  • Flat list of all connected databases.
  • Grouped view by cluster or data source.

This makes it easier to navigate, especially when working across multiple environments.

Each data source can still be expanded to explore its schema, and with a simple double-click on any entity, the full path is instantly copied into your query editor. You can also switch the active context for your query at any time, directly from the tree.

To learn more about this feature, refer to the Query data in a KQL queryset documentation.

Database tree in Edit Tile and AzMon data sources

Writing queries in Real-Time Dashboards just got significantly easier. The Edit Tile experience now includes a data source tree – bringing full visibility into your connected data right where you work.

With this new pane, you no longer have to rely solely on memory or IntelliSense. As you build your query, you can browse your data source in real time, including tables, columns (with types), functions, materialized views, and more.

This is particularly helpful when working with new schemas or large environments.

  • Explore the full structure of your connected data.
  • Double-click to insert entity paths directly into your query.
  • Confidently discover and use available resources without context-switching.

We’ve also added a simplified experience for connecting to Azure Monitor data sources. Previously, this required configuring connection strings manually; now, simply enter your Azure resource details into the built-in connection string builder to get started.

For more information, refer to the Add tile documentation.

Streamlining Query Sharing with Queryset

A new, streamlined user interface has been introduced for sharing queries in Queryset. This update enhances clarity and usability by providing a preview of the content being copied to the clipboard. Users can conveniently select between copying the raw query, a deep link, results, or any combination thereof.

When you share a deep link, recipients can open it directly in Fabric—automatically connected to the right data source, with your query loaded and ready to run. It’s the fastest, most reliable way to share and collaborate.

This update makes sharing more transparent and helps users adopt best practices that scale across teams.

To learn more, refer to the Share queries documentation.

New Settings for Eventhouse Accelerated Shortcuts: MaxAge and HotWindows

Accelerated OneLake table shortcuts cache and index data as it lands in OneLake, providing performance comparable to ingesting the data into Eventhouse. By using this feature, you can accelerate data landing in OneLake, including existing data and any new updates.

Two additional settings are now available to give you more control over how your shortcuts are accelerated.

HotWindows

This allows accelerating arbitrary time windows, e.g., between dates X and Y, rather than just the last N days of data. Delta data files created within these time windows are accelerated.

MaxAge

Users set the data’s latency tolerance, controlling freshness. The shortcut returns accelerated data if the last index refresh is newer than @now – MaxAge. Otherwise, the shortcut table operates in non-accelerated mode.

Syntax

.alter external table MyExternalTable policy query_acceleration '{"IsEnabled": true, "Hot": "1.00:00:00", "HotWindows":[{"MinValue":"2025-07-06 07:53:55.0192810","MaxValue":"2025-07-06 07:53:55.0192814"}], "MaxAge": "00:05:00"}'

To learn more about the .alter query acceleration policy command, refer to the documentation.

Event Schema Registry in Fabric Real-Time Intelligence (Preview)

This month, the Event Schema Registry was introduced as a centralized resource for discovering, storing, and updating schemas for event-driven data flows. With this release, users can create and manage schemas within SchemaSets in Fabric workspaces.

Capabilities

  • Auto-discover schemas from Azure SQL CDC sources.
  • Use discovered schemas in transformations.
  • Create destination tables in Eventhouse directly from registered schemas.
  • Validate message formats before ingestion to catch errors early in the data pipeline.
  • Protect data ingested from custom endpoints.

Try creating a SchemaSet in the Event Schema Registry from the Real-Time hub. Then import some schemas and start sending events based on them to an Eventstream.

To learn more, visit the Fabric documentation for Event Schema Registry. Check out our new blog post for a deep dive into the schema management concepts and how to build your first type-safe, schema-aware data pipeline for publishing events from a custom endpoint to an Eventstream.

Once you have explored the feature, let us know what you think. We welcome your feedback and aim to make managing data schemas straightforward.

Databases

Optimizing Query Management: New Controls in the Editor

As part of our ongoing commitment to improving the SQL database in Fabric experience based on direct user feedback and workflow needs, we’ve introduced several new features to the query editor designed to streamline common tasks, enhance team collaboration, and offer greater flexibility when working between tools.

Latest Improvements

  • Bulk delete query: This feature enables users to delete multiple saved queries at once, eliminating the need to remove them individually. It was introduced in response to user feedback highlighting the difficulty of managing large lists of saved queries without a multi-select delete option. In the query editor’s ‘Queries’ folders, users can now hold Shift, select multiple queries, and right-click to delete them all in a single action. This streamlines the cleanup process, making it easier to maintain an organized and clutter-free workspace with minimal effort.

  • Open your database in SQL Server Management Studio (SSMS): This feature integrates the Fabric SQL web-based editor with SSMS, allowing for a smooth transition to the desktop environment. With a single click from the query editor, SSMS launches and automatically fills in the connection details for your Fabric SQL database — no manual copy-paste or setup required. This streamlines the workflow for users who prefer or need the advanced capabilities and richer UI of SSMS, making it faster and easier to switch between tools while working with Fabric SQL.

  • Query sharing within your workspace: This feature enables collaborative use of SQL queries within a Fabric workspace. You can now save a query and share it with other users in the same workspace, allowing teammates to view, run, or edit it —based on their permissions — directly from the query editor. When a query is marked as shared, it moves into this section and becomes accessible to workspace admins, members, and contributors. This eliminates the need to copy-paste SQL code or share snippets via email, making collaboration faster, easier, and more secure.

Use Python Notebooks to Read/Write to Fabric SQL Databases (Preview)

You can now read from and write to SQL databases in Microsoft Fabric using Python Notebooks, thanks to the new integration with the T-SQL magic command. This highly requested feature enables users to run powerful T-SQL queries directly within notebooks—combining scripting, visualizations, and explanatory text in one collaborative workspace. It supports rich, interactive charts, automated workflows, scheduled jobs, and secure sharing, making it easier than ever to analyze and operationalize SQL data seamlessly across the Fabric platform.
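As a minimal sketch, a notebook cell using the T-SQL magic might look like the following; the magic usage and the dbo.Orders table are assumptions for illustration, so see the linked blog post and documentation for the exact cell options and how the target database is bound:

%%tsql
-- Runs against the connected Fabric SQL database; table and columns are illustrative.
SELECT TOP 10 OrderId, OrderDate, TotalAmount
FROM dbo.Orders
ORDER BY OrderDate DESC;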

To learn more, refer to the Connect to your SQL database in Fabric using Python Notebook blog post and check out the Run T-SQL code in Fabric Python notebooks for more details.

Data Factory

Easily Manage pipeline Triggers

Triggers in Fabric Data Factory pipelines allow you to automate your pipelines to run whenever events occur in Fabric, such as file arrival, file deletion, Lakehouse folder events, and pipeline events. To make managing your pipelines easier, we’ve added a panel, opened with the Trigger button on the pipeline canvas, that shows the triggers that already exist on your pipeline, making trigger management quick and easy.

To learn more about this feature, refer to the Data pipelines event triggers in Data Factory documentation.

Fabric Simplifies Terminology: Data Pipelines Now Referred to as Pipelines

As we continue to evolve and expand the vision and capabilities of Fabric, we see many new and exciting use cases emerge from our incredible customer base. In Data Factory pipelines in Fabric, we’ve found many new ways to leverage pipelines that aren’t always directly related to data integration, including business workflows and automation scenarios.

To help provide a clearer understanding of the role of pipelines in Fabric workflows, we are dropping the ‘data’ term from the ‘data pipelines’ display name in Fabric workspaces. This change will take effect in September. Only the display name is being updated from ‘Data pipeline’ to ‘Pipeline’ in workspace lists and filters, with no impact on APIs or CI/CD.

For more information, please refer to the Concept: Data pipeline Runs documentation.

Reset Incremental Copy from Copy job

Incremental copy is one of the most loved features in Copy job. It dramatically boosts efficiency by transferring only new or updated data, saving you time, resources, and manual effort. The process is simple: the first run performs a full data load, and every run after that moves just the changes.

Now you have greater flexibility in managing incremental copy, including the ability to reset it back to a full copy on the next run. This is incredibly useful when there’s a data discrepancy between your source and destination—you can simply let Copy Job perform a full copy in the next run to resolve the issue, then continue with incremental updates afterward.

Even better, you can reset incremental copy per table, giving you fine-grained control. For example, you can re-copy smaller tables without impacting larger ones. This means smarter troubleshooting, less disruption, and more efficient data movement.

What is Copy job in Data Factory for Microsoft Fabric? Find out more in our documentation.

Auto Table Creation on Destination from Copy job

We’re on a mission to eliminate every friction point and make your data movement experience as smooth and intuitive as possible. Auto Table Creation is one of the steps in that direction. If the specified table doesn’t exist at your destination, Copy Job will automatically create the table and its schema for you. No manual setup, no interruptions, just effortless data movement from start to finish.

Copy Job can now automatically create tables on the following destination stores:

  • SQL Server
  • Azure SQL database
  • Fabric Lakehouse table
  • Snowflake
  • Azure SQL Managed Instance
  • Fabric SQL database

What is Copy job in Data Factory for Microsoft Fabric? Find out more in our documentation.

JSON format Support in Copy job

Copy job is the tool for you to move data from any source to any destination in any format. You can always choose binary copy between storage locations for any file format with highly optimized throughput. You can also copy files like CSV or Parquet to and from tables—and now, the same capability is available for JSON file formats.

What is Copy job in Data Factory for Microsoft Fabric? Find out more in our documentation.

Create CI/CD-Enabled Dataflow Gen2 from Existing Dataflow Gen2 (Generally Available)

Saving a new Dataflow Gen2 with CI/CD support from a Dataflow Gen2 is now generally available.

Customers often want to recreate an existing dataflow as a new Dataflow Gen2 (CI/CD) item to get all the benefits of the new Git and CI/CD integration capabilities. Previously, to accomplish this, they had to create the new Dataflow Gen2 (CI/CD) item from scratch and copy-paste their existing queries, or use the Export/Import Power Query template capabilities. This was not only inconvenient due to the extra steps, but it also did not carry over additional dataflow settings.

Dataflows in Microsoft Fabric now include a ‘Save as’ feature that, in a single click, lets you save an existing Dataflow Gen2 as a new Dataflow Gen2 (CI/CD) item.

Learn more about Save As Dataflow Gen2 (CI/CD): Migrate to Dataflow Gen2 (CI/CD) using Save As

Screenshot of the context menu under the ellipsis, showing the Save as Dataflow Gen2 (CI/CD) option.

Integrated Run History and Validation Feedback in Dataflow Gen2 Editor

For Dataflow Gen2 with CI/CD support, two of the most common tasks when working with a Dataflow are checking the results of its validation and of its run operations. A few months ago, we introduced a way to validate your Dataflow without ever leaving the Dataflow window, but for run operations you still had to go to the monitoring hub or the workspace list to see the ‘Recent runs’. This changes today with new and improved embedded experiences inside the Dataflow editor.

In the home tab of the ribbon, you will now find two new entries in the Dataflow group:

  • Check validation
  • Recent runs

Check validation lets you see the status of your last or ongoing save operations and the validations that run against them.

Selecting the Check validation button in the ribbon, rather than Save & Run, allows users to review the status of this operation.

The status bar displays the progress of the validation process as it occurs.

Once you trigger a run for your Dataflow, you will also be able to see the progress of the run in the status bar. Once finished, you can see a notification that tells you the timestamp when the last run happened.

Finally, a new addition to this experience is an explicit button to close your Dataflow and discard any changes you’ve been working on, as well as other operations that simplify authoring, such as ‘Save, run & close’ and ‘Save & close’.

These changes are currently rolling out to all production regions.

Improvements to SharePoint as a destination in Dataflow Gen2

As part of our ongoing improvements, we’ve removed system folders from the navigator when selecting a folder for your SharePoint data destination. This enhancement streamlines your folder selection experience, ensuring only relevant user folders appear, reducing clutter and making it faster and more intuitive to pinpoint exactly where your data should go.

As we continue to roll out these enhancements and streamline your data experiences, we encourage you to share your thoughts and let us know how we can further improve. Your feedback is always welcome!

For more information, refer to the Dataflow Gen2 data destinations and managed settings documentation.

New Category Filters Added to Template Gallery

Discover templates faster and transform your workflow with our latest upgrade to the Template Gallery: category-based filtering! With this new feature, you can now browse templates by category—making it easier than ever to find exactly what you need when you need it. Whether you’re starting a new project, exploring solutions, or just curious about what’s available, categories take the guesswork out of template discovery.

Simply select Filter, select a category, and watch your search results instantly become more relevant. Dive in and experience a more organized, efficient, and intuitive way to kick off your next data project!

To learn more about templates and pipeline template gallery, read Templates – Microsoft Fabric | Microsoft Learn.

Streamline Data Source Setup with Copilot’s New ‘Get Data’ Capability

With the new Get Data Copilot capability in Dataflow Gen2, users can quickly connect to existing resources or set up new connections with ease.

When you ask Copilot to connect to a data resource, it first checks whether the resource already exists. If it does, Dataflow Gen2 connects to it and guides you through navigation and data preview. If not, it launches a filtered Get Data wizard to help you locate the correct resource efficiently.

For example, if you know the exact name of an existing SQL database connection, Dataflow Gen2 lets you quickly access its server and database navigation with assistance from Copilot.

If you are unsure whether a SQL connection exists or need to create a new one, Copilot will guide you through the process efficiently.

If a connection does not exist, Copilot will share information with Dataflow Gen2 to open a pre-filled setup page to help you create one quickly.

Learn more about Copilot for Data Factory in Get started with Copilot in Fabric in the Data Factory workload.

Additional authors – Madhu Bhowal, Ashit Gosalia, Aniket Adnaik, Kevin Cheung, Sarah Battersby, Michael Park Esri is recognized as the global market leader in geographic information system (GIS) technology, location intelligence, and mapping, primarily through its flagship software, ArcGIS. Esri empowers businesses, governments, and communities to tackle the world’s most pressing challenges through spatial analysis. … Continue reading “ArcGIS GeoAnalytics for Microsoft Fabric Spark (Generally Available)”