Microsoft Fabric January 2025 update

We’ve got a lot of exciting updates this month. To name a few: NotebookUtils session management utilities, granular permissions for COPY INTO operations in Data Warehouse, and Application Lifecycle Management (ALM) and Fabric REST APIs for Real-Time Intelligence items. Keep reading to hear about everything we have in store for you this month.

Microsoft Fabric Community Conference 2025

After two consecutive sold-out events, FabCon returns bigger than ever to Las Vegas from March 31 to April 2, 2025. With 215+ sessions, 4 keynotes, and 20 workshops, the conference will take you from Power BI to Fabric to AI and help you make the most of Copilot and SQL in Fabric.

Plus, one-on-one time with Microsoft experts and community legends, a FREE pre-day for partners, and the famous Power Hour.

Prices go up on February 11, so register ASAP and use code MSCUST to get $150 off.

First-ever Power BI DataViz World Championships

Are you ready? The first-ever Power BI Data Visualization World Championships are coming to FabCon Vegas! Participate to learn and win a chance to compete live on-stage! Stay tuned to the World Championships blog for more details.

Free training and discount certification vouchers for DP-700!

The Fabric Data Engineer Certification is now generally available! The best way to prepare for Exam DP-700 is to join Microsoft Fabric experts for live and on-demand sessions. Sessions start this week. Register now and save your spot!

Ready to take the exam now? Head over to the community to request your discount voucher for Exam DP-700.

 


Power BI

Copilot and AI

Unlock suggested questions from standard prompts in Copilot

When launching Copilot or using the prompt guide, you can select from standard prompts. A new preview prompt, ‘Answer a question about the data’, will be available at the end of January.

Selecting this prompt will unlock 3 suggested questions to help you explore your data.

You can continue to select the prompt by scrolling up or from the prompt guide (book icon) to generate 3 more suggested questions if none of the first set are interesting to you.

Authors can also personalize these suggested questions using Q&A setup in Power BI Desktop for a particular semantic model. With this feature, suggested questions will now show up in both Copilot and the Q&A visual.


Reporting

Explore this data: new entry point from a visual

Exploring your data is easier than ever, now that we’ve added an Explore this data option to the visual options menu. This lightweight and focused experience allows users to launch Explore and easily tweak their visual (change chart type, add new data, filter, and more!) and see the underlying data, making it easy to get the answers they need without all the distractions and extra complexity of reports. Learn more about how to use Explore here!

Simply select ‘Explore this data’ in the ‘More options’ menu and start exploring the visual.


Storytelling in PowerPoint – New reset behavior

When integrating a report into your presentation, it is important to ensure that it remains stable and unaltered. The add-in refreshes data from Power BI without modifying the report definition.

However, since Power BI reports are dynamic, sometimes you may want PowerPoint to get the latest changes done in the report in Power BI service. Previously, you had to remove and re-embed the report to achieve this. Now, with the improved ‘Reset’ command, you can choose either to reset the add-in to its original state as initially added to the presentation or to reset and update it with the current view from Power BI.


Storytelling in PowerPoint – Supporting page up & down

You can now use the Page Down/Up keys on your keyboard to quickly navigate between slides when using the Power BI add-in. This is especially useful when the add-in captures the entire slide, and you want to advance the slide rather than trigger a Power BI event.

Save to OneDrive and SharePoint: updated file picker (Preview)

Updates have been made to the Power BI file picker to simplify navigation and file-saving processes. We have considered your feedback regarding the current file picker and have made significant improvements in this update to ensure the Power BI file picker aligns more closely with the Office experiences you are familiar with.

New updates include:

  • Improved experience when opening and saving files in OneDrive and SharePoint.
  • Easy access to reports in OneDrive and SharePoint.
  • Navigating between folders in various workspaces.
  • Adding new folders to existing workspaces.
  • Pinning folders and files in the file picker.

We acknowledge that these updates may disrupt your workflow; therefore, they are not enabled by default. To access the updates, open the desktop app, go to Options and settings (under the File menu) > Options > Preview features, select the checkbox to enable ‘Show the new file saving and open experience’, then select ‘OK’ to accept the setting.

By early next year, these settings will be on by default and will no longer require an opt-in.


We hope you enjoy these new updates, and we’d love to hear any feedback you may have via our feedback forum. When submitting feedback, be sure to include ‘OneDrive and SharePoint’ and/or ‘Updated File Picker’ in the title.

For more information, please check out our documentation on the Power BI Desktop and OneDrive + SharePoint integrations.

Enhancement to Text slicer (Preview)

Following the November 2024 Text slicer release, this update enhances functionality and user experience by allowing multiple text selections.

The Text slicer is currently in preview. To enable the Text slicer, go to Options and settings > Options > Preview features > Text slicer visual to make sure it is selected, then restart Power BI.

This month’s enhancement adds a new Slicer settings control with an on/off toggle allowing the slicer to Accept multiple values. All other existing formatting options for the Input text, Apply button, and Input text box from our November update remain the same in the Format pane.

After creating a Text slicer visual and adding a text field from the data model, users can filter the dataset based on user input. Simply select the slicer input box, type your text, and apply the filter by selecting the apply icon, pressing Enter, or selecting outside the visual. The slicer immediately filters and displays the results, and you can repeat these steps to add more text selections.

When the Accept multiple values option is enabled, additional text can be added to the slicer by repeating these steps, thereby allowing multiple selections for filtering the dataset. Keep in mind that switching the toggle on or off will clear any previous text selections.


Adding filtering with multiple values brings more control to data slicing, and we encourage users to explore this new feature and provide feedback. Future enhancements are still planned as we continue to improve Power BI’s visualization capabilities with the Text slicer.

Share your comments and suggestions in the comments section below and stay connected with us through our dedicated Core Visuals LinkedIn blog, where we announce new features and updates and engage with our community.

Learn about our new Core Visuals Vision Board, where customers can now explore, vote, and comment on the Epic Ideas that will shape the future of Power BI Core Visuals. The Power BI community can instantly see what features are already completed, what is currently in development, and the new features and enhancements that are upcoming.

Enhancements to Treemap visual

This month’s update includes significant enhancements to the Treemap visual, with three new tiling methods that improve layout options, plus new spacing controls to enhance the visual’s appearance and usability. These features offer richer control and customization, resulting in more precise and aesthetically pleasing treemaps in Power BI.

Treemap visuals are powerful tools for data visualization that allow users to represent hierarchical data through nested rectangles. Each branch of the hierarchy is represented by a rectangle, which is then tiled with smaller rectangles representing sub-branches. This structure allows for quick comparison of different category proportions.

To generate two-level Treemap visuals, ensure that both the Category and Details fields are enabled. This allows you to visualize the hierarchical relationships between various categories and their subcategories in a clear and organized manner.

Three new Tiling methods:

Squarified: This method uses a squarified treemap algorithm to prevent elongated rectangles, creating a balanced layout. It arranges rectangles so their aspect ratios are close to squares, making size comparisons potentially easier.


Binary: This method repeatedly divides the chart area into two sections while incrementally adding new rectangles/nodes, creating a balanced and visually appealing treemap. Each hierarchy level further splits the space, resulting in an organized treemap that adapts to the dataset’s structure. Depending on the dataset, it may produce different visual characteristics than the squarified algorithm.


Alternating (Columns, Rows): The Alternating method clearly distinguishes categories by first splitting them by columns and then within each column by rows. This method effectively organizes datasets with numerous hierarchical levels.


This month’s update also introduces new spacing options to enhance the readability and appearance of the Treemap visual:

  • Space between all nodes: This setting introduces gaps between adjacent nodes at all hierarchy levels, reducing clutter and improving clarity.
  • Space between groups: By adding extra space around each node group, this option visually separates hierarchical groups, making categories within the hierarchy easier to distinguish.


This update to our Treemap visual has brought improvements that reflect the commitment of the Core Visuals team to delivering the tools and features most requested by our users.

Your feedback helps us refine and expand the capabilities of core visuals. Test these Treemap enhancements and share your thoughts in the comments section below, or visit our Core Visuals LinkedIn blog to leave comments and find up-to-date news, developments, and announcements.

Learn about our new Core Visuals Vision Board, where users can vote on upcoming features and see what is in the pipeline. Together, we can continue to innovate and improve the tools that help our community visualize data with Power BI.

Modeling

Semantic model version history (Preview)

Announcing the public preview of semantic model version history, coming this month. This feature empowers self-service users by providing confidence to recover from critical mistakes when editing semantic models on the web. In this preview, versions are automatically captured in an Office-like history pane for your web-edited Premium semantic models. You can easily select and restore any of these previous versions of your semantic model.


Additionally, you have the option to manually save versions to the version history for your semantic model.


Stay tuned as we continue to roll out updates to this experience, including future support for semantic models in Pro workspaces. We highly value your feedback, so please share your thoughts using the feedback forum.

For more details on this feature, including limitations, please refer to the documentation.

Edit your data model in the Power BI Service – updates (Preview)

The following improvements to the data model editing in the Service preview will be introduced this month:

On by default preview for Premium workspaces

With the release of semantic model version history, we will start enabling the workspace-level preview feature for editing data models in the service. The ‘Users can edit data models’ workspace setting will be turned on by default for Premium workspaces. If you prefer, you can still disable the preview for your workspace, but we recommend keeping it enabled! Power BI administrators will still have the ability to enable or disable data model editing in the service for the entire organization or specific security groups through the admin portal.

Viewing mode

Now, when you open your semantic models on the web, they will open in Viewing mode by default. This allows you to safely view the model without making accidental edits. When you’re ready to make changes, simply toggle to Editing mode to make your modifications directly on the web.


For more details on the subject, reference the documentation. Please continue to submit your feedback directly in the comments of this blog post or in the feedback forum.

Live edit of semantic models in Direct Lake mode with Power BI Desktop – updates (Preview)

On by default preview 

Live editing semantic models in Direct Lake mode with Power BI Desktop is now enabled by default, so you can use this feature immediately without turning on the preview feature. If you prefer, you can still disable it by turning off the ‘Live edit of Power BI semantic models in Direct Lake mode’ option in Options and settings > Options > Preview features.

More details on the feature, including requirements, considerations, and limitations can be found in the documentation. We highly value your feedback on this feature and encourage you to share it through our feedback form or the Power BI Community.

TMDL scripting experience (Preview)

TMDL view is a new view in Power BI Desktop that lets you script, modify, and apply changes to the semantic model being edited in Desktop using a modern code editor and the Tabular Model Definition Language (TMDL). It improves development efficiency and provides complete visibility over the semantic model metadata. TMDL view offers an alternative to semantic modeling through code instead of a graphical user interface like Model view.

  • Enhance development efficiency with a rich code editor that includes search-and-replace, keyboard shortcuts, multi-line edits, and more.
  • Increase reusability by easily scripting, sharing, and reusing TMDL scripts among semantic model developers. For example, use a centralized SharePoint site to share reusable semantic model objects such as calendar tables or time intelligence calculation groups.
  • Get more control and transparency: TMDL view shows all semantic model objects and properties and allows changes to items not available in the Desktop GUI, such as IsAvailableInMDX or DetailRowsDefinition.

Script any semantic model object, such as a table, measure, column, or perspective, by selecting the objects in the Data pane and dragging them into the code editor:


TMDL view scripts the selected objects as a TMDL script, and just like TMDL in VS Code, you get an enriched code experience with features such as semantic highlighting, error diagnostics, and autocomplete.


You may change any valid property or object within the semantic model; for instance, you can modify the displayFolder property and the detail rows definition of multiple measures in a single script.


When ready, select the Apply button to run the TMDL script against the semantic model and apply your changes.


When successful, an instant notification is displayed, and your modeling change is applied to the semantic model.


In the event of a failure, your modeling changes will not be applied to the semantic model. You can view more information about the error by selecting Show details, which expands the Output pane with the error details.


Get started today by turning on this public preview feature: go to File > Options and settings > Options > Preview features and check the box next to TMDL view.

To learn more about TMDL view, refer to our documentation.

Data connectivity

New Snowflake connector implementation (Preview)

We continue to enhance the integration with Snowflake. This month, we are introducing a new implementation of the Snowflake connector, currently available in preview.

To access this feature, in Power BI Desktop, navigate to Options and settings (under the File menu) > Options > Preview features and select the checkbox to enable the ‘Use new Snowflake connector implementation’ option. Once the option is on, all newly created connections will automatically use the new connector implementation.

Your existing connections remain unchanged. You can also test the new feature by editing the queries. Learn more about the Snowflake connector from this documentation article.

If you’re using an on-premises data gateway to refresh your semantic model, make sure you have the latest version to use this feature.

We highly value your feedback on this feature and encourage you to share it with us.

Visualizations

Drill Down Scatter PRO by ZoomCharts: The All-in-One Scatter Visual

The latest ZoomCharts visual, Drill Down Scatter PRO, is now available on AppSource! Just like all ZoomCharts visuals, Scatter PRO combines powerful data visualization features with an intuitive, user-friendly experience. It is designed for fully interactive Power BI reports that deliver quick insights and foster a decision-centric culture.

Scatter PRO makes data exploration seamless and enjoyable with user interactions like panning, zooming, and rectangular or lasso selection. You can also create a multi-level hierarchy, allowing users to drill down by simply selecting a data point marker. You can learn more in our blog post, but here are the main features of Scatter PRO:

  • Drill Down: Create a multi-level category hierarchy and drill down with a single selection.
  • Customization: Configure marker colors, shapes, outlines, labels, threshold lines/areas, X & Y axes, and more.
  • Data-Driven Formatting: Apply marker colors, shapes, and even images directly from data.
  • Area Shading: Highlight areas that need attention with up to 8 shapes at custom coordinates.
  • Dynamic Regression Line: Show a linear or polynomial regression line. It will automatically recalculate upon any changes in the chart.

Get on AppSource


Lollipop Chart by Powerviz

The Powerviz Lollipop chart is a variation of the bar chart that uses lines and dots to represent data points. It is perfect for highlighting specific trends and helping stakeholders make informed decisions.

Key Features:

  • Chart Options: Switch easily between vertical and horizontal charts.
  • Marker Style: Choose from shapes, charts, icons, and images, or upload a custom image.
  • Small Multiples: Split your visual into multiple smaller visuals.
  • Error Bars: Add error bars to show data variability, improving analysis accuracy.
  • Race Chart: Enhance the chart by adding animations to show data changes over time.
  • Cut/Clip Axis: Trim or adjust the axis to accommodate outliers.
  • Dynamic Deviation: Analyze the deviation between two bars at a glance.
  • Preview Slider: Easily explore various sections of a chart in large datasets using a slider.
  • Conditional Formatting: Easily find outliers by applying rules to measures or categories.

Other features include Templates, Import/Export Themes, Data Colors, Ranking, and more.

Business Use Cases:

Sales Analysis, Financial Reporting, Market Research.


Other

Now in Power BI Desktop – OneLake catalog

The OneLake catalog is now part of the Power BI Desktop experience, providing a consistent and seamless way to discover and explore data. This update ensures alignment with the broader Fabric ecosystem, offering users a unified and familiar experience across tools.


Platform

Folder support in Git

Timeline update: folder support is planned to roll out to all customers by mid-April. Thank you for your patience!

This update ensures that the folder structure in your Fabric workspace is seamlessly mirrored in your connected Git branch, providing an organized and consistent experience across both platforms.


New features:

  • Folder Structure Mirroring: The entire folder hierarchy in Fabric is reflected in Git and vice versa, enabling a more intuitive and organized collaboration process.
  • Nested Folders Are Synced: Fabric items located within nested folders will now be included in the sync, and their folder structure will be preserved.
  • Item Updates as Commits: Changes to an item’s folder (e.g., moving an item to another folder or reorganizing folders) will now appear as updates or commits in Fabric.

Subfolder support is enabled by default as soon as the feature is live. This means any folder differences between Fabric and Git will automatically show up as updates or commits.

Handling Folder Changes Safely

If changes to the connected branch cannot be made directly due to branch policy or permissions, we recommend using the ‘Checkout Branch’ option.

Guidelines for managing this:

  1. Checkout a New Branch: Use the checkout branch feature to create a branch with the updated state of your Fabric workspace.
  2. Commit Folder Changes: Any workspace folder changes can then be committed to this new branch.
  3. Merge Changes: Use your regular pull request (PR) and merge processes to integrate these updates back into the original branch.

OneLake

OneLake Catalog – Semantic model table & column description

We are expanding the details view of semantic models to also include table and column descriptions set in the data model editor in the service or in Power BI Desktop. The goal is to provide consumers with multiple trust signals regarding an artifact, enabling them to make swift and well-informed decisions.

This improvement provides additional name and type details for tables and columns in semantic models, helping data consumers identify relevant tables more efficiently and encouraging data producers to document organizational knowledge. We are planning to expand this ability to other data items in the future.

Filtering workspaces in OneLake Catalog

You now have a dedicated filter for workspace names, allowing you to quickly locate the required workspace. This makes life easier for users with access to multiple workspaces, where finding the relevant workspace can be difficult if it isn’t featured at the top of the list.

Data Engineering

Python notebook (Preview)

Announcing the preview of the highly anticipated Python notebook! This new feature is designed to enhance the experience of BI developers and data scientists who work with smaller datasets and use Python as their primary language.

Key features:

  • Native Python support: Enjoy the full power of Python with native features and libraries right out of the box, such as ipywidgets and magic commands.
  • Version flexibility: Easily switch between different Python versions (initially supporting Python 3.11 and 3.10).
  • Optimized resource utilization: Benefit from a smaller 2-vCore/16 GB memory compute; a real-time resource utilization monitor is also available.
  • Lakehouse & resources natively available: Leverage Fabric Lakehouse capabilities seamlessly, with a built-in resource folder to store your modules, libraries, and files.
  • Mix programming with T-SQL: Interact with data warehouses and SQL endpoints from a Python notebook using the built-in notebookutils connector.
  • Superior Python IntelliSense: Pylance is natively integrated to provide a smoother coding experience.
  • Popular libraries pre-installed: Use duckdb, polars, and other popular third-party libraries conveniently. Fabric utilities like Semantic Link and NotebookUtils are also natively supported.
  • Seamless integration with the Fabric ecosystem: All the advantages of Fabric notebooks, such as sharing, CI/CD, scheduled runs, data pipeline integration, and OrgAPP integration, are available in the Python experience.

Getting Started:

  • Access the Notebook: You can access the Python notebook from the Notebook language dropdown menu.

  • Comprehensive Guide: A detailed guide is available to help you get started. Please refer to the public documentation for more details.

Your feedback is crucial in shaping the future of our product. We look forward to your active participation and valuable insights. Thank you for being a part of this exciting journey with us!

Notebook live versioning

Announcing the launch of the Fabric notebook version history feature. This new feature is designed to significantly improve your experience in developing and managing notebooks by providing robust built-in version control capabilities.

Highlights: 

  • Automatic checkpoints: These checkpoints are created automatically every 5 minutes, ensuring that your work is consistently saved and versioned.
  • Manual checkpoints: You can also manually create checkpoints to record your development milestones, providing flexibility in how you manage your notebook versions.
  • Track history of changes: Users can now view a list of previous notebook versions and see what changes were made, by whom, and when.
  • Compare different versions: Easily compare different versions of a notebook through a diff view to understand the evolution of your work.
  • Restore previous versions: If you make a mistake or want to explore a different approach, you can restore previous versions of your notebook or save a new copy of it.

NotebookUtils session management utilities

Introducing new session management utilities in NotebookUtils: a set of APIs that help you manage your session and interpreter status.

  • notebookutils.session.stop(): Stops the interactive session via code; available for Scala and PySpark.
  • notebookutils.session.restartPython(): Restarts the Python interpreter in a PySpark notebook.

For more details, please refer to the documentation.

Native Execution Engine on Runtime 1.3: simplified enablement and transition from Runtime 1.2

Introducing a new update that simplifies enabling the Native Execution Engine. Now, activating it is as easy as toggling a switch! You’ll find the new toggle button in the Acceleration tab within your environment settings.

If you were previously using the Native Execution Engine, please navigate to the Acceleration tab and re-enable it using the new toggle. This updated UI control now takes precedence over any previous configurations in Spark settings, meaning prior setups will remain inactive until re-enabled with the toggle.

Additionally, the Native Execution Engine now fully supports our latest runtime version, Runtime 1.3 (Apache Spark 3.5, Delta Lake 3.2). As a result, support for Native Execution Engine on Runtime 1.2 is ending. We recommend upgrading to Runtime 1.3 to maintain support, as native acceleration will soon be unavailable on Runtime 1.2.

Legacy timestamp support in Native Execution Engine on Runtime 1.3

The latest Native Execution Engine on Fabric Runtime 1.3 introduces legacy timestamp handling, ensuring compatibility across Spark versions. This feature addresses timestamp issues caused by Spark 3.0’s shift to the Java 8 date/time API (Proleptic Gregorian calendar) from the previous hybrid Julian-Gregorian calendar.

With the configuration spark.gluten.legacy.timestamp.rebase.enabled, the Native Execution Engine auto-adjusts for calendar differences in Parquet files and Delta Tables, handling dates seamlessly across Spark versions. Dates post-1970 are unaffected, ensuring consistency without extra steps.

To activate this feature, add the following to your Spark session:

SET spark.gluten.legacy.timestamp.rebase.enabled = true;

Notebook and Spark Job definition execution with service principal

Service principal support for the Fabric API was announced back in September. Today, we unblock another key scenario: running Notebook and Spark Job Definition executions under a service principal (SP) in the Data Engineering experience.

Using the Fabric Job Scheduler API, users can trigger the execution of either a Notebook or Spark Job Definition (SJD) and monitor its execution status. By utilizing the same API with a service principal’s access token, the Spark Job associated with the notebook/SJD will run within the security context of that service principal.

This update enhances SP support for the Data Engineering experience, expanding its capabilities beyond the current CRUD operations to include comprehensive job execution coverage. To make sure the SP has the privilege to run the job, add it as an Admin, Contributor, or Member of the workspace that hosts the Notebook/SJD.

If the notebook/SJD contains code related to Data Science scenarios, such as models or experiments, the SP-triggered execution could fail; we are working to unblock these scenarios.

Lineage Enhancement to Spark Notebook

We’ve improved lineage for Spark notebooks to make data exploration more effective. You can now view all Lakehouses connected to your notebook, including pinned and additional Lakehouses.

This update helps you:

  • Perform Impact Analysis: Easily assess how changes affect data workflows by identifying which Lakehouses are being used.
  • Document Data Pathways: Streamline collaboration and audits with clear visibility into data relationships.

Experience smarter data management today with this lineage enhancement to your Spark Notebooks!

Data Warehouse

COPY INTO column count check

The COPY statement offers flexible, high-throughput data ingestion from an external Azure storage account into Fabric Data Warehouse tables. When there is a column count mismatch between rows in source files and the target table, COPY INTO has the following behavior:

  • If a row within the source files has fewer columns than the target table, COPY INTO inserts NULL for the missing values. If there’s no corresponding value for a non-nullable column in the source data, COPY INTO fails.
  • If a row within the source files has more columns than the target table, the excess columns from the source files are ignored.

We’re introducing a new option for COPY INTO that allows you to control the behavior of your data ingestion jobs by checking if the count of columns in the source data matches the count of columns on your target table.

The following syntax should be used for the column count check option:

COPY INTO FactSale FROM '<external_location>'
WITH (
    FILE_TYPE = 'CSV'
    [ , MATCH_COLUMN_COUNT = { 'ON' | 'OFF' } ]
)

MATCH_COLUMN_COUNT checks the column count on each row of each source file for a match against the target table specified in the COPY INTO statement. This option is available only for CSV file type sources now, with support for Parquet coming soon. The default behavior of COPY INTO remains unchanged and is equivalent to using MATCH_COLUMN_COUNT = ‘OFF’.
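
For example, a minimal sketch of a load that fails fast when any source row’s column count differs from the target table (the storage path is illustrative):

COPY INTO dbo.FactSale
FROM 'https://<storage_account>.blob.core.windows.net/sales/2025/*.csv'
WITH (
    FILE_TYPE = 'CSV',
    MATCH_COLUMN_COUNT = 'ON'
);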

To learn more about COPY INTO and this new option, refer to the documentation.

Enhancing COPY INTO operations with Granular Permissions in Data Warehouse

One of the challenges our customers shared is that executing the COPY INTO command requires users to have at least the Contributor role at the workspace level, granting broad permissions that may exceed what is necessary for specific tasks.

We are excited to announce that a user with minimum ‘read’ permission on the control plane can now execute write operations at the Data Warehouse level.

The benefits of this change include reducing the need for broad workspace roles, and it also works seamlessly even when the storage account is protected behind a firewall.

Learn more about COPY INTO in our documentation, and to learn more about this new option, check out the blog post COPY INTO support for secure storage with granular permissions.

Introducing default schema changes in Data Warehouse

We are happy to announce the ability to change the default schema in Fabric Data Warehouse. With this highly requested improvement, we aim to make database management more straightforward and security easier to enforce. The default schema is set using the ALTER USER statement, ensuring that every user has a predefined schema context when they connect to the database.

ALTER USER [username] WITH DEFAULT_SCHEMA = [schema_name];

By allowing administrators to assign default schemas to users, we ensure that users operate within their designated schemas, reducing the risk of unauthorized access and simplifying permissions management.
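
For example, to make an illustrative sales schema the default for a user named Alice, so that unqualified object names in her queries resolve against sales first:

ALTER USER [Alice] WITH DEFAULT_SCHEMA = [sales];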

For more information, check our documentation.

Enhanced performance metrics in Query Insights

New features have been introduced to provide deeper insights into query performance. With the introduction of Data Scanned Analysis, you can now determine if large data scans are contributing to slower query execution. This feature allows you to compare similar queries, pinpoint fluctuations caused by changes in data scanned, and even identify when cache was utilized.

Additionally, we’ve introduced allocated CPU time as a key performance metric. This enables you to understand the resources consumed by your queries and workloads. High CPU time often correlates with higher costs, making it easier to identify and address resource-intensive queries. These enhancements empower you to optimize performance and manage costs effectively.

These columns are available in queryinsights.exec_requests_history:

  • allocated_cpu_time_ms (bigint): The total CPU time allocated for a query’s execution.
  • data_scanned_remote_storage_mb (bigint): How much data was scanned/read from remote storage (OneLake).
  • data_scanned_memory_mb (bigint): How much data was scanned from local memory.
  • data_scanned_disk_mb (bigint): How much data was scanned/read from local disk. Data scanned from disk and memory together indicates how much data was read from cache.
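
As a quick sketch, the following query surfaces the ten requests that consumed the most CPU, using only the columns listed above:

SELECT TOP 10
    allocated_cpu_time_ms,
    data_scanned_remote_storage_mb,
    data_scanned_memory_mb,
    data_scanned_disk_mb
FROM queryinsights.exec_requests_history
ORDER BY allocated_cpu_time_ms DESC;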

For more information, check out our documentation.

Previewing estimated Query Plan available via SHOWPLAN_XML

The preview of SHOWPLAN_XML in Microsoft Fabric Data Warehouse is now available. This capability allows users to generate and view the estimated query execution plan in XML format, a valuable tool for analyzing and optimizing SQL queries. Whether you’re troubleshooting performance bottlenecks or refining query strategies during development, SHOWPLAN_XML offers a granular, detailed view of how the database engine plans to execute your queries. By providing insights into operations such as joins and data movements, it helps pinpoint inefficiencies and identify opportunities to enhance performance.

How can you use SHOWPLAN_XML?

  1. Enable SHOWPLAN_XML – Execute the following SQL command: SET SHOWPLAN_XML ON; This instructs Fabric DW to return execution plans in XML format for all subsequent queries.
  2. Run queries – After enabling SHOWPLAN_XML, run the queries that you wish to analyze. The execution plan for each query will be returned in XML format.
  3. Capture the output – Save the result set to a file or copy it to an XML viewer. Ensure that the entire XML content is preserved for accurate analysis.
  4. View the plan from the Fabric UI – Copy the results and save them as a .sqlplan file, then open that file in SSMS to view the graphical plan.
  5. View the plan from SSMS – Use the SET SHOWPLAN_XML syntax as explained above, or use the Display Estimated Execution Plan button to see the graph.
  6. Turn off SHOWPLAN_XML – Run SET SHOWPLAN_XML OFF; to receive query results instead of the execution plan.
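
Put together, a typical session might look like this (the tables are illustrative); the SELECT returns its estimated plan as XML instead of rows:

SET SHOWPLAN_XML ON;

SELECT c.CustomerKey, SUM(s.SalesAmount) AS TotalSales
FROM dbo.FactSale AS s
JOIN dbo.DimCustomer AS c
    ON s.CustomerKey = c.CustomerKey
GROUP BY c.CustomerKey;

SET SHOWPLAN_XML OFF;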

For more details, check out our documentation.

Query Hints in Fabric Data Warehouse

Along with SHOWPLAN_XML, we are announcing support for some query hints. Query hints in Fabric SQL are optional keywords that can be added to SQL statements to provide additional information or instructions to the query optimizer. These hints can improve the performance, scalability, or consistency of queries by overriding the default behavior of the query optimizer.

To use a query hint, the OPTION clause is added at the end of the query, followed by the name of the query hint and its optional parameters in parentheses. For instance, if you want to instruct the query optimizer to use a hash-based algorithm for the GROUP BY operation, you can use the HASH GROUP query hint.

SELECT band_id, SUM(ticket_cost)
FROM gigs
GROUP BY band_id
OPTION (HASH GROUP);

Fabric SQL supports a variety of query hints, including HASH GROUP, ORDER GROUP, MERGE UNION, HASH UNION, CONCAT UNION, FORCE ORDER, LOOP JOIN, HASH JOIN and REPLICATE. Each of these hints serves a specific purpose, such as improving the performance of GROUP BY operations, UNION operations, or join operations. However, query hints should be used with caution and tested thoroughly, as they can have negative effects on the performance, scalability, or consistency of queries if used incorrectly or unnecessarily. It is essential to monitor and evaluate the impact of query hints on queries and adjust them as needed.
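
For example, to force a hash join between two illustrative tables, the OPTION clause works the same way:

SELECT o.order_id, c.customer_name
FROM orders AS o
JOIN customers AS c
    ON o.customer_id = c.customer_id
OPTION (HASH JOIN);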

For more details, refer to our documentation:

  1. Join hints (Transact-SQL)
  2. Query hints (Transact-SQL)

Simplifying search & introducing Filter in Object Explorer

The search and filter features in Fabric Data Warehouse empower users to efficiently navigate and manage their data.

The new search feature is designed for ease of discovery and intuitive use, allowing users to locate objects in the object explorer by entering keywords. The search function quickly finds matching objects and highlights the results within the object explorer.

The filtering feature in Object Explorer is an essential tool for managing large data warehouses and simplifying navigation within your warehouse environment. When dealing with numerous objects, such as schemas, tables, or stored procedures, finding specific items can be challenging. The filtering capability allows you to streamline this process effectively.

The filtering options allow for precise object selection based on criteria such as object type, created date, and last updated, enabling you to focus on the most relevant information for your exploration in the object explorer.

By leveraging this combination, users can significantly reduce the time spent searching or filtering data, allowing for more focus on troubleshooting, generating scripts for development and documenting objects in Object Explorer.

Open from SSMS & VS Code

Developers can now easily access their Fabric Warehouse through their preferred client tools. With a renewed focus on integrating with widely used developer tools, this enhancement prioritizes flexibility and convenience, enabling seamless connections with SQL Server Management Studio and Visual Studio Code.

This means you can dive right into your data analysis and management without any hassle, using the tools you know and love.

Developers have the option to open Fabric Warehouse in SQL Server Management Studio (SSMS) or Visual Studio Code, either from a workspace or within the warehouse itself.

You can open the warehouse directly in VS Code, or download VS Code first if you don’t have it installed.

Note that Visual Studio Code will install the mssql extension for you and pre-populate the server and database names in the connection to get you started.

You can also open or download SSMS to begin using your preferred tool.

Git status bar for the Fabric Warehouse artifact

The Git artifact status bar offers a comparable experience to the status bar in the workspace.
When accessing the DW item page, you can view the details of the connection between the workspace and the Git repository, such as:

  • The name of the branch to which the workspace is connected
  • The time of the last sync event between the workspace and the repository
  • A hyperlink to the most recent commit on the branch

The Git status bar is useful in the following scenarios:

  • It offers a user-friendly interface for developers, like that of Visual Studio Code and other applications that display the status of the connection to the remote repository at the artifact level.
  • More features, such as the ability to commit directly from the artifact page, will be added soon.

Tooltip support for built-in functions

The Fabric web editor provides robust tooltip support for built-in functions, enhancing the development experience by offering quick access to function details. When you hover over a built-in function in your query, the web editor displays an interactive tooltip. This tooltip includes the function’s name, possible parameters, and a brief description. For example, if you hover over the COUNT() function, the tooltip will show its syntax and a short description of what the function does.

This feature is particularly useful for quickly referencing function parameters and understanding their usage without leaving the query window. It helps streamline the coding process and reduces the need to manually look up function details.

Stay updated with IntelliSense

Our goal is to help Fabric developers write queries by enabling new T-SQL statements in Fabric Warehouse. Recently, we have integrated IntelliSense support for the following newly released features.

  1. FOR JSON – Announcing improved JSON support in Fabric DW
  2. COPY INTO – Column count check: COPY INTO (Transact-SQL) – Azure Synapse Analytics and Microsoft Fabric

JSON aggregates (Preview)

As part of our ongoing enhancements to JSON functionalities in Fabric DW, we are excited to announce the preview of two new JSON aggregate functions: JSON_OBJECTAGG and JSON_ARRAYAGG.

These aggregate functions simplify the process of concatenating columns within a GROUP BY operation and formatting them as JSON text.


Previously, achieving this required the use of the generic STRING_AGG() aggregate combined with manually concatenated and escaped column values to produce a valid JSON string. Now, with JSON_OBJECTAGG and JSON_ARRAYAGG, this process is streamlined and more efficient.
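
For example, a minimal sketch that builds a JSON object keyed by employee ID and a JSON array of names per department (the Employees table and its columns are illustrative):

SELECT department_id,
       JSON_OBJECTAGG(employee_id:employee_name) AS employees_by_id,
       JSON_ARRAYAGG(employee_name) AS employee_names
FROM dbo.Employees
GROUP BY department_id;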

These functions are already in public preview in Azure SQL Database and Azure SQL Managed Instance, and Fabric DW is now joining this preview. They will become generally available across all SQL flavors simultaneously.

Spatial analytic functions

Spatial analytics functions are now fully supported in Fabric DW and SQL analytics endpoints. Spatial functions enable you to perform complex calculations on geographical and geometrical shapes, such as determining the distance between points, checking whether a point is within a polygon, or determining whether shapes intersect. Previously, these functions were not fully supported; comprehensive support commenced in December 2024.

Fabric DW supports both geometry objects and functions for simple 2D geometries, and geography objects for more realistic shapes represented on the Earth’s surface. This includes all spatial reference systems available in SQL Server and Azure SQL databases.

While Fabric DW does not support storing spatial types directly, you can represent your spatial objects as float columns representing the (latitude, longitude) pairs or store complex shapes in VARBINARY columns in Well-Known Binary (WKB) format. You can then use spatial functions to convert them to spatial objects and apply spatial operations.

For example, the following query finds the number of trips starting near the Empire State Building in New York (40.748817, -73.985428):
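
A minimal sketch, assuming an illustrative dbo.Trip table that stores pickup coordinates as pickup_latitude and pickup_longitude float columns and using a 500-meter radius:

SELECT COUNT(*) AS trips_near_esb
FROM dbo.Trip
WHERE geography::Point(pickup_latitude, pickup_longitude, 4326)
        .STDistance(geography::Point(40.748817, -73.985428, 4326)) < 500;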


Be aware that spatial functions are among the most complex calculations you can perform in your DW, which might impact performance. To improve query performance, physically store a geo-index column that can approximately determine the location of an object or shape (e.g., a bounding box, quadkey, H3 index, geohash, or something similar) and prefilter data based on this index column instead of applying a direct spatial filter. Since spatial filters and joins are the most resource-consuming operations, they might impact performance if you rely only on them without additional indexing columns.

Once you have reduced your data set using geo-index columns, you can perform complex spatial analytics using geography and geometry methods.

SQL analytics endpoint performance improvement

We released an update to the SQL analytics endpoint that improves query performance and data freshness. Previously you might have encountered slow SELECT statements and stale data in your tables. With this update, metadata changes are synced more efficiently to the SQL analytics endpoint, resulting in faster SELECT query execution and data updates. This improvement ensures a more responsive and reliable experience for our users.

We’ll continue to enhance the SQL analytics endpoint based on your feedback, so make sure to comment or vote on Ideas.

Databases

Tenant Level Private Link (Preview)

We are excited to announce the preview of Tenant Level Private Link for SQL database in Fabric! This new feature enhances the security and privacy of your data by allowing you to connect to your SQL database through a private endpoint within your virtual network. With Private Link, you can now ensure that your data traffic remains within the Microsoft network, reducing exposure to the public internet and minimizing potential security risks. This integration simplifies network architecture and provides a seamless and secure connection experience for your SQL database in Fabric.

To enable Private Link in Fabric, start by creating a private endpoint within your virtual network (VNet) to securely connect to the Fabric service using a private IP address, as outlined in Set up and use private links for secure access to Fabric.

Next, enable the Private Link toggle in the Fabric Admin Portal for your tenant to allow VNet requests to access Fabric resources. Additionally, you can choose to completely disallow any connections other than via Private Link by enabling the Block Public Internet Access toggle.

For more information check out SQL database Overview (Preview).


Copilot for SQL database in Fabric Region Availability

Copilot for SQL database is now available in all supported regions listed in Fabric region availability!

Today, we are offering three key features:

  • Inline Code Completion for faster, smarter query writing.
  • ‘Explain the Query’ & ‘Fix Query Error’ Quick Actions to simplify complex tasks.
  • Sidecar Q&A Chat for answers and deeper understanding.

Before your business can start using Copilot capabilities in Microsoft Fabric, please make sure to enable Copilot in the tenant settings.

For more information, refer to our documentation.

Real-time Intelligence

Override late arrival tolerance in Activator

Late arrival tolerance refers to how long Activator waits for an event to arrive, be acknowledged, and be processed. This setting ensures that late events and events that arrive out of order have an opportunity to be included in the rule evaluation. The tradeoff is between getting more accurate rule evaluations by waiting longer for late data points to arrive, and running your rule sooner on potentially incomplete data so it activates earlier.

The default setting is 2 minutes, but you can now set the late arrival tolerance to a longer or shorter period.

Please note that this setting will not be shown for rules that are built on Power BI or KQL data.

Create new Activator items

As of November 2024, Real-time Intelligence and Activator are now Generally Available (GA)! You may need to work with your admin to ensure that you have all the capabilities available to you in Activator GA.

If your tenant is using the preview version of Data Activator but does not have Fabric enabled, you will no longer be able to create new Activator items. To keep using Activator and create new Activator items, enable Fabric for your tenant.

To enable Fabric, go to the admin portal and make sure that users are allowed to create Fabric items.


Please note that if you have delegated settings to other admins, you should also allow capacity admins to enable or disable this capability.

RTI ALM & APIs GA

Application Lifecycle Management (ALM) and Fabric REST APIs are now available for all RTI items: Eventstream, Eventhouse, KQL Database, Real-Time Dashboard, Query set, and Data Activator.

ALM includes both deployment pipelines and Git integration, which together let you manage change within your workspaces. This enables multiple scenarios: from simply recording changes in Git to keep an audit trail, to deploying your development workspace with all its dependencies to staging and production workspaces, to introducing changes via a feature branch connected to a feature workspace.

REST APIs enable more control over changes by allowing you to programmatically create, read, update, and delete (CRUD) your artifacts.

Data Factory

Mirroring

Mirroring now supports replicating source schemas

Mirroring in Fabric now supports replicating source schemas. When data is mirrored from various types of sources, your source schema hierarchy is preserved in the mirrored database. This ensures that your data remains consistently organized across different services, allowing you to consume it using the same logic in SQL analytics endpoint, Spark Notebooks, semantic models, and other references to the data.

Existing mirrored databases remain unchanged to maintain backward compatibility and avoid affecting downstream workloads. If you want to reorganize your tables with schemas, please recreate the mirrored database.

Please refer to schema support with Mirroring in Microsoft Fabric.

Delta column mapping support for Mirroring is now available

Mirroring in Fabric now supports Delta column mapping. Column mapping is a feature of Delta tables that allows users to include spaces and special characters such as ‘,;{}()\n\t=.’ in the table’s column names. With this new capability in mirroring, columns containing spaces or special characters in names can now be replicated from your source databases to the mirrored databases.

For tables with special characters in column names that are already under replication, you can update the mirrored database settings by removing and re-adding them to include those columns.

To learn more, refer to Delta column mapping support with Mirroring in Microsoft Fabric.

Mirroring now supports CI/CD (Preview)

Mirroring in Fabric now supports CI/CD capabilities, enhancing the efficiency and reliability of your development workflows. Users can integrate Git for source control and utilize ALM Deployment Pipelines, streamlining the deployment process and ensuring seamless updates to mirrored databases. To learn more about these new capabilities, refer to CI/CD for mirrored databases in Fabric (Preview).


Integrating SAP data into Open Mirroring

dab (dab – We are your company for SAP data analytics) is the first partner from our SAP ecosystem to announce support for Open Mirroring. With over 20 years of experience in SAP analytics, dab offers a variety of analytic solutions covering multiple lines of business, including accounting and procurement.

dab Nexus now integrates with Open Mirroring to synchronize data from various SAP sources, including SAP S/4HANA (on-premises and Private Cloud Edition), SAP ECC, CRM, SRM, SCM, and EWM.

For more information on Open Mirroring with dab Nexus, refer to: Microsoft Fabric Open Mirroring: Efficient and innovative.

To learn more about Open Mirroring, please refer to the documentation.

Copy Job

Simplify data ingestion with Copy Job: more connectors and better usability

Copy Job simplifies data ingestion, providing a seamless experience from any source to any destination. Whether you need batch or incremental copying, Copy Job provides the flexibility to meet your data needs while keeping things simple and intuitive.

Since the Public Preview launch at FabCon Europe in late September, we’ve quickly enhanced Copy Job with new features. We’re excited to announce that Copy Job now supports more connectors, including Snowflake and Azure SQL Managed Instance, for easier integration with your data sources. More connectors are coming soon!

We’re committed to making Copy Job as simple and intuitive as possible, and your feedback is key to achieving that goal. You can now easily configure the update method and schedule before creating a copy job, giving you greater control and flexibility right from the start.

Check out the details in What is Copy job (preview) in Data Factory.

Dataflow Gen2

CI/CD support for Dataflows in Fabric

We are delighted to share that CI/CD and Git integration support for Dataflow Gen2 is now available in preview!

You can opt into enabling these capabilities when creating a new Dataflow Gen2.

New Dataflow Gen2 item experience with the option to enable Git integration, deployment pipelines and Public API scenarios

With this new set of features, you will be able to seamlessly integrate your Dataflow Gen2 artifacts with your existing CI/CD pipelines and version control of your workspace in Fabric. This integration allows for better collaboration, versioning, and automation of your deployment process across dev, test, and production environments.

Learn more about these new capabilities: Dataflow Gen2 CI/CD and GIT source control integration are now in preview!

Single-line ribbon in Power Query editor

The default experience for Dataflow Gen2 in Fabric now uses the ribbon in its single-line mode. This brings consistency with other experiences within Microsoft Fabric.

The full ribbon mode will remain available. You can access the complete ribbon experience by selecting the expand button.

Conclusion

We hope that you enjoy the update! Be sure to join the conversation in the Fabric Community. As always, keep voting on Ideas to help us determine what to build next. We are looking forward to hearing from you!
