Microsoft Fabric Updates Blog

Microsoft Fabric March 2024 Update

Welcome to the March 2024 update.

We have a lot of great features this month including OneLake File Explorer, Autotune Query Tuning, Test Framework for Power Query SDK in VS Code, and many more!

Earn a free Microsoft Fabric certification exam! 

We are thrilled to announce the general availability of Exam DP-600, which leads to the Microsoft Certified: Fabric Analytics Engineer Associate certification.  

Microsoft Fabric’s common analytics platform is built on the instantly familiar Power BI experience, making your transition to Fabric Analytics Engineer easier. With Fabric, you can build on your prior knowledge – whether that is Power BI, SQL, or Python – and master how to enrich data for analytics in the era of AI. 

To help you learn quickly and get certified, we created the Fabric Career Hub. We have curated the best free on-demand and live training, exam crams, practice tests and more. 

And because the best way to learn is live, we will have free live learning sessions led by the best Microsoft Fabric experts from Apr 16 to May 8, in English and Spanish. Register now at the Learn Together page.

Also, become eligible for a free certification exam by completing the Fabric AI Skills Challenge. But hurry! The challenge only runs from March 19 to April 19, and free certs are first-come, first-served (limit one per participant; terms and conditions apply).

Reporting

Visual calculations update (preview)

You can now add and edit visual calculations on the service. You can add a visual calculation by selecting New calculation from the context menu on a visual after you publish a report to the service.

Also, after you publish a report that has visual calculations in it, you can access the visual calculations edit mode by selecting a visual calculation and choosing Edit calculation.

To learn more about visual calculations, read our announcement blog and our documentation.

Blogs: https://powerbi.microsoft.com/blog/visual-calculations-preview/

Docs: https://aka.ms/visual-calculations-docs

On-Object Interaction Updates

Why not both? To balance the needs of existing users who prefer to build visuals quickly in the pane with the needs of new users who want guidance when picking a visual type or appropriate field wells, you no longer have to choose one path or the other: now there's both!

This month, we streamlined the build pane and moved the visual suggestions feature inside the on-object build button. Need help building your visual? Use the on-object "suggest a visual" experience. Already know your way around? Use the build pane as you do today.

The gauge visual is now supported! It picks up the new on-object formatting sub-selections: simply double-click your gauge visual to enter format mode, then right-click the part of the visual you'd like to format using the mini-toolbar.

The pane switcher has been renamed to Pane manager and spruced up this month. Based on your feedback, we’ve updated the order of the pane listings and added the settings that pertain to the Pane manager directly in this menu. Let us know what you think!

Mobile layout auto-create (Preview)

You know that mobile optimized report layouts are the best way to view data in the Power BI mobile apps. But you also know that it requires extra work to create that layout. Well, not anymore…

As of this monthly update, you can generate a mobile-optimized layout with the click of a button! This long-awaited feature allows you to easily create mobile-optimized layouts for any new or existing report page, saving you tons of time!

When you switch to the mobile layout view in Power BI Desktop, if the mobile canvas is empty, you can generate a mobile layout just by selecting the Auto-create button.

The auto-create engine understands the desktop layout of your report and builds a mobile layout that considers the position, size, type, and order of the visuals that the report contains. It places both visible and hidden visuals, so if you have bookmarks that change a visual’s visibility, they will work in the automatically created mobile layout as well.

You can edit the automatically created mobile layout, so if the result is not exactly what you expected, you can tweak it to make it perfect for your needs. Think of it as a starting point you can use to shorten the way to that beautiful, effective, mobile-optimized report you envision.

To enjoy the new mobile layout auto-create capabilities, switch on the “Auto-create mobile layout” preview feature in Power BI Desktop: File > Options and settings > Options > Preview features > Auto-create mobile layout.

We invite you to try out the mobile layout Auto-create feature and share your feedback with us!

Expanding Spatial Data Integration: Shapefile Support in Azure Maps Visual

After successfully integrating WKT and KML formats in February, we’re now stepping it up a notch by extending our support to include the Shapefile format. With just two clicks, you can now seamlessly overlay your spatial data onto Azure Maps’ base map. Whether through file upload or a hosted file, Azure Maps’ reference layer empowers you to effortlessly incorporate your data. Get ready to elevate your data storytelling to new heights, embracing flexibility and unlocking fresh insights with our upcoming release!

Data bars in matrix subtotal/total conditional formatting

In this Power BI release, we’re excited to introduce an upgrade to the data bars for Matrix and Table visuals. Now, you have the flexibility to apply data bars to the following options:

  • Values Only: Display data bars based solely on the values within your visual.
  • Values and Totals: Extend data bars to include both individual values and their corresponding totals.
  • Total Only: Show data bars exclusively for the overall total.

This enhancement provides better control over your tabular visuals, reducing unnecessary noise and ensuring cleaner presentation.

Data labels alignment

We’ve made significant improvements to the data labels in our charts. Now, when you use a multi-line layout with title, value, and detail labels, you have the flexibility to horizontally align them. This means you can create cleaner, more organized visualizations by ensuring that your labels are neatly positioned. To experience this enhancement, follow these steps: 1) navigate to the Data Labels section, 2) click on Layout, and finally, 3) explore the Horizontal alignment options for aligning your labels.

Modeling

Write DAX queries in DAX query view with Copilot (Preview)

The DAX query view with Copilot is now available in public preview! Enable the feature in the Preview section of File > Options and settings > Options, click on DAX query view, and launch the in-line Copilot by clicking the Copilot button in the ribbon or using the shortcut CTRL+I.

With Fabric Copilot, you can generate DAX queries from natural language, get explanations of DAX queries and functions, and even get help on specific DAX topics. Try it out today and see how it can boost your productivity with DAX query view!

A more detailed blog post will be available soon.

Enhanced row-level security editor is enabled by default (Preview)

We are excited to announce the enhanced row-level security editor as the default experience in Desktop! With this editor, you can quickly and easily define row-level security roles and filters without having to write any DAX! Simply choose ‘Manage roles’ from the ribbon to access the default drop-down interface for creating and editing security roles. If you prefer using DAX or need it for your filter definitions, you can switch between the default drop-down editor and a DAX editor.

Learn more about this editor, including its limitations, in our documentation. Please continue to submit your feedback directly in the comments of this blog post.

Selection Expressions for calculation groups (preview)

Calculation groups just got more powerful! This month, we are introducing the preview of selection expressions for calculation groups, which allow you to influence what happens in case the user makes multiple selections for a single calculation group or does not select at all. This provides a way to do better error handling, but also opens interesting scenarios that provide some good default behavior, for example, automatic currency conversion. Selection expressions are optionally defined on a calculation group and consist of an expression and an optional dynamic format expression.

This new capability also comes with the extra benefit of potential performance improvements when evaluating complex calculation group items.

To define and manage selection expressions for calculation groups you can leverage the same tools you use today to work with calculation groups.

On a calculation group, you will be able to specify the following selection expressions, each consisting of the expression itself and an optional FormatStringDefinition:

  • multipleOrEmptySelectionExpression. This expression has a default value of SELECTEDMEASURE() and will be returned if the user selects multiple calculation items on the same calculation group or if a conflict between the user’s selections and the filter context occurs.
  • noSelectionExpression. This expression has a default value of SELECTEDMEASURE() and will be returned if the user did not select any items on the calculation group.

Here's an overview of each type of selection, comparing the current behavior we shipped before this preview with the new behavior, both without and with a selection expression defined on the calculation group:

Type of selection | Current behavior | New behavior without a selection expression | New behavior with a selection expression
Single selection | Calculation group selection is applied | No change to behavior | No change to behavior
Multiple selection | Calculation group is not filtered | Calculation group is not filtered | Evaluates the specified multipleOrEmptySelectionExpression
Empty selection | Error | Calculation group is not filtered | Evaluates the specified multipleOrEmptySelectionExpression
No selection | Calculation group is not filtered | Calculation group is not filtered | Evaluates the specified noSelectionExpression

Let’s look at some examples.

Multiple or Empty selections

If the user makes multiple selections on the same calculation group, the current behavior is to return the same result as if the user did not make any selections. In this preview, you can specify a multipleOrEmptySelectionExpression on the calculation group. If you define it, we evaluate that expression and its related dynamic format string and return the result. You can, for example, use this to inform the user about what is being filtered:

EVALUATE
{
    CALCULATE (
        [MyMeasure],
        'MyCalcGroup'[Name] = "item1" || 'MyCalcGroup'[Name] = "item2"
    )
}

-- multipleOrEmptySelectionExpression on MyCalcGroup:
IF (
    ISFILTERED ( 'MyCalcGroup' ),
    "Filters: "
        & CONCATENATEX (
            FILTERS ( 'MyCalcGroup'[Name] ),
            'MyCalcGroup'[Name],
            ", "
        )
)

-- Returns "Filters: item1, item2"

Previously, a conflict or an empty selection on a calculation group resulted in an error.

With the new behavior, this error is a thing of the past: we evaluate the multipleOrEmptySelectionExpression if it is present on the calculation group. If that expression is not defined, we do not filter the calculation group.

No selections

One of the best showcases for this scenario is automatic currency conversion. Today, if you use calculation groups to do currency conversion, the report author and user must remember to select the right calculation group item for the currency conversion to happen. With this preview, you are now empowered to do automatic currency conversion using a default currency. On top of that, if the user wants to convert to another currency altogether, they can still do that, but even if they deselect all currencies the default currency conversion will still be applied.

Current

Note how both the currency to convert to and the "conversion" calculation group item must be selected.

New

Notice how the user only has to select the currency to convert to.

Read more about selection expressions in our calculation groups documentation.

The selection expressions for calculation groups are currently in preview. Please let us know what you think!

DAX query view improvements (preview)

We released the public preview of DAX query view in November 2023, and in this release, we made the following improvements:

  1. Query tabs can now be re-ordered.
  2. A share feedback link has been added to the command bar.
  3. Coach marks have been added for DAX query view.

And we have released additional INFO DAX functions.

Learn more with these resources.

Service

Edit your data model in the Power BI Service – Updates

Below are the improvements coming this month to data model editing in the Service preview:

Autodetect relationships

Creating relationships for your semantic model on the web is now easier using autodetect relationships. Simply go to the Home ribbon and select the Manage relationships dialog. Then, choose ‘Autodetect’ and let Power BI find and create relationships for you.

Sort by column

On the web, you can now edit the sort-by-column property for a column in your semantic model.

Row-level security

We have made several improvements to the row-level security editor in the web. In the DAX editor you can now do the following actions:

  • Utilize IntelliSense to assist in defining your DAX expression.
  • Verify the validity of your DAX expression by clicking the check button.
  • Revert changes to your DAX expression by selecting the X button.

Please continue to submit your feedback directly in the comments of this blog post or in our feedback forum.

Undo/Redo, Clear all, and New filter cards in Explore

This month we’ve added a few new features to the new Explore experience.

Undo/Redo  

Now it's simple to undo your previous action, or use 'Reset all changes' to go back to the last saved state of your exploration.

Note: If you haven’t saved your exploration yet, then reset will clear your canvas back to blank.

Clear all  

The new 'clear all' feature allows you to wipe your canvas back to blank. This works great when using Explore as a whiteboarding space: maybe you have a new thought you'd like to explore and want to essentially erase what you have in one click. This is made simple with the new 'clear all' option.

New filter card styling  

When using the filtering experience in Explore you’ll now notice an update to the filter cards style and readability. We hope these improvements make filters easier to use and accessible for more users. Let us know what you think!

Deliver report subscriptions to OneDrive and SharePoint (Preview)

You can now send subscriptions to OneDrive and SharePoint (ODSP). With this update, all your large reports, both PBIX and paginated, can be sent to ODSP. At this time, the workspace must be backed by a Premium capacity or an equivalent Fabric capacity.

We currently support “Standard” subscriptions.

You need to select the “Attach full report” option.

We support more output formats for paginated reports.

Once you select the output format, you can select the OneDrive or SharePoint option, choose the location, and enter the subscription schedule to have your report delivered.

Learn more about subscribing to ODSP here. This feature will start lighting up in certain regions as soon as this week, but depending on the geography in which your Power BI tenant is located, it may take up to three weeks to appear. Also, this feature is not supported in sovereign clouds while in preview.

Mobile

Custom visual SSO support

Custom visuals that use the new authentication API are also supported when viewed in the Power BI Mobile apps. No additional authentication is required, ensuring that the data exploration experience in the mobile app is as delightful as possible, without any interruptions.

Developers

New title flyout for Power BI Desktop developer mode

You can quickly recognize when you are working on a Power BI Project (PBIP) by looking at the title bar.

If you click on the title bar, you will see a new flyout that is specific to Power BI Projects. It lets you easily locate the Power BI Project files as well as the display name settings for the report and the semantic model. You can also open the folder in File Explorer by clicking on the paths.

Rename to “Semantic Model” in Power BI Project files

Following the rename to "Semantic Model" announced last November, Power BI Project (PBIP) files also adhere to that naming change. Now, when saving as PBIP, the following changes apply:

  • The semantic model folder, "*.Dataset", is saved as "*.SemanticModel".
  • This applies only to new PBIP files; existing files keep their current folder name.
  • The "definition.pbidataset" file is renamed to "definition.pbism".

Hierarchical Identity filter API

API 5.9.0 introduces a new filter API that allows you to create a visual that can filter matrix data hierarchically based on data points. This is useful for custom visuals that leverage group-on keys and allow hierarchical filtering using identities. For more information, see the documentation.

Visualizations

New visuals in AppSource

  • Waterfall-Visual-Extended
  • Stacked Insights
  • Waterfall – What’s driving my variation?
  • Untap Text Box
  • CloudScope Image
  • neas-spc
  • Donut Chart image
  • orcaviz-enterprise

Dumbbell Bar Chart by Nova Silva

Your valuable feedback continues to shape our Power BI visuals, and we’re thrilled to announce exciting enhancements to the Dumbbell Bar Chart. In the latest release, we’ve introduced the capability to display multiple dumbbell bars in a single row, allowing for the presentation of more than two values in a streamlined manner. This update opens new possibilities, including the creation of the Adverse Event Timeline plot, or AE Timeline.

The AE Timeline serves as a graphical representation of the timing of adverse events in clinical trials or studies. Its primary objective is to visually convey when adverse events occur concerning the timing of treatment or exposure. Widely used in medical research, especially during safety data analysis in drug development, the AE Timeline is now seamlessly available within Power BI.

Experience the enhanced Dumbbell Bar Chart and the innovative AE Timeline by downloading it from AppSource. All features are readily accessible within Power BI Desktop, empowering you to evaluate this visual on your own data. Dive into enhanced functionality and discover new insights effortlessly.

Questions or remarks? Visit us at: https://visuals.novasilva.com/.

Date Picker by Powerviz

The Ultimate Date Slicer for Power BI.

The “First Day of Week” option was added in the recent version update.

The Date Picker visual offers a modern calendar view, Presets, Pop-up mode, Default Selection, Themes, and more, making it a must-have date slicer for Power BI reports.  Its rich formatting options help with brand consistency and a seamless UI experience.

Key Features:

  • Display Mode: Choose between Pop-up and Canvas modes.
  • Presets: Many commonly used presets like Today, Last Week, YTD, MTD, or create your own preset using a field.
  • Default Selection: Control the date period selected when the user refreshes or reopens the report.
  • Filter Type: Choose between Range and Start/End types.
  • Month Style: Select a single- or double-month date slicer.
  • Multiple Date Ranges: Flexibility to select multiple date ranges.
  • Themes: 15+ pre-built themes with full customization.
  • Holidays and Weekends: Customize the representation of holidays and weekends.
  • Import/Export JSON: Build templates and share your designs.

Many more features and customizable options.

🔗 Try Date Picker for FREE from AppSource

📊 Check out all features of the visual: Demo file

📃 Step-by-step instructions: Documentation
💡 YouTube Video:  Video Link

📍 Learn more about visuals: https://powerviz.ai/

✅ Follow Powerviz: https://lnkd.in/gN_9Sa6U

Drill Down Combo PRO

Drill Down Combo PRO lets report creators build impressive charts of categorical data. Choose from multiple chart types and create column, line, area, and their combination charts. Use vast customization options to make your chart unique while enhancing the readability of your data with features like conditional formatting and dynamic thresholds.

MAIN FEATURES:

  • Conditional formatting – compare results against forecasts by automatically adjusting formatting based on a numerical value.
  • Full customization – customize X and Y axes, the legend, outline, and fill settings.
  • Choose normal, 100% proportional, or zero-based stacking.
  • Set up to 4 static and/or dynamic thresholds to demonstrate targets.
  • Customize multiple series simultaneously with series and value label defaults.

POPULAR USE CASES:

  • Sales and marketing – sales strategies, results, marketing metrics
  • Human resources – hiring, overtime, and efficiency ratios by department.
  • Accounting and finance – financial performance by region, office, or business line
  • Manufacturing – production and quality metrics

ZoomCharts Drill Down Visuals are known for interactive drilldowns, cross-filtering, and rich customization options. They support interactions, selections, custom and native tooltips, filtering, bookmarks, and context menus.

Try Drill Down Combo PRO now by downloading the visual from AppSource. 

Learn More about Drill Down Combo PRO by ZoomCharts.  

PDF Uploader/Viewer

Upload and securely share any PDF file with your colleagues.

Introducing our PDF Uploader/Viewer visual!

Simply upload any PDF file and instantly share it with your colleagues.

This visual boasts several impressive capabilities:

  • Microsoft certification ensures that the visual does not interact with external services, guaranteeing that your PDF files are securely stored and encrypted within the report, in alignment with your report sensitivity settings.
  • It automatically saves your preferences, allowing you to navigate pages, adjust the zoom level, and scroll to emphasize specific sections. Your colleagues will view the exact portion of the PDF you highlighted.
  • You have the flexibility to add text or draw lines to underline key content.
  • Users can conveniently download the PDF file directly from the visual.

Learn more: https://appsource.microsoft.com/en-us/product/power-bi-visuals/pbicraft1694192953706.pdfuploaderandviewer?tab=Overview

Inforiver Premium Matrix

Inforiver Premium Matrix by Lumel delivers superior reporting capabilities for financial, paginated, IBCS, variance, management reporting, and executive scorecards with the flexibility and familiar user experience of Excel.

To bring visual formulas and a ton of additional functionality frequently sought after by the Power BI community, Inforiver leveraged a differentiated architecture compared to the native matrix. With the recently released dynamic drill SDK/API, we now offer Performance Mode, so you don't have to compromise between the initial load performance offered by the native matrix and the advanced capabilities offered by Inforiver. You can now load the first two levels of the hierarchy as the default dimensions and then drill down to the lower levels on demand, giving you the best of both worlds.

In addition to manual data input and what-if simulation capabilities, Inforiver's planning and forecasting capabilities are significantly enhanced with the upcoming 2.8 release. This includes a dedicated forecast toolbar, support for automatic rolling forecasts, dynamic handling of time-series extensions, and an option to distribute deficits to other time periods.

Inforiver notes and annotations are now context-aware and are dynamically updated based on the filter/slicer selection.

Try Inforiver today!

YouTube video: https://youtu.be/uBLw8xOWujc

Paginated Reports

Connect to new data sources from Power BI Report Builder (Preview)

You can now connect to new data sources such as Snowflake and Databricks using the “Get Data” button in Power BI Report Builder.

Follow the simple, click-through experience of Power Query online. Select the data source that you want to connect to.

If you want to use AAD, you need to create a shareable cloud connection. You can create one as documented here or use one that has been shared with you.

You might also select the shareable cloud connection from the “Connection” dropdown.  Make sure that the report consumer has permissions to the shareable cloud connection.

Once you have a connection, select Next.

You can transform the data that was selected.

In the Power Query editor, you can perform all the supported operations. Learn more about the capabilities of the Power Query editor.

The M query will be used to build your RDL dataset.

You can use this dataset to build your paginated report. You can publish the report to the service and share it. Learn more about connecting to more data sources from Power BI Report builder here.

Localized parameter prompts in Power BI Report Builder

Need a paginated report to support parameter prompts in more than one language? You no longer need to create several reports. You can simply set an expression for the prompt in Power BI Report Builder and specify the translated labels for a given language that the prompt should be displayed in. Learn more from the documentation on Localizing parameter prompts.

Core

System file updates for Git integration

Currently, when synchronizing Fabric items with Git, every item directory is equipped with two automatically generated system files—item.metadata.json and item.config.json. These files are vital for establishing and maintaining the connection between the two platforms.

As part of our continuous efforts to simplify the integration with Git, we have consolidated these files into a single system file – .platform. This new system file will encompass all the attributes that were previously distributed between the two files.

When you make new changes to Git, your system files will be automatically updated to the new version in conjunction with your modifications. Both your own changes and the new file updates will show as part of the commit operation. Additionally, any new projects exported from Power BI desktop via developer mode will adopt the new system file format, which implies that you need to update to the latest Power BI Desktop version in order to open exported items from Fabric. Beyond these adjustments, there will be no impact on your Git workflow.

More about this file and the attributes within it can be found here.

OneLake

OneLake File Explorer: Editing via Excel

With the latest release of OneLake File Explorer (v1.0.11.0), we are excited to announce that you can now update your files directly using Excel, mirroring the user-friendly experience available in OneDrive. This enhancement aims to streamline your workflow and provide a more intuitive approach to managing and editing your Excel documents.

Here’s how it works:

  • Open your file using Excel within your OneLake file explorer.
  • Make the necessary updates and save your data.
  • Close the file.

And that’s it! The moment you close the file, your file is updated, and you can view the latest changes through your browser online. This feature offers a convenient, hassle-free way to manage and update your data files via Excel.

Synapse

Data Warehouse

Simplifying table clones: Automatic RLS and DDM Transfer

In the sphere of data management, ensuring the security and confidentiality of sensitive information is critical. In previous releases of table clones, we added the ability to clone tables within and across schemas, both as of the current point in time and with time travel. However, cloning a table inherently clones the sensitive data it contains, presenting potential risks to data security and privacy. Table clones in Synapse Data Warehouse within Microsoft Fabric now automatically transfer row-level security (RLS) and dynamic data masking (DDM) from the source to the cloned table, near-instantaneously.

Row-level security (RLS) enables organizations to restrict access to rows in a table. When a table is cloned, the same limitations that exist at the source table are automatically applied to the cloned table as well. Dynamic data masking (DDM) allows organizations to define masking rules on specific columns, thereby helping protect sensitive information from unauthorized access. When a table is cloned, the masking rules that are applied at the source table are automatically applied to the cloned table.

Effective data management is interwoven with robust security practices. During the process of cloning, it is crucial not only to transfer security configurations accurately but also to ensure the tables that are cloned inherit the security and privacy configurations. This helps ensure compliance with the organization’s privacy regulations.

Extract and publish .sqlproj from the Warehouse Editor

We’re excited to announce the ability to extract and publish a SQL database project directly through the DW editor!

SQL Database Projects is an extension to design, edit, and publish schemas for SQL databases from a source-controlled environment. A SQL project is a local representation of SQL objects that comprise the schema for a single database, such as tables, stored procedures, or functions.

This feature enables three main use cases with no need for additional tooling:

  • Download a database project to develop the warehouse schema in client tools like SQL Database Projects in Azure Data Studio or VS Code.
  • Publish existing database projects to a new Fabric warehouse.
  • Extract a schema from a warehouse or SQL analytics endpoint and publish it to another warehouse.

To extract:

Click 'Download database project' in the ribbon (or use the context menu of the database in the object explorer).

To publish:

Create a brand-new warehouse in the Fabric portal. Upon entry, select SQL database projects.

Cold query performance improvements

Fabric stores data in Delta tables, and when the data is not cached, it needs to transcode data from Parquet file format structures to in-memory structures for query processing. With this improvement, transcoding is optimized further, and in our tests we observed up to 9% faster queries when data was not previously cached.

Warehouse Takeover API

Warehouses use the item owner's identity to connect to OneLake. This causes issues when the owner of the warehouse leaves the organization, has their account disabled, or has an expired password.

To solve this problem, we are happy to announce the availability of the Takeover API, which allows you to change the warehouse owner from the current owner to a new owner, which can be an SPN or an Organizational Account.

For more information, see Change Ownership of Fabric Warehouse.
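
For illustration, here is a minimal sketch of calling the Takeover API from Python with the requests library. The endpoint path and the placeholder IDs and token are assumptions based on the linked documentation, so verify them there before relying on this:

import requests

# Placeholder values; supply your own workspace ID, warehouse ID, and token.
WORKSPACE_ID = "<workspace-id>"
WAREHOUSE_ID = "<warehouse-id>"
TOKEN = "<bearer-token-with-permissions-on-the-warehouse>"

# Assumed endpoint shape, per the 'Change Ownership of Fabric Warehouse' docs.
url = (
    "https://api.powerbi.com/v1.0/myorg/groups/"
    f"{WORKSPACE_ID}/datawarehouses/{WAREHOUSE_ID}/takeover"
)

# POST with an empty body; the identity making the call becomes the new owner.
response = requests.post(url, headers={"Authorization": f"Bearer {TOKEN}"})
response.raise_for_status()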

Data Engineering

Autotune Query Tuning

We're excited to introduce the Autotune Query Tuning feature for Apache Spark, now available across all regions. Autotune leverages historical data from your Spark SQL queries to automatically fine-tune your configurations using the newest machine learning algorithms, ensuring faster execution times and enhanced efficiency. With Autotune, you can now surpass the performance gains of manually tuned workloads without the extensive effort and experimentation traditionally required. It starts with a baseline model for initial runs and iteratively improves as more data becomes available from repeated executions of the same workload. This smart tuning covers key Spark configurations, including spark.sql.shuffle.partitions, spark.sql.autoBroadcastJoinThreshold, and spark.sql.files.maxPartitionBytes, optimizing your Spark environment dynamically.

To activate it at the session level, simply enable it in your Spark session:

If you use Spark SQL:

%%sql

SET spark.ms.autotune.enabled=TRUE

If you use PySpark:

%%pyspark

spark.conf.set('spark.ms.autotune.enabled', 'true')

If you use Scala:

%%spark  

spark.conf.set("spark.ms.autotune.enabled", "true")

If you use SparkR:

%%sparkr

library(SparkR)

sparkR.conf("spark.ms.autotune.enabled", "true")

To enable Autotune Query Tuning for all notebooks and jobs attached to the environment, you can configure the Spark Setting on the environment level. This way, you can enjoy the benefits of automatic tuning without having to set it up for each session.

This feature aligns with our commitment to Responsible AI, emphasizing transparency, security, and privacy. It stands as a testament to our dedication to enhancing customer experience through technology, ensuring that Autotune not only meets but exceeds the performance standards and security requirements expected by our users.

Experimental Runtime 1.3 (Spark 3.5 and Delta 3.0 OSS)

We are introducing the Experimental Public Preview of Fabric Runtime 1.3 — the latest update to our Azure-integrated big data execution engine, optimized for data engineering and science workflows based on Apache Spark.

Fabric Runtime 1.3, in its experimental public preview phase, allows users early access to test and experiment with the newest Apache Spark 3.5 and Delta Lake 3.0 OSS features and APIs.

Queueing for Notebook Jobs

We are thrilled to announce a new feature: Job Queueing for Notebook Jobs. This feature aims to eliminate manual retries and improve the user experience for our customers who run notebook jobs on Microsoft Fabric.

Notebook jobs are a popular way to run data analysis and machine learning workflows on Fabric. They can be triggered by pipelines or a job scheduler, depending on the user’s needs. However, in the current system, notebook jobs are not queued when the Fabric capacity is at its max utilization. They are rejected with a Capacity Limit Exceeded error, which forces the user to retry the job later when the resources are available. This can be frustrating and time-consuming, especially for enterprise users who run many notebook jobs.

With Job Queueing for Notebook Jobs, this problem is solved. Notebook jobs that are triggered by pipelines or the job scheduler will be added to a queue and retried automatically when capacity frees up. The user does not need to do anything to resubmit the job. The status of these notebook jobs will be 'Not started' while queued and will change to 'In progress' when they start executing.

We hope that this feature will help our customers run their notebook jobs more smoothly and efficiently on Fabric.

New validation enhancement for “Load to table”

We are excited to announce an enhancement to the beloved “Load to table” feature to help mitigate any validation issues and make your data loading experience smoother and faster.

The new validation features will run on the source files before the load to table job is fired to catch any probable failures that might cause the job to fail. This way, you can fix the issues immediately, without needing to wait until the job runs into an error. The validation features will check for the following:

  • Unsupported table name: The validation feature will alert you if the table name is not in the right format and provide you with the supported naming conventions.
  • Unsupported file extension: The load to table experience currently only supports CSV and Parquet files, therefore the validation feature will alert you if the file is not in one of those formats ahead of time.
  • Incompatible file format: The file format of the source files must be compatible with the destination table. For example, if the destination table is in Parquet format, the source files must be in a format that can be converted to Parquet, such as CSV or JSON. The validation feature will alert you if the file format is not compatible.
  • Invalid CSV file header: If your CSV file header is not valid, the validation feature will catch it and alert you before the job is fired.
  • Unsupported relative path: The validation feature will alert you if the relative path is not supported so you can make the needed changes.
  • Empty data files: The source files must contain some data loaded onto the table. The validation feature will alert you if the source files are empty and suggest you remove them or add some data.

The validation feature is fully integrated with the "Load to table" feature, so you won't need any additional steps to leverage this functionality.

We hope you enjoy the new validation enhancement and find it useful for your data loading needs.

Notebook Spark executors resource utilization

We are excited to inform you that the feature for analyzing executors’ resource utilization has been integrated into Synapse Notebook. Now, you can view both the allocated and the running executor cores, as well as monitor their utilization during Spark executions within a Notebook cell. This new feature offers insights into the allocation and utilization of Spark executors behind the scenes, enabling you to identify resource bottlenecks and optimize the utilization of your executors.

Spark advisor feedback setting

We are thrilled to announce the introduction of new feedback settings for the Fabric Spark Advisor. With these settings, you can choose whether to show or hide specific types of Spark advice according to your needs. Additionally, you have the flexibility to enable or disable the Spark Advisor for your Notebooks within a workspace, based on your preferences.

Incorporating the Spark Advisor settings at the Fabric Notebook level allows you to maximize the benefits of the Spark Advisor, while ensuring a productive Notebook authoring experience.

Enable upstream view for Notebooks and Spark Job Definitions’ related pipelines

With the introduction of the hierarchy view in the Fabric Monitoring Hub, you can now observe the relationship between the Pipeline and Spark activities for your Synapse Notebook and Spark Job Definition (SJD). In the new ‘Upstream’ column of your Notebook or SJD run, you can see the corresponding parent Pipeline and click to view all sibling activities within that pipeline.

Data Science

New AI Samples

We are happy to announce the addition of three new samples to the Quick Tutorial category of DS samples on Microsoft Fabric. Two of these samples are designed to help streamline your data science workflow, enabling you to automatically and efficiently determine the optimal machine learning model for your case. The third sample walks you through seamlessly accessing the data in your Power BI semantic model, while also empowering Power BI users to streamline their workflows by leveraging Python for various automation tasks.

Our AutoML sample guides you through the process of automatically selecting the best machine learning model for your dataset. By automating repetitive tasks, such as model selection, feature engineering, and hyperparameter tuning, AutoML allows users to concentrate on data analysis, insights, and problem-solving.

Our Model Tuning sample provides a comprehensive walkthrough of the necessary steps to fine-tune your models effectively using FLAML. From adjusting hyperparameters to optimizing model architecture, this sample empowers you to enhance model accuracy and efficiency without the need for extensive manual adjustments.

Our Semantic Link sample provides a walkthrough on how to extract and calculate Power BI measures from a Fabric notebook using both the SemPy Python library and Spark APIs. Additionally, it explains how to use the Tabular Model Scripting Language (TMSL) to retrieve and create semantic models, as well as how to utilize the advanced refresh API to automate data refreshing for Power BI users.
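
As a taste of what the Semantic Link sample covers, here is a minimal sketch using SemPy in a Fabric notebook; the semantic model name, measure, and column below are placeholders for illustration:

import sempy.fabric as fabric  # available in Fabric notebooks

# List the measures defined in a semantic model (placeholder name).
print(fabric.list_measures("Sales Model"))

# Evaluate a measure grouped by a column; the result is a pandas DataFrame.
df = fabric.evaluate_measure(
    "Sales Model",
    measure="Total Sales",
    groupby_columns=["Geography[Region]"],
)
print(df.head())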

We are confident these new samples are useful resources to maximize the efficiency and effectiveness of your machine learning workflows. Please check them out and let us know your thoughts, as we are committed to continually improving your data science experience on Microsoft Fabric.

Accessibility Improvements

Exciting news! We’ve introduced several accessibility enhancements for ML experiments and model items in Fabric. Now, when you resize your window, the item pages will dynamically reflow to accommodate the change, ensuring a seamless user experience and improved accessibility for users with different screen sizes and devices. Additionally, we’ve added the ability to resize the customized columns and filter panels, empowering users to customize their view according to their preferences. Furthermore, users can hover over property, metric, or parameter names to see the full text, which is particularly helpful for quick browsing of the various properties.

Support for Mandatory MIP Label Enforcement

ML Model and Experiment items in Fabric now offer enhanced support for Microsoft Information Protection (MIP) labels. These labels ensure secure handling and classification of sensitive data. With the mandatory enforcement of MIP labels enabled, users are prompted to provide a label when creating an ML experiment or model. This feature ensures compliance with data protection policies and reinforces security measures throughout the development process.

Compare Nested Runs

We have added support for nested child runs in the Run List View for ML Experiments. This enhanced experience streamlines the analysis of nested runs, allowing users to effortlessly view various parent and child runs within a single view and seamlessly interact with them to visually compare results. At its core, MLflow empowers users to track experiments, which are essentially named groups of runs. A “run” denotes a single execution of a model training event, where parameters, metrics, tags, and artifacts associated with the training process are logged. The introduction of parent and child runs introduces a hierarchical structure to these runs. This approach brings several benefits, including organizational clarity, enhanced traceability, scalability, and improved collaboration among team members.
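
To see how parent and child runs are produced in the first place, here is a minimal MLflow sketch you could run in a Fabric notebook; the experiment name, parameter values, and metric are placeholders:

import mlflow

mlflow.set_experiment("nested-runs-demo")  # placeholder experiment name

# A parent run groups several child runs, e.g. one per hyperparameter value.
with mlflow.start_run(run_name="lr-sweep"):
    for lr in (0.01, 0.1, 1.0):
        with mlflow.start_run(run_name=f"lr={lr}", nested=True):
            mlflow.log_param("learning_rate", lr)
            mlflow.log_metric("accuracy", 0.9 - abs(lr - 0.1))  # dummy value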

Code-First AutoML in Public Preview

With the new AutoML feature in Fabric, you can automate your machine learning workflow and get the best results with less effort. AutoML, or Automated Machine Learning, is a set of techniques and tools that can automatically train and optimize machine learning models for any given data and task type. You don’t need to worry about choosing the right model and hyperparameters, as AutoML will do that for you. You can also track and examine your AutoML runs using Fabric’s MLFlow integration and use the new flaml.visualization module to generate interactive plots of your outcomes. Fabric also supports many Spark and single-node model learners, ensuring that you can find the best fit for your machine learning problem.
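
For a flavor of the code-first experience, here is a minimal AutoML sketch using FLAML, which Fabric's AutoML builds on; the scikit-learn toy dataset and the 60-second budget are stand-ins for illustration:

from flaml import AutoML
from sklearn.datasets import load_iris
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

# Search for the best model and hyperparameters within a 60-second budget.
automl = AutoML()
automl.fit(X_train, y_train, task="classification", metric="accuracy", time_budget=60)

print(automl.best_estimator)  # name of the winning learner, e.g. "lgbm"
print(accuracy_score(y_test, automl.predict(X_test)))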

Read this article for more information on how to get started with AutoML in Fabric notebooks.

Code-First Hyperparameter Tuning in Public Preview

Hyperparameters are set prior to the training phase and include elements like learning rate, number of hidden layers in a neural network, and batch size. These settings are crucial as they greatly influence a model’s accuracy and ability to generalize from the training data.

We’re excited to announce that FLAML is now integrated into Fabric for hyperparameter tuning. Fabric’s `flaml.tune` feature streamlines this process, offering a cost-effective and efficient approach to hyperparameter tuning. The workflow involves three key steps: defining your tuning objectives, setting a hyperparameter search space, and establishing tuning constraints.

Additionally, Fabric now also includes enhanced MLFlow Integration, allowing for more effective tracking and management of your tuning trials. Plus, with the new `flaml.visualization` module, you can easily analyze your tuning trial. This suite of plotting tools is designed to make your data exploration and analysis more intuitive and insightful.
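
Here is a minimal sketch of the three-step workflow with flaml.tune; the toy objective function stands in for a real training-and-validation routine:

from flaml import tune

# 1) Tuning objective: return the metric to optimize for a given config.
def evaluate(config):
    loss = (config["x"] - 3) ** 2 + config["y"]  # toy objective
    return {"loss": loss}

# 2) Hyperparameter search space and 3) tuning constraints (a trial budget here).
analysis = tune.run(
    evaluate,
    config={
        "x": tune.uniform(-10, 10),
        "y": tune.choice([0, 1, 2]),
    },
    metric="loss",
    mode="min",
    num_samples=30,
)
print(analysis.best_config)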

Read this article for more information on how to get started with hyperparameter tuning in Fabric.

Real-time Analytics

Eventhouse

Eventhouse is now available for external customers, offering a groundbreaking solution that optimizes performance and cost by sharing capacity and resources across multiple databases. With unified monitoring and management features, Eventhouse provides comprehensive oversight at both aggregate and per-database levels.

This tool efficiently handles large data volumes, making it ideal for real-time analytics and exploration scenarios. It excels in managing real-time data streams, allowing organizations to ingest, process, and analyze data with near real-time capabilities. Eventhouses are scalable, ensuring optimal performance and resource utilization as data volumes grow.

In Fabric, Eventhouses serve as the storage solution for streaming data and support semi-structured and free-text analysis. They provide a flexible workspace for databases, enabling efficient management across multiple projects.

Learn more:

Create an Eventhouse (Preview) – Microsoft Fabric | Microsoft Learn

Eventhouse overview (Preview) – Microsoft Fabric | Microsoft Learn

Eventhouse Minimum Consumption

To optimize costs, Eventhouse suspends the service when not in use, with a brief reactivation latency. For highly time-sensitive systems, use Minimum Consumption to maintain service availability at a selected minimum level, paying for the chosen compute without premium storage charges. This compute is available to all databases within the Eventhouse.

For instructions on enabling minimum consumption, see Enable minimum consumption.

Query Azure Data Explorer data from Queryset

Connecting to and using data in an Azure Data Explorer cluster is now available from Fabric's KQL Queryset. This feature enables you to connect to Azure Data Explorer clusters from Fabric using a user-friendly interface. Once a connection is made, you can easily and seamlessly access and analyze your data in Azure Data Explorer.

Fabric’s powerful query management and collaboration tools are now available for you, over Azure Data Explorer clusters data. You can save, organize, and share your queries using Fabric’s KQL Queryset, which supports different levels of sharing permissions for your team members. Whether you want to explore your data, or collaborate on insights, you can do it all with Fabric and Azure Data Explorer.

Learn more: Query data in a KQL queryset – Microsoft Fabric | Microsoft Learn

Update records in a KQL Database (Public Preview)

Fabric KQL Databases are optimized for append ingestion.

KQL Databases already support the .delete command, allowing you to selectively delete records.

We are now introducing the .update command.  This command allows you to update records by deleting existing records and appending new ones in a single transaction.

This command comes with two syntaxes: a simplified syntax covering most scenarios efficiently, and an expanded syntax giving you maximum control.

For more details, please go to this dedicated blog.


Recent Enhancements to the Event Processor in Eventstream

Event Processor is a no-code editor in Eventstream that enables you to design stream transformation logic, such as filtering, aggregating, and converting data types, before routing to various destinations in Fabric. With the recent enhancements to the Event Processor, you now have even greater flexibility in transforming your data stream. Here are the updates:

  1. Personalize operation nodes and easily filter out ‘null’ values from your data
  2. Manage and rename your column fields easily in the Aggregate operation
  3. Change your values to different data types using the Manage Field operation

Incoming events throughput up to 100 MB/s

With the introduction of the 'Event Throughput' setting, you can now select the incoming events throughput rate for your Eventstream. This feature allows you to scale your Eventstream from less than 1 MB/s up to 100 MB/s.

Retention for longer than 1 day

With the addition of the 'Retention' setting, you can now specify the duration for which your incoming data needs to be retained. The default retention period is set to 1 day.

Platform Monitoring

Capacity Metrics support for Pause and Resume

Fabric Pause and Resume is a capacity management feature that lets you pause F SKU capacities to manage costs. When your capacity isn't operational, you can pause it to enable cost savings; later, when you want to resume work on your capacity, you can reactivate it. Fabric Capacity Metrics has been updated with new system events and reconciliation logic to simplify analysis of paused capacities.

Learn more:

Pause and resume your capacity – Microsoft Fabric | Microsoft Learn

Monitor a paused capacity – Microsoft Fabric | Microsoft Learn

Data Factory

Dataflow Gen2

Privacy levels support in Dataflows

You can now set privacy levels for your connections in Dataflows. Privacy levels are critical to configure correctly so that sensitive data is only viewed by authorized users.

Furthermore, data sources must also be isolated from other data sources so that combining data has no undesirable data transfer impact. Incorrectly setting privacy levels may lead to sensitive data being leaked outside of a trusted environment.

You can set this privacy level when creating a new connection.

Enhancement to Manage Connections

Manage connections is a feature that lets you see, at a glance, the connections in use by your Dataflow and general information about those connections.

We are happy to release a new enhancement to this experience: you can now see a list of all the data sources available in your Dataflow, even the ones without a connection set for them!

For the data sources without a connection, you can set a new connection from within the manage connections experience by clicking the plus sign in the specific row of your source.

Furthermore, whenever you unlink a connection, the data source will no longer disappear from this list if it still exists in your Dataflow definition. It will simply appear as a data source without a connection set until you link a connection, either in this dialog or through the Power Query editor experience.

Test Framework for Power Query SDK in VS Code

We're excited to announce the availability of a new Test Framework in the latest release of the Power Query SDK! The Test Framework gives Power Query SDK developers access to standard tests and a test harness to verify the direct query (DQ) capabilities of an extension connector. With this new capability, developers will have a standard way of verifying connectors and a platform for adding additional custom tests. We envision this as the first step in enhancing the developer workflow with increased flexibility and productivity in terms of the testing capabilities provided by the Power Query SDK.

The Power Query SDK Test Framework is available on GitHub. It requires the latest release of the Power Query SDK, which wraps the Microsoft.PowerQuery.SdkTools NuGet package containing the PQTest compare command.

What is Power Query SDK Test Framework?

The Power Query SDK Test Framework is a ready-to-go test harness with pre-built tests that standardize the testing of new and existing extension connectors, providing the ability to perform functional, compliance, and regression testing that can be extended for testing at scale. It helps address the need for a comprehensive test framework that satisfies the testing needs of extension connectors.


To get started, visit the Power Query SDK Test Framework repository on GitHub.

General Availability of VNET Gateway

The VNET Data Gateway for Fabric and Power BI is Generally Available!

The VNET Data Gateway is a network security offering that lets you connect your Azure and other data services to Microsoft Fabric and the Power Platform. You can run Dataflow Gen2, Power BI Semantic Models, Power Platform Dataflows, and Power BI Paginated Reports on top of a VNET Data Gateway to ensure that no traffic is exposed to public endpoints. In addition, you can force all traffic to your data source through the gateway, allowing for comprehensive auditing of secure data sources. To learn more and get started, visit VNET Data Gateways.

Browse Azure resources in Get Data

Using the regular path in Get Data to create a new connection, you always need to fill in your endpoint, URL, or server and database name when connecting to Azure resources like Azure Blob Storage, Azure Data Lake Storage Gen2, and Synapse. This is a tedious process that does not allow for easy data discovery.

With the new ‘browse Azure’ functionality in Get Data, you can easily browse all your Azure resources and automatically connect to them, without going through manually setting up a connection, saving you a lot of time.


Browse Azure resources with Get Data | Microsoft Fabric Blog | Microsoft Fabric

Block sharing of shareable cloud connections at the tenant level

By default, any user in Fabric can share their connections if they have the following user role on the connection:

  • Connection owner or admin
  • Connection user with sharing

Sharing a connection in Fabric is sometimes needed for collaboration within the same workload or when sharing the workload with others. Connection sharing in Fabric makes this easy by providing a secure way to share connections with others for collaboration, but without exposing the secrets at any time. These connections can only be used within the Fabric environment.

If your organization does not allow connection sharing or wants to limit the sharing of connections, a tenant admin can restrict sharing as a tenant policy. The policy allows you to block sharing within the entire tenant.
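For auditing, an admin can read the current tenant settings through the Fabric admin REST API. Below is a minimal Python sketch using the List Tenant Settings endpoint; the exact settingName string for the connection-sharing policy is an assumption, so the sketch simply surfaces candidate settings.

import requests
from azure.identity import DefaultAzureCredential

token = DefaultAzureCredential().get_token(
    "https://api.fabric.microsoft.com/.default").token

resp = requests.get(
    "https://api.fabric.microsoft.com/v1/admin/tenantsettings",
    headers={"Authorization": f"Bearer {token}"},
)
resp.raise_for_status()

for setting in resp.json()["tenantSettings"]:
    # surface any setting that looks related to connection sharing
    if "connection" in setting["settingName"].lower():
        print(setting["settingName"], "->", setting["enabled"])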



Allow schema changes in output destinations

When loading into a new table, the automatic settings are on by default. With the automatic settings, Dataflow Gen2 manages the mapping for you. This gives you the following behavior (a conceptual sketch follows the list):

  • Update method replace: Data will be replaced at every dataflow refresh. Any data in the destination will be removed. The data in the destination will be replaced with the output data of the dataflow.
  • Managed mapping: Mapping is managed for you. When you need to change your data or query to add a column or change a data type, mapping is automatically adjusted when you republish your dataflow. You do not have to go into the data destination experience every time you make changes, allowing for easy schema changes when you republish the dataflow.
  • Drop and recreate table: To allow for these schema changes, on every dataflow refresh, the table will be dropped and recreated. Your dataflow refresh will fail if you have any relationships or measures added to your table.
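As a conceptual illustration of this replace-plus-managed-mapping behavior (not Fabric internals), the sketch below uses SQLite to show why dropping and recreating the table lets schema changes flow through on republish; all names are hypothetical.

import sqlite3

def refresh_replace(conn, table, columns, rows):
    # Drop and recreate the table with the query's current schema,
    # then load the new output; existing destination data is removed.
    cur = conn.cursor()
    cur.execute(f"DROP TABLE IF EXISTS {table}")
    cur.execute(f"CREATE TABLE {table} ({', '.join(columns)})")
    cur.executemany(
        f"INSERT INTO {table} VALUES ({', '.join('?' * len(columns))})", rows)
    conn.commit()

conn = sqlite3.connect(":memory:")
refresh_replace(conn, "sales", ["region TEXT", "amount REAL"],
                [("West", 100.0), ("East", 80.5)])
# A later refresh with an added column simply rebuilds the table.
refresh_replace(conn, "sales", ["region TEXT", "amount REAL", "units INTEGER"],
                [("West", 100.0, 3)])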


Manual settings

By turning off the automatic settings, you get full control over how your data is loaded into the data destination. You can make any changes to the column mapping, such as changing the source type or excluding any column that you do not need in your data destination.


Cancel Dataflow Refresh

Canceling a dataflow refresh is useful when you want to stop a refresh during peak time, when a capacity is nearing its limits, or when a refresh is taking longer than expected. Use the refresh cancellation feature to stop refreshing dataflows.

To cancel a dataflow refresh, select the Cancel refresh option, found in the workspace list or lineage views, for a dataflow with an in-progress refresh.



Once a dataflow refresh is canceled, the dataflow’s refresh history status is updated to reflect the cancellation status.
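If you need to cancel refreshes programmatically, the Power BI REST API exposes dataflow transactions; a minimal Python sketch follows. Whether Gen2 dataflows honor this Gen1-era endpoint is an assumption on our part (the portal experience above is the documented path), and all IDs are placeholders.

import requests
from azure.identity import DefaultAzureCredential

token = DefaultAzureCredential().get_token(
    "https://analysis.windows.net/powerbi/api/.default").token
headers = {"Authorization": f"Bearer {token}"}
group_id, dataflow_id = "<workspace-id>", "<dataflow-id>"

# Find the in-progress transaction for the dataflow, then cancel it.
txns = requests.get(
    f"https://api.powerbi.com/v1.0/myorg/groups/{group_id}"
    f"/dataflows/{dataflow_id}/transactions", headers=headers,
).json()["value"]
active = next(t for t in txns if t["status"] == "InProgress")  # status string assumed
requests.post(
    f"https://api.powerbi.com/v1.0/myorg/groups/{group_id}"
    f"/dataflows/transactions/{active['id']}/cancel", headers=headers,
).raise_for_status()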


Certified connector updates 

We’re pleased to announce updates to our certified connectors.

Data pipeline

UC Support in Azure Databricks activity

We are excited to announce that the Azure Databricks activity now supports Unity Catalog. With this update, you can configure your Unity Catalog Access Mode for added data security.

Find this update under Additional cluster settings. 


For more information about this activity, read https://aka.ms/AzureDatabricksActivity

Semantic Model Refresh activity

We are excited to announce the availability of the Semantic Model Refresh activity for data pipelines. With this new activity, you can create connections to your Power BI semantic models and refresh them.


To learn more about this activity, read https://aka.ms/SemanticModelRefreshActivity
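Conceptually, the activity kicks off the same kind of refresh you can trigger yourself through the Power BI refresh REST API. A minimal Python sketch with placeholder IDs:

import requests
from azure.identity import DefaultAzureCredential

token = DefaultAzureCredential().get_token(
    "https://analysis.windows.net/powerbi/api/.default").token
group_id, dataset_id = "<workspace-id>", "<semantic-model-id>"

resp = requests.post(
    f"https://api.powerbi.com/v1.0/myorg/groups/{group_id}"
    f"/datasets/{dataset_id}/refreshes",
    headers={"Authorization": f"Bearer {token}"},
    json={"type": "full"},  # enhanced refresh request body
)
resp.raise_for_status()  # 202 Accepted: the refresh is queued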

Improved performance tuning tips

A more intuitive experience and more insightful performance tuning tips are now available in Data Factory data pipelines. These tips provide useful, accurate advice on staging, the degree of copy parallelism, and other settings to optimize your pipeline performance.


On-Premises Connectivity with Fabric Pipeline Public Preview

On-premises connectivity for Fabric pipelines is now in public preview. This enhancement empowers users to effortlessly transfer data from their on-premises environments to Fabric OneLake, Fabric’s centralized data lake solution.

With this capability, users can harness high-performance data copying mechanisms to efficiently move their data to Fabric OneLake. Whether it’s critical business information, historical records, or sensitive data residing in on-premises systems, on-premises connectivity ensures seamless integration into Fabric’s centralized data lake infrastructure.

On-premises connectivity with Fabric pipeline enables faster and more reliable data transfers, significantly reducing the time and resources required for data migration and integration tasks. This efficiency not only streamlines data integration processes but also enhances the accessibility and availability of on-premises data within the Fabric ecosystem.

Data Activator

New Expressions “Changes by”, “Increases by”, and “Decreases by”

When setting conditions on a trigger, we’ve added a feature that allows you to detect when there’s been a change in your data by absolute number or percentage.

You can also specify whether the condition should be in comparison to the last measurement or from a specified point in time, which is denoted as “from time ago”. “From last measurement” computes the difference between two consecutive measurements, regardless of the amount of time that elapsed between the two measurements.

Meanwhile, “from time ago” compares your data to a previous point in time that you have specified. For example, you can monitor your refrigerator temperature and check whether it has changed from 32 degrees five minutes ago. If the temperature has changed after five minutes, the trigger will send an alert. However, if the temperature spiked within those five minutes and then fell back to 32 degrees, the trigger will not send an alert.
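To make the distinction concrete, here is a small Python sketch of the “from time ago” comparison with hypothetical helper names; it shows why a spike that recovers within the window does not fire.

from datetime import datetime, timedelta

def changes_by_from_time_ago(history, now, offset, threshold):
    # history: list of (timestamp, value) pairs, oldest first.
    baseline_time = now - offset
    # Compare against the reading closest to the point in the past...
    baseline = min(history, key=lambda tv: abs(tv[0] - baseline_time))[1]
    current = history[-1][1]
    # ...and fire only if the absolute change meets the threshold.
    return abs(current - baseline) >= threshold

now = datetime(2024, 3, 1, 12, 0, 0)
fridge = [
    (now - timedelta(minutes=5), 32.0),
    (now - timedelta(minutes=3), 39.0),  # a spike...
    (now, 32.0),                         # ...that recovered
]
# False: the temperature is unchanged versus five minutes ago, so no alert.
print(changes_by_from_time_ago(fridge, now, timedelta(minutes=5), 1.0))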

New Expressions “New data arrives” and “No new data arrives”

When setting conditions on a trigger, we’ve added a feature that allows you to detect when new data does or doesn’t arrive on a specified column.

To use “New data arrives”, simply specify the column you want to monitor in the “Select” card, then specify “New data arrives” in the “Detect” card. Your trigger will now send an alert every time new data comes in. Note that an alert is sent even if new data arrives with the same value as before; null values, however, will not cause an alert.

For example, suppose you want to be sent an alert every time there’s new data on a truck’s location. If the system gets data that says the truck is in Redmond, an alert will be sent. Next, if the system gets data that says the truck is in Bellevue, an alert will be sent. Then if the system gets more data that says the truck is in Bellevue, an alert will be sent.

To use “No new data arrives”, specify in the “Detect” card the duration over which the trigger monitors. The duration is the maximum time you want the trigger to wait for new data to arrive. If no new data comes in within that time, an alert will be sent.

For example, suppose you have a temperature sensor that sends data every second. You want to be alerted if the sensor stops sending data for more than 10 seconds. You can set up the “No new data arrives” condition with duration = 10. If the sensor keeps sending data, you will not get any alert.
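The same logic, as a hedged Python sketch with hypothetical names: the condition fires only when the gap since the last event exceeds the configured duration.

from datetime import datetime, timedelta

def no_new_data(last_event_time, now, duration):
    # True means the watchdog fires: nothing arrived within `duration`.
    return now - last_event_time > duration

last_reading = datetime(2024, 3, 1, 12, 0, 0)

print(no_new_data(last_reading, datetime(2024, 3, 1, 12, 0, 5),
                  timedelta(seconds=10)))   # False: still within 10 seconds
print(no_new_data(last_reading, datetime(2024, 3, 1, 12, 0, 15),
                  timedelta(seconds=10)))   # True: sensor went quiet, alert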

Other

Compliance

Microsoft Fabric is now HIPAA compliant. We are excited to announce that Microsoft Fabric, our all-in-one analytics solution for enterprises, has achieved new certifications for HIPAA as well as ISO 27017, ISO 27018, ISO 27001, and ISO 27701. These certifications demonstrate our commitment to providing the highest level of security and privacy for our customers’ data. Read the full announcement.
