Microsoft Fabric Updates Blog

Microsoft Fabric October 2023 update

We have a lot of features this month, including updates to Item type icons, Keyword-Based Filtering of Tenant Settings, On-object Interaction updates, Eventstream Kafka Endpoints, and many more. Continue reading for more details on our new and updated features!

  • Core
  • Admin
  • Power BI
  • Synapse
  • Data Factory
  • Data Activator

Core

Item type icons

We’ve heard your feedback that the icons across Fabric are difficult to tell apart in large list views and other areas of the UI, and that their overall appearance was overly monochromatic and flat.

Our design team has completed a rework of the item type icons across the platform to make them easier to parse visually and to help users build an intuitive sense of which semantic category of data work each icon belongs to.

To learn more about the thinking behind this update, see the detailed blog post.

Admin

Keyword-Based Filtering of Tenant Settings

Microsoft Fabric has recently introduced keyword-based filtering for the tenant settings page in the admin portal. This new feature allows admins to locate the necessary tenant settings quickly and easily by filtering them based on keywords.

To use keyword-based filtering, tenant admins simply enter the keywords they are looking for into the search bar at the top of the tenant settings page. The portal will then filter and display only those settings with matching keywords in the title or description. For instance, an admin could search for “preview” to access all tenant settings currently in preview, or “B2B” to view settings associated with the B2B data sharing feature.

To learn more about the feature, check out this blog post.

Power BI

Reporting

Power BI Desktop OneDrive and SharePoint integration

OneDrive and SharePoint integrations are now easier than ever in Power BI Desktop! While the ability to open, save, and share reports in OneDrive and SharePoint was released to preview in May, the capabilities are now improved and on by default.

You can:

  • Open reports stored in OneDrive and SharePoint through the file menu.
  • Save files directly to OneDrive and SharePoint.
  • Share reports stored in OneDrive and SharePoint directly from Power BI Desktop.

These are important changes because many Power BI authors use OneDrive and SharePoint to collaborate on their reports before publishing through the Power BI service, and these updates streamline that workflow. Ad-hoc reporting becomes simpler, and new users starting out will be comfortable in the familiar Office interface.

The new Desktop features are complemented by the ability to view Power BI reports stored in OneDrive and SharePoint directly in your browser. Previously, viewing a report stored in OneDrive or SharePoint required downloading both the Power BI file and Power BI Desktop. The new capability allows users to interact with their reports in seconds.

We understand that every organization has its unique needs and preferences. If you prefer not to have some of these features available in your organization, learn how to turn them off in the Fabric admin portal.

 

Learn more about these features in our public preview announcement blog post.

On-object Interaction Updates (preview)

New! Date Hierarchy on data flyout

When working with dates, you may choose to swap from a hierarchy to the raw date field. This is still available when right-clicking directly on the date field. For greater discoverability, we’ve now added the ability to switch between the date hierarchy and the raw date on the data flyout as well.

Option 1: Swap using right click.


New! Option 2: Swap using data flyout.


New! Placeholder text for direct text editing

Previously, when turning on a text element such as a title for tables, there was no reserved space to begin typing characters directly on-object; users had to go to the format pane to add a title. Now, when turning on a text element that does not have an “auto” value, we show a placeholder so users can begin typing directly on the visual.


When using direct text edit, you’ll also see placeholders appear if you accidentally delete all the characters while the text element is still ON. This again reserves the space for you to come back and add a text value using on-object. Placeholders only appear when the visual is selected while editing; deselecting the visual removes all placeholders so authors can preview what will be published.

New! Ribbon and Funnel charts now support on-object formatting.


Power BI Home in Desktop


Deduplication rules for composite models on Power BI Datasets and Analysis Services

Tables and measures in your model must have unique names. If you use composite models on Power BI datasets and Analysis Services, it’s easy to get into a situation where table and measure names are duplicated. Up to this point, when that happened, one of the tables or measures would be renamed for you. For example, if you created a composite model from two sources and both sources defined a table called ‘Customer’, one of the tables would be renamed ‘Customer 2’. This resulted in confusing situations, as it was not clear which source the ‘Customer 2’ table came from. The same applies to measures: if you had two sources that both contained a measure called ‘Total Sales’, one would be renamed ‘Total Sales 2’ in the composite model.

This month we are giving you more control! You can now apply a name disambiguation rule to a source in a composite model when you anticipate name conflicts with tables or measures from another source. You can specify text to be added as a prefix or suffix to table names, measure names, or both. Additionally, you can choose to add that text only when a duplicate occurs, or all the time. Going back to the example above, let’s say that one of the sources you are combining is for marketing and the other is for sales. You can now set up a deduplication rule on the source connection so the ‘Customer’ table from the marketing source is named ‘Customer (marketing)’:


You will find these options under Settings in the dialog that appears when you set up the composite model connection to a Power BI dataset or Analysis Services model:


After you make the connections and set up the deduplication rule, your field list will show both ‘Customer’ and ‘Customer (marketing)’ according to the deduplication rule you set up:


Note that you can:

  • Specify whether you want the text to be added to the table or measure name as a prefix or a suffix.
  • Apply the deduplication rule to tables, measures, or both.
  • Choose to apply the deduplication rule only when a name conflict occurs or apply it all the time. The default is to apply the rule only when duplication occurs. In our example, any table or measure from the marketing source that does not have a duplicate in the sales source will not get a name change.

If you do not specify a deduplication rule, or the deduplication rules you specified do not resolve the name conflict, the standard deduplication actions are still applied.

Read more about composite models on Power BI datasets and Analysis Services in our documentation.

Modeling

Edit your data model in the Power BI Service – Updates

Data model editing in the Service was released to preview in April. We’ve been busy responding to your feedback and enhancing the experience. Below are the improvements coming later this month:

Manage relationships dialog.

Now you can easily view and edit all the relationships within your data model in the Service! In the Home tab, simply select the ‘Manage relationships’ button.


This will open the revamped ‘Manage relationships’ dialog, which provides a comprehensive view of all your relationships, along with their key properties, in one convenient location. From here you can then choose to create new relationships or edit an existing relationship.


Additionally, you have the option to filter and focus on specific relationships in your model based on cardinality and cross filter direction.


Mark as date table

Within the Service, you can now mark a table in your data model as a date table. Marking a date table in your model allows you to use this table for various date-related elements including visuals, tables, quick measures, and more, with full Time Intelligence support. To set a date table in the Service, right-click on the desired table and choose ‘Mark as date table > Mark as date table’ in the menu that appears.


Next, specify the date column by selecting it from the dropdown menu within the ‘Mark as date table’ dialog. Power BI will then perform validations on the selected column and its data to ensure it adheres to the ‘date’ data type and contains only unique values.


Please continue to submit your feedback directly in the comments of this blog post or in our feedback forum.

Model explorer public preview with calculation group authoring and creating relationships in the properties pane.

We are excited to announce that the model explorer is now available for public preview in the model view. You can see all of your dataset’s semantic modeling objects in one place and easily navigate between them. Finally, full visibility into the semantic model!


An additional properties pane is now shown for the dataset semantic model, and new icons appear in the Data pane. Additional UI changes for the Data pane and properties pane will continue through December.


Also note that measure groups will always show at the top, followed by calculation groups, and then finally the other tables in the model.

From the model explorer, not only can you see calculation groups with their calculation items, but you can also create and edit them in Desktop! Calculation groups are a powerful feature that allows you to apply dynamic calculations to your existing measures. For example, you can create a calculation group that applies time intelligence functions as calculation items, such as year-to-date, quarter-to-date, or month-to-date, to any measure in your model. Learn more at aka.ms/calculationgroups. To author these in Desktop, go to the model view and click the new “Model” tab of the Data pane. If you click the Calculation groups node, you have three options to create a new one: the ribbon button, the context menu, or the properties pane.


After clicking “New calculation group”, if the model property “discourage implicit measures” is not already turned on, you will be told that this setting needs to be on to create the calculation group.


Once it is turned on, the calculation group is created, and you land directly in the first calculation item, ready to define the DAX expression you want to apply to your existing measures.


You can alter this DAX expression in the DAX formula bar. Optionally, you can also add a dynamic format string to the calculation item from the properties pane.

New calculation items can be created from the context menu (right-click) of the Calculation items node or the calculation group, and in the properties pane of the Calculation items node.


Controlling the order of the calculation items can be done in the Calculation items node properties pane or using the context menu (right-click) of the calculation items themselves!


Known issues to be fixed before public release:

  1. Renaming a calculation item is not reflected in the report view slicer until another calculation item is added or deleted.
  2. Changing the order of calculation items in the properties pane moves focus away from the properties pane.
  3. Changing the precedence of calculation groups in the properties pane moves focus away from the properties pane.
  4. The “Is hidden” property shows “Mixed values.”
  5. The cursor is not placed in the calculation item on creation.

Another option now available from the model explorer is the ability to create relationships in the properties pane. Just as editing relationships in the properties pane lets you change tables and columns without previewing data or validating until you click Apply changes, you can now create relationships the same way. Simply choose New relationship from the Relationships node in the model explorer.


This will show an empty relationship properties pane to fill out and then apply when ready!


Learn more about adding and editing relationships at Create and manage relationships in Power BI Desktop – Power BI | Microsoft Learn.

Known issues to be fixed:

  1. Setting the relationship cardinality to one-to-many, even in a valid way, results in an error. This has been fixed, but the fix is waiting to be included in Desktop, which may be after the public release.

The model explorer also shows modeling features that do not yet have authoring paths in Desktop: perspectives and cultures. These still need to be authored outside of Desktop, either through external tools that use XMLA write or through XMLA directly. Learn more about XMLA write at Dataset connectivity and management with the XMLA endpoint in Power BI – Power BI | Microsoft Learn and about external tools at External tools in Power BI Desktop – Power BI | Microsoft Learn. Learn more about perspectives at Perspectives in Analysis Services tabular models | Microsoft Learn; they work well with the personalized visuals feature of Power BI reports (Let users personalize visuals in a report – Power BI | Microsoft Learn). Finally, cultures are the translation feature of semantic models; learn more at Translations in Analysis Services tabular models | Microsoft Learn.

These features are available in the latest version of Power BI Desktop. To use them, turn on the model explorer public preview switch under File > Options and settings > Options > Preview features, in the GLOBAL section.


Data connectivity

Snowflake (Connector Update)

The Snowflake connector has been updated to support better implementation of “LIMIT 1” queries, resulting in performance improvements.

 

Planview OKR (New Connector)

 

We are excited to release the new Planview OKR connector. Here are the release notes from the Planview team.

 

Planview Objectives and Key Results (OKRs) are an outcome-driven framework adopted by organizations who want to define key organizational goals and track progress toward achieving them. Defining OKRs creates organizational clarity by enabling organizations to answer the questions “Where do we want to go?” (objectives) and “How will we measure our efforts to get there?” (key results). OKRs can be created at different levels of an organizational structure – such as enterprise, portfolio, program, or team – and are connected using parent/child relationships. Linking organizational and team goals in a hierarchical way aligns work delivery to company strategy and provides a single line of sight into value delivered by the organization. Connect now to your OKR data with the custom connector Planview OKR.

 

BitSight Security Ratings (Connector Update)

 

The BitSight Security Ratings connector has been updated with minor bug fixes.

Starburst Enterprise (Connector Update)

 

The Starburst Enterprise connector has been updated. Here are the release notes from the Starburst team:

  • Added an optional Connection string (non-credential properties) field in the Advanced section to allow the use of other ODBC connection parameters.
  • Changed the data source display name from Starburst Enterprise to a value based on the connection details ({"Host":"sep.example.com", "Port":"443"}) to allow distinguishing between multiple Starburst Galaxy or Starburst Enterprise clusters connected as separate data sources.
  • Fixed an issue with port 8443 for OAuth 2.0 authentication.
  • Fixed query folding with timestamp columns.

Service

OneLake data hub – Quick access to your data

OneLake data hub is the central location for users to find and reuse existing organizational data. It allows users to browse through their data and discover insights that can help them make better decisions. We are happy to introduce some of the new features that we recently added to enhance the OneLake data hub discovery experience.

Explorer pane – Quick access

The Explorer pane enables users to navigate through the workspace hierarchy and scope the data items to a specific workspace. With the recent enhancements, we added a Quick access section at the top of the Explorer pane. The Quick access section contains the active workspace, pinned workspaces, and recently used workspaces.

Favorite items

Another new feature is favorite items. Users can now filter to view items they previously marked as favorites in Power BI. Favorite items can also be found within the data hub experience, and they show up across all experiences, including Home and Browse. This feature helps users keep track of the data items that are most important to them.


Visualizations

New visuals in AppSource

ValQ Plan

valQ Plan is a newer and re-architected Power BI-certified edition of valQ with numerous transformational updates and a simplified no-code experience.

valQ helps users build complex business plans, run what-if simulations, and create & compare budgets, forecasts, and scenarios – all within Microsoft Power BI. It can be used as a standalone, integrated business planning tool or in conjunction with your existing planning platforms.

YT video URL – https://www.youtube.com/watch?v=w4AFQzr58j4

Feature highlights:

  1. Niche use cases such as Scenario Planning, Strategic Planning or Value Driver Trees, and What-If Simulations.
  2. Typical planning use cases: financial planning & analysis (FP&A), 3-statement financial modelling, Supply Chain Demand Planning, etc.
  3. Excel-like grid input experience for your rolling forecasts with no-code auto forecasting algorithms, and goal seek.
  4. Integrated reporting: One-click IBCS templates for variance reporting, presentation mode with live charts and tables, enhanced export to Excel & PDF
  5. Intuitive modelling: Significantly enhanced formula editor experience, dynamic templating and more
  6. Time intelligence: Supports more than 12 periods allowing you to create 90-day rolling forecasts, 52-week forecasts, and even 10-year strategic plans.

ValQ Plan can be purchased directly from Microsoft AppSource.


Date Picker by Powerviz

The Ultimate Date Slicer for Power BI.

 

The Date Picker visual comes with a modern calendar view along with highly requested features like Presets, Pop-up mode, Default Selection, Themes, and more.

 

This is a must-have date slicer for all Power BI reports. It has rich formatting options to match your brand style guide and meet your business needs.

 

KEY FEATURES:

  • Display Mode: Choose between Pop-up and Canvas modes.
  • Presets: Many commonly used presets like Today, Last Week, YTD, and MTD, or create your own preset using a field.
  • Default Selection: Enforce the selection of the desired period when a user opens the report without any custom DAX.
  • Filter Type: Choose between Range and Start/End types.
  • Multiple Date Ranges: Flexibility to select multiple date ranges.
  • Themes: 15+ pre-built themes with full customization.
  • Holidays and Weekends: Multiple formatting options.
  • Import/Export JSON: Build templates and share your designs.

Many more features and customizable options.


Check out the video – Introducing Date Picker by Powerviz.

Get Powerviz Date Picker for FREE from AppSource.

Download the demo file here.

Step-by-step guide and detailed documentation of all features.

To learn more, visit Powerviz website.

Drill Down Network PRO: Show Categorical Relationships with Ease

Drill Down Network PRO by ZoomCharts is designed for effortlessly visualizing categorical data, automatically detecting relationships based on the category structure. Use your existing category-based data with a few adjustments and quickly create an interactive chart that makes relationships between data categories easy to read. Visit our site to learn more!

 

Main features:

 

  • Cross-chart filtering – eliminate slicers by selecting data points directly on the charts.
  • Category-based customization – apply an image to a node and choose its type, shape, color, and font.
  • Link styling – configure ‘from’ and ‘to’ decorations and show link value.
  • Display up to 9 data categories.
  • Touch-input device friendly – explore data on any device.

Popular use cases:

 

  • Accounting & Finance – show cost attribution.
  • Human Resources – analyze salary data by department.
  • Production – map production volumes by product or factory.
  • Sales & Marketing – visualize marketing campaigns.


ZoomCharts Drill Down Visuals are known for interactive drilldowns, smooth animations, and rich customization options. They support interactions, selections, custom and native tooltips, filtering, bookmarks, and context menu. Use them to create visually appealing and intuitive reports that business users will love on every device. 

 

Try Drill Down Network PRO now from AppSource!

TMap 2.1

The newly released TMap 2.1 adds new features for drilling down the Donut Map, Choropleth Map, Bar Chart Map, Pie Chart Map, and Stacked Bar Chart Map by georegion names.

Drilling down maps by georegion names doesn’t require preparing a hierarchy of pre-built polygon layers, saving data professionals time and cost when extracting deeper insights from the data at different levels of geographic regions.

Screenshot 1 (Drill-down donut map for company A’s sales in Asia by subregions)


Realigning georegions becomes much easier because you only need to change a georegion’s name at the lowest level; the polygon layers for upper levels are automatically generated during the drill-down process.

Screenshot 2 (Drill-down donut map for company A’s sales in Asia by realigned subregions)


You can download and try it from Microsoft AppSource (https://appsource.microsoft.com/en-us/product/power-bi-visuals/mylocsinc1648311649136.tmap?exp=ubp8).

To learn more about how to use it, please read the tutorials (https://www.mylocs.ca/tutorials.html#drilldown-donut-map-name).

Inforiver Enterprise brings writeback to Fabric Lakehouse & Warehouse

Inforiver now supports three new data writeback destinations from Power BI: Microsoft Fabric Lakehouse, Fabric Data Warehouse, and Dataverse, along with other popular data warehouses, data lakes, and databases. Here is our 2-minute demo and overview.


The latest Inforiver Enterprise release supports writeback to Fabric for these use cases:

  • Create business projections by entering/editing directly in Power BI reports.
  • Create & manage rolling forecasts by blending actual & forecast data series.
  • Create dynamic simulations at the enterprise, BU, or department level and roll up projections (to the top) or distribute projections (to granular dimensions) based on chosen allocation/distribution methods.
  • Create multiple scenarios, each containing its own set of simulations.
  • Write back data, including comments & threaded conversations
  • Create periodic report snapshots.
  • Support multiple data input types and writeback from multiple Power BI users in reading mode.

The above, combined with advanced audit, security & governance capabilities, make Inforiver the most advanced data input and write-back solution in the market.

Inforiver supports writeback in the following deployment configurations: (a) an Azure SaaS service managed by Inforiver, or (b) a deployment managed by customers in their private tenant. Visit our FAQ page to learn more.

For Fabric Writeback Proof of Concept (PoC), contact Inforiver here.

Synapse

Data Warehouse

V-Order write optimization

V-Order optimizes parquet files to enable lightning-fast reads by the Microsoft Fabric compute engines such as Power BI, SQL, Spark, and others. Warehouse queries in general benefit from faster read times with this optimization, while the parquet files remain 100% compliant with the open-source parquet specification. Starting this month, all data ingested into Fabric Warehouses uses V-Order optimization.

SKU guardrails for burstable compute

Synapse Data Warehouse on Microsoft Fabric provides burstable compute, giving you the flexibility for better performance under peak demand. SKU guardrails ensure that customers operate within the right boundaries for their capacity, preventing peak workloads from consuming all capacity units in a short duration.

To learn more about SKU guardrails for burstable compute, check out the blog post Data Warehouse SKU Guardrails for Burstable Capacity.

Data Science

Semantic Link (Public Preview)

We are pleased to introduce the Public Preview of Semantic Link, an innovative feature that seamlessly connects Power BI datasets with Synapse Data Science within Microsoft Fabric. As the gold layer in a medallion architecture, Power BI datasets contain the most refined and valuable data in your organization. With Semantic Link, we unlock this data’s potential beyond traditional business intelligence by making it accessible to notebooks and Python in Microsoft Fabric.
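To give a sense of the experience, here is a minimal, hedged sketch of reading dataset data from a Fabric notebook with Semantic Link. The dataset, table, measure, and column names are hypothetical, and the calls assume the sempy.fabric API as described in the Semantic Link preview documentation.

```python
# Minimal sketch of using Semantic Link (sempy) from a Fabric notebook.
# Dataset, table, measure, and column names below are hypothetical examples.
import sempy.fabric as fabric

# List the Power BI datasets visible from this workspace
print(fabric.list_datasets())

# Read a dataset table into a pandas-style FabricDataFrame
customers = fabric.read_table("Sales Analytics", "Customers")

# Evaluate an existing measure, grouped by a column from the model
revenue_by_country = fabric.evaluate_measure(
    "Sales Analytics",
    measure="Total Revenue",
    groupby_columns=["Customers[Country]"],
)
print(revenue_by_country.head())
```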

Real-time Analytics

KQL Database Capacity Reporting

A KQL database consumes capacity through operations that can be monitored using the Microsoft Fabric Capacity Metrics app.

KQL Database Consumption – this is the number of seconds your KQL database is active, multiplied by the number of virtual cores used by your database. For example, if your database uses 4 virtual cores and is active for 10 minutes, you consume 2,400 capacity unit (CU) seconds. An auto-scale mechanism determines the size of your KQL database, ensuring the most cost-optimized performance based on your usage pattern.
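As a quick worked example of that formula (an illustrative calculation only, not an official billing API):

```python
# Worked example of the KQL Database consumption formula described above.
vcores = 4            # virtual cores used by the database
active_minutes = 10   # time the database is active

cu_seconds = vcores * active_minutes * 60
print(cu_seconds)     # 2400 capacity unit (CU) seconds
```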

Read more in the blog: Understanding Fabric KQL DB Capacity | Microsoft Fabric Blog | Microsoft Fabric

KQL Database Auto scale algorithm – improvements

You don’t need to worry about how many resources are required to support your workloads in a KQL database. KQL Database has a sophisticated built-in auto-scaling algorithm that ensures the optimal amount of resources is allocated to support the workloads at minimum cost. The auto-scaling algorithm is multi-dimensional, based on the following dimensions:

  • Hot cache – how much data to store in the hot cache for immediate response. This metric is affected by the caching policy defined by the customer.
  • Memory – how much memory is required for metadata and data to achieve optimal query performance.
  • CPU usage – how much compute is needed to process queries, update policies, materialized views, and so on.
  • Ingestion – how much compute is needed for data ingestion, based on ingestion rates and load times.

Filtering and visualizing Kusto data in local time, with a special Power BI optimization

Datetime values in Kusto (aka ADX/KQL database in Fabric) are assumed to be in UTC.

There are good reasons why you should always keep it this way. On the other hand, in many cases you want to visualize datetime values in a specific time zone and filter the data using values expressed in local time. This is a valid approach, but it may lead to severe performance degradation if not done correctly.

We recently implemented optimizations that make such scenarios much more efficient. In addition, for the many users who use Power BI with a Fabric KQL database for time series analysis, here is the optimal recommendation: create a function that receives the time range as parameters, shifts it to UTC, filters the table, and shifts the filtered rows from UTC back to the same time zone.
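The recommendation above targets a server-side KQL function consumed by Power BI. As a rough illustration of the same principle from client code, here is a hedged Python sketch that shifts a local-time range to UTC before the filter is applied, so the predicate stays on the raw UTC datetime column; the cluster URI, database, table, and column names are placeholders.

```python
# Illustrative sketch only: convert a local-time range to UTC *before* embedding it
# in the KQL filter, so filtering happens on the raw UTC datetime column.
from datetime import datetime
from zoneinfo import ZoneInfo
from azure.kusto.data import KustoClient, KustoConnectionStringBuilder

local_tz = ZoneInfo("Europe/Paris")
start_local = datetime(2023, 10, 1, 8, 0, tzinfo=local_tz)
end_local = datetime(2023, 10, 1, 18, 0, tzinfo=local_tz)

# Shift the range boundaries to UTC; the query itself filters on UTC values only.
start_utc = start_local.astimezone(ZoneInfo("UTC"))
end_utc = end_local.astimezone(ZoneInfo("UTC"))

query = f"""
MyEvents
| where Timestamp between (datetime({start_utc:%Y-%m-%dT%H:%M:%SZ}) .. datetime({end_utc:%Y-%m-%dT%H:%M:%SZ}))
"""

kcsb = KustoConnectionStringBuilder.with_az_cli_authentication(
    "https://<your-cluster>.kusto.fabric.microsoft.com"
)
client = KustoClient(kcsb)
response = client.execute("MyDatabase", query)
print(response.primary_results[0])
```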

For more details and examples refer to the following blog: Filtering and visualizing Kusto data in local time

Eventstream UX Improvement on Event Processor

The Event Processor within Eventstream is a powerful no-code editor, enabling you to process and manage your real-time data streams efficiently. You can easily aggregate and filter data streams using temporal functions before they reach your lakehouse or Kusto database. The recent UX improvements introduce a full-screen mode, providing a more spacious workspace for designing your data processing workflows. The insertion and deletion of data stream operations have been made more intuitive, making it easier to drag and drop and connect your data transformations.

Eventstream Kafka Endpoints and Sample Code

We’ve expanded the Custom App feature with a range of new endpoints for sources and destinations. Now you can seamlessly connect your applications to Fabric Eventstream using protocols such as Event Hubs, AMQP, and Kafka. To simplify your setup, we’ve included sample Java code; simply add it to your application and you’ll be all set to stream your real-time events to Eventstream.
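As an example of the same idea in Python (the included sample code is Java), here is a hedged sketch using the kafka-python package. The bootstrap server, topic name, and connection string are placeholders that you would copy from the Eventstream custom app’s details pane, and the SASL settings follow the conventions of Event Hubs-compatible Kafka endpoints.

```python
# Illustrative sketch: publish events to an Eventstream custom-app Kafka endpoint
# with kafka-python. All endpoint, topic, and connection-string values are placeholders.
import json
from kafka import KafkaProducer

BOOTSTRAP_SERVER = "es-xxxxxxxx.servicebus.windows.net:9093"   # placeholder endpoint
TOPIC = "es_xxxxxxxx"                                          # placeholder topic name
CONNECTION_STRING = "Endpoint=sb://...;SharedAccessKeyName=...;SharedAccessKey=..."  # placeholder

producer = KafkaProducer(
    bootstrap_servers=BOOTSTRAP_SERVER,
    security_protocol="SASL_SSL",
    sasl_mechanism="PLAIN",
    sasl_plain_username="$ConnectionString",  # literal username for connection-string auth
    sasl_plain_password=CONNECTION_STRING,
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

producer.send(TOPIC, {"deviceId": "sensor-01", "temperature": 22.5})
producer.flush()
```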

Data Factory

Dataflow Gen2

Data connectivity

SAP HANA (Connector Update)

The update enhances the SAP HANA connector with the capability to consume HANA Calculation Views deployed in SAP Datasphere by taking into account SAP Datasphere’s additional security concepts. This enables consumption of Calculation Views in Datasphere and allows customers to connect to HANA Cloud views without the need for additional privileges on the _SYS_BI schema.

New certified connector: Emplifi Metrics

We are happy to announce the release of the new Emplifi Metrics connector. Please find release notes from the Emplifi team below:

“Integrating social media insights alongside the rest of your marketing or business intelligence data gives you a holistic understanding of your entire digital strategy, all in one place. With Emplifi Power BI Connector, you’ll be able to include social media data from the Emplifi Platform in your charts and graphs and combine them with other data you own.

The Power BI Connector is a layer between Emplifi Public API and Power BI itself. It helps you work with your data intuitively, directly in the Power BI tool. Most data and metrics available in the Emplifi Public API are also available in the Connector.

Please visit the official documentation for more information about Emplifi Public API and a list of available metrics. You will find it here: https://api.emplifi.io/.”

Bug fixes and reliability improvements

We would like to thank our community for reporting issues and providing feedback through the Data Factory Community Forum.

We continue to work on improving the overall experience and reliability of Dataflow Gen2 in Microsoft Fabric. With over 400 work items closed in the last month, you should notice improvements across the Dataflow Gen2 authoring and refresh experience.

We encourage you to visit our community forum and provide any feedback or inquire about any possible issues that you might have with Dataflow Gen2. Your feedback is helping us make this a better product each day.

Pipelines

Activities

AML activity

We’re excited to announce that the Azure Machine Learning activity is now available to use in your Data Factory data pipelines. In your pipeline, you can use the Azure Machine Learning activity to connect to your Machine Learning pipelines or enable batch prediction scenarios such as identifying possible loan defaults, determining sentiment, and analyzing customer behavior patterns.

Deactivate/reactivate activity state (Preview)

We’re excited to share that you can now deactivate one or more activities from a pipeline, allowing you to skip activities during pipeline validation and during pipeline runs. This will help to improve developer efficiency, allowing you to comment out parts of your pipeline without deleting anything from your pipeline canvas. Deactivated activities can be reactivated at any time. Learn more here.

Productivity

Category redesign of activities

We’ve redesigned the way activities are categorized to make it easier for you to find the activities you’re looking for with new categories like Control flow, Notifications, and more!

Copy runtime performance improvement

We’ve made improvements to the Copy activity runtime performance. According to our test results, users can expect the duration of copying from Parquet/CSV files into a Lakehouse table to improve by roughly 25%-35%.

Integer data type available for variables

We now support integer variables! When creating a new variable, you can choose to set the variable type to Integer, making it easier to use arithmetic functions with your variables.

Pipeline name now supported in System variables.

We’ve added a new system variable called Pipeline Name so that you can inspect and pass the name of your pipeline inside of the pipeline expression editor, enabling a more powerful workflow in Fabric Data Factory.

Support for Type editing in Copy activity Mappings

You can now edit column types when you land data into your Lakehouse table(s). This makes it easier to customize the schema of your data in your destination. Simply navigate to the Mapping tab, import the schemas if you don’t see any mappings, and use the drop-down lists to make changes.

Data Activator

Data Activator is now in public preview

Data Activator reached a big milestone this month with its release to public preview. This means Data Activator is now available to all Fabric users, without the need to sign up as a preview user. You can use Data Activator to drive alerts and actions from your Fabric data. Want to try it out right now? Open a Power BI visual and select the “trigger action” menu option to create a Data Activator alert:


Data Activator also works with real-time streaming data in Eventstream. To create a Data Activator alert on your Eventstream items, add a “reflex” destination:

To learn more about Data Activator, check out these links: 
