Fabric November 2024 Feature Summary
Welcome to the November 2024 update for Microsoft Fabric!
This month, we’re excited to bring you a host of new features and improvements designed to enhance your experience and productivity. From the introduction of Copilot in Power BI mobile apps to the new Fabric Databases, there’s something for everyone. Whether you’re looking to streamline your data analysis, improve your reporting capabilities, or simply stay up to date with the latest innovations, this update has you covered.
To learn more, read about all these announcements and more in Arun’s blog post, Accelerate app innovation with an AI-powered data platform | Microsoft Fabric Blog.
Be one of the first to use SQL database on Fabric
In this series on SQL database on Fabric, you will learn how Fabric brings together both transactional and analytical workloads, creating a truly unified data platform. You’ll also learn how developers can build reliable, highly scalable applications where cloud authentication and encryption are secured by default. Starting December 3rd, join us for six sessions with database experts and see just how easy it is to get started. Sessions are available live and on-demand. View the sessions and register for the series.
Don’t miss Microsoft Ignite 2024 and FabCon 2025
Attend Microsoft Ignite 2024 online for free November 19 – 21, 2024 to learn about the latest innovations in Data & AI. Join sessions that will cover solutions that help modernize and manage intelligent applications, safeguard data, and accelerate productivity.
Join us at FabCon Las Vegas from March 31 to April 2, 2025, for the ultimate Microsoft Fabric, Power BI, SQL, and AI community-led event. The conference features more than 144 sessions, 18 pre- and post-conference workshops, unique community experiences, a dedicated pre-day for partners, all-day Ask-The-Experts hours, 20+ expo booths, plus after-hours events and socials you don’t want to miss.
Contents
- Be one of the first to use SQL database on Fabric
- Don’t miss Microsoft Ignite 2024 and FabCon 2025
- Certifications
- Copilot and AI
- Reporting
- Modeling
- Define new measure in DAX query view quick queries
- Metric sets: a new era of metric management in Fabric (Preview)
- Performance improvements for models with calculation groups and format strings in Excel
- DLP policies restrict access action for semantic models (Preview)
- Semantic modeling in Visual Studio Code with the new TMDL extension (Preview)
- Developers + APIs
- Other
- Platform
- OneLake
- Mirroring
- Databases
- Data Warehouse
- Data Engineering
- Data Science
- Real-Time Intelligence
- Ingest & Process
- Announcing the general availability of Real-Time Hub
- Announcing the general availability of Enhanced Eventstream
- Announcing the general availability of connector sources in Eventstream
- Introducing Azure Service Bus Connector for Eventstream (Preview)
- New Fabric events (Preview)
- Eventstream Data Preview on database CDC sources
- Monitoring experience on connector sources with Runtime Logs and Data Insights in Eventstream
- Processing and routing events to Activator with Eventstream (Preview)
- Introducing Eventstream’s CI/CD support
- Automate Eventstream Item Operations with Eventstream REST APIs
- Stream Data to Eventstream Securely using Entra ID Authentication
- Analyze & Transform
- Visualize & Act
- Data Factory
- Table and partition refreshes added to semantic model refresh
- Import and export your Fabric Data Factory pipelines
- New connectors available
- Simplify data ingestion with Copy Job – CI/CD upsert & overwrite
- New capabilities in Copilot for Data Factory to efficiently build and maintain your Data pipelines
- OneLake datahub is now the OneLake catalog in Modern Get Data experience
- Dataflows now support CI/CD (Preview)
Certifications
Get certified in Microsoft Fabric – for free!
Get ready to fast-track your career by earning your Microsoft Certified: Fabric Analytics Engineer Associate certification. For a limited time, we will be offering 5,000 free DP-600 exam vouchers to eligible Fabric Community members. Complete your exam by the end of the year and join the ranks of certified experts. Don’t miss this opportunity to get certified.
Explore the newest Fabric certification for Data Engineers
We are excited to announce a brand-new certification for data engineers. The new Microsoft Certified: Fabric Data Engineer Associate certification will help you demonstrate your skills with data ingestion, transformation, administration, monitoring, and performance optimization in Fabric. To earn this certification, pass Exam DP-700: Implementing Data Engineering Solutions Using Microsoft Fabric, currently in beta.
Copilot and AI
Copilot in Power BI mobile apps (Preview)
We’re excited to announce the release of Copilot in Power BI mobile apps (Preview)! This new feature brings the power of AI directly to your fingertips, enhancing your mobile experience when you’re on the go, offering a quick and simple way to dive into your data.
With Copilot in Power BI mobile apps, you no longer need to dig through the data yourself. Copilot provides report summaries and insights that allow you to make informed decisions anytime and anywhere. Imagine a sales manager effortlessly pulling up an executive summary of the latest sales report with a single tap, or a maintenance technician getting real-time machine-performance insights while on the factory floor.
To start using Copilot in your mobile app, simply tap the Copilot button in the header of any report that meets the Copilot requirements in Power BI, and choose whether you want a summary or insights. Copilot will deliver a response based on your request. You can then copy and share the response or keep interacting with Copilot by choosing from the suggestions at the bottom. These suggestions can help you tweak the response or create new requests.
For more details about Copilot in Power BI mobile apps, check out our full blog post.
Copilot summaries in subscriptions (Preview)
Do you need to extract insights from Power BI reports delivered by email, or quickly digest a summary of a Power BI report? Subscribe to Copilot summaries for Power BI reports. This feature is available with Standard subscriptions and for reports in a Copilot-eligible capacity.
Learn more about using Copilot in Power BI and Fabric.
Set up the Copilot summaries for Power BI reports that you subscribe to as follows:
1. Select the ‘Subscribe’ option from the ribbon for the Power BI report that you are interested in.
2. Select ‘Standard Subscription’.
3. Create your subscription.
Learn more about creating report subscriptions.
4. Add a Copilot summary to the email delivered by the subscription. If you are eligible, your subscription will receive the Copilot summary by default.
Learn more about setting up Copilot summaries for subscriptions.
5. Select ‘Preview summary’ to view a sample of what the summary might look like.
6. Test your subscription by selecting ‘Send Now’ after you save the subscription.
Note: ‘Send Now’ will deliver the email with the Copilot summary to all recipients.
Email Sample:
Learn more about Copilot summaries in subscriptions from our documentation.
This feature will roll out gradually over the next few weeks and is not available in Gov clouds.
Copilot and AI demo
Reporting
Path Layer for the Azure Map visual
This month we’re introducing a new feature to the Azure Map visual that takes geospatial analytics to the next level — the Path layer. The Path layer provides users with the ability to visualize geographic connections between multiple points. Whether you’re managing logistics, analyzing network traffic, or tracking asset shipment across the globe, this feature allows you to visualize connections between multiple geographic points in an intuitive and interactive way.
The Path layer is ideal for several key scenarios, for example:
Network Analysis: For industries like telecommunications, the Path Layer enables you to map intricate network connections. It helps identify inefficiencies, monitor data flow, and strengthen critical infrastructure.
Flight Path Analysis: Airlines can leverage the Path Layer to visualize and analyze flight routes, improving air traffic management. It helps identify new route opportunities and enhances the overall passenger experience by optimizing existing routes.
To get started, add the location for each point using either a geocoded location field, such as city names, or latitude and longitude. Then, differentiate between the paths by adding a field to the Path ID field well and indicate the order of connection through the Point Order field well.
For example, you could create a map showing the path of two ships with the latitude and longitude of their positions for each point, a path field with a unique identifier for each ship, and a timestamp for each location to make sure the points are connected in the correct order.
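To make the expected data shape concrete, here is a minimal sketch of such a dataset built with pandas; the ship names, coordinates, and timestamps are made up purely for illustration.

```python
import pandas as pd

# Illustrative positions for two ships; each row is one point on a path.
# Latitude/Longitude provide the location, "Ship" feeds the Path ID field well,
# and "Timestamp" feeds the Point Order field well so points connect in order.
positions = pd.DataFrame({
    "Ship":      ["Aurora", "Aurora", "Aurora", "Meridian", "Meridian"],
    "Timestamp": pd.to_datetime([
        "2024-11-01 06:00", "2024-11-01 12:00", "2024-11-01 18:00",
        "2024-11-01 06:00", "2024-11-01 12:00",
    ]),
    "Latitude":  [47.61, 47.95, 48.42, 47.61, 47.30],
    "Longitude": [-122.33, -122.60, -122.91, -122.33, -122.10],
})

print(positions)
```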
You can also format the visual by controlling the color, transparency and width of the lines, and even turning off the bubble markers for each point. If you turn off the bubble layer, you’ll only see a bubble on hover showing you the closest point to your pointer location. Paths are interactive as well, so you’ll get tooltips on hover and be able to cross-highlight other visuals by clicking on points of the lines.
There are a couple of unique behaviors to be aware of with this new layer. First, when using a drill hierarchy with the path layer, the visual will automatically drill down to the lowest level and will not allow you to drill up, as points in the path would be aggregated at higher drill levels. Next, if you have a location that’s part of multiple paths, the bubbles for that location show up on top of each other. If you want to click on the bubbles underneath, just hover on the line associated with its path, and it will float to the top and be selectable. Lastly, you can further break down the paths by adding a legend, which will create unique lines for each legend value of a given path ID.
An additional point to consider: Currently, the path layer operates mainly in conjunction with the bubble layer. Once you add a path to your map, you’ll see that the filled, cluster bubbles, heat map, and 3D column layers are all disabled. Additionally, while you can use the path layer in conjunction with reference layers, the reference layer will be static. It’s currently unsupported to mix data bound reference layers with the path layer.
The path layer is still actively rolling out to all regions. Depending on what region your tenant is in, you might not see the path layer in the Power BI service through the weekend. Be sure to check the report after publishing, and if you don’t see the layer, it should be accessible within a week.
We’re excited to see what you create with this new path layer. Give it a try and let us know what features you’d like to see next!
Visual calculations (Preview)
The work on visual calculations continues as usual and this month we are adding a highly requested item: support for exporting! You can now export data from visuals that contain one or more visual calculations or hidden fields.
If you export data, hidden fields on a visual are not included in the export, unless you export the underlying data. The results of visual calculations are always included in the export, except when exporting underlying data, since visual calculations are not part of the underlying data as they only exist on the visual.
Learn more about visual calculations in our documentation.
‘Set Alert’ with Activator and Real-Time Intelligence (Generally Available)
Back in December we announced the preview of alerting capabilities within Power BI reports using Real-Time Intelligence Activator, part of Microsoft Fabric. We are excited to announce that this capability is now generally available!
With GA, you’ll be able to:
- Stay on top of your critical metrics by monitoring your business objects. You can track and analyze key business objects such as individual packages, households, refrigerators, and more in real-time, ensuring you have the insight needed to make informed decisions. Whether it’s understanding how individual instances of your business objects impact sales figures, inventory levels, or customer interactions, our monitoring system provides detailed insights, helping you stay proactive and responsive to changes in your business environment at a fine-tuned level of granularity.
- Unlock the full potential of creating business rules on your data with advanced data filtering and monitoring capabilities. This update offers a wide array of options for filtering, summarizing, and scoping your data, allowing you to tailor your analysis to your specific needs. You can set up complex conditions to track when data values change, exceed certain thresholds, or when no new data has arrived within a specified timeframe.
- Ensure your communications are perfect before hitting send by seeing a preview of your Email and Teams messages. This will allow you to see a preview of your message exactly as it will appear to the recipient. Review your content, check formatting, and make any necessary adjustments to ensure clarity. With this feature, you can confidently have Data Activator send messages on your behalf knowing they look just the way you intended.
- Set up rules that trigger automatically with every new event that comes in on your stream of data. Whether you need to send notifications or initiate workflows, this feature ensures that your processes are always up-to-date and responsive.
- We renamed our feature to help create clarity about what it is and what it does. If you are used to seeing Reflex, please note that it is now called Activator. The items you create to set up rules and actions are, therefore, activators.
- We also enabled capacity usage reporting to help you better understand your capacity consumption and future charges. Our billing is based on storage used for event retention and on compute resources: the number of rules running, the number of events per second ingested, and rule evaluations and activations.
For more on Activator meters and billing, stay tuned for the detailed RTI billing blog post coming soon.
You can learn more about the updates in GA through our blog. We continue to improve our capabilities, and we’d love to hear your feedback. Please share your ideas or suggestions.
Small multiples for the new card visual (Preview)
With this month’s update, we’re enhancing the Card visual with a new version that retains all familiar features and updates, while adding advanced functionality and an improved user experience with small multiples.
This new feature is currently in preview with the new Card visual, offering an excellent opportunity to experience the capabilities of the feature.
Small multiples are a series of similar card tiles displayed together in a grid format, each representing a different category or dimension of data, allowing for easy comparison and analysis across multiple fields.
This newly added feature enhances data organization, visual clarity, and performance, making it easier to analyze and present data effectively. To try it, navigate to Options and settings > Options > Preview features > New card visual, and make sure it’s enabled.
Another advantage of the new Small multiples feature is the extensive customization it offers, including:
- Small multiples layout: Choose from single column, single row, or grid, and customize the number of small multiples, rows, or columns displayed.
- Advanced formatting options: Enhanced features such as font styles, color-coding, and conditional formatting.
- Border and gridlines: When enabled, individual controls for borders and gridlines permit the customization of style, width, color, and transparency.
- Overflow style: Options include continuous scroll or paginated, to smoothly navigate through multiple cards without overwhelming visual space.
- Headers: Choose from horizontal or vertical orientation, top or left position, customizable alignment, font, color, transparency, padding, plus background color or image.
To create a card visual with Small multiples, first select the Card (new) icon from the visual gallery on the Build visual tab in the Visualizations pane, then select some data fields from the data model to add them to the data field well.
To categorize your cards using small multiples, choose a data field from the data model and add it to the Small multiples data field well.
This new feature provides extensive customization options, such as layout, advanced formatting options, conditional formatting, borders and gridlines, overflow style, and customizable headers.
Small multiples for the Card visual in Power BI offer another great enhancement that significantly improves data organization, visual clarity, and performance.
The Core Visuals team continues to add new features and greater functionality, and we’re committed to improving our capabilities. We invite you to explore this new feature and share your feedback with us in the comment section below as we continue to improve our Card visual capabilities.
For more information, we encourage you to visit the Core Visuals blog on LinkedIn.
New visual – text slicer (Preview)
Introducing the new text slicer, now available in our core visuals gallery.
This month brings the arrival of the new text slicer in Power BI, offering new possibilities for both users and the organization.
Enable the new text slicer by navigating to Options and settings > Options > Preview features > Text slicer visual, make sure it’s selected, and restart Power BI.
The text slicer works by allowing users to input specific text that acts as a filter, targeting a designated data field. By entering the desired text in the slicer’s input box, the slicer effectively narrows down the dataset to display only the relevant information that contains the entered text. This functionality is particularly useful for handling large datasets, where quick and precise filtering is essential for efficient data analysis and presentation.
To create a text slicer visual, select the text slicer icon from the visual gallery on the Build visual tab in the Visualizations pane. This adds a visual placeholder to the report canvas.
To filter a dataset, add a text field from the data model to the Field well to establish the text slicer’s functionality, allowing it to filter the dataset based on user input. Simply add text to the slicer’s input box, select the apply icon, or press enter, and the slicer immediately filters the dataset, displaying results on the visual.
As shown here, the new text slicer introduces a powerful and customizable filtering tool in Power BI:
- Improved user experience: The text slicer provides users with a straightforward and efficient method to filter data.
- Unmatched customization: It offers numerous options for users to tailor their filter experience to their needs and preferences.
The Core Visuals team is dedicated to enhancing our features and functionality continuously. We are committed to advancing our capabilities and highly value your feedback. Kindly share your insights regarding this capability in the comments section below.
For more information, we encourage you to visit the Core Visuals blog on LinkedIn.
Reporting demos
Modeling
Define new measure in DAX query view quick queries
Creating measures in DAX query view just became even easier. The quick queries option, available from the context menu of tables, columns, or other items in the Data pane, now includes Define new measure.
This creates a new query with the DEFINE MEASURE syntax already started, ready for you to fill in your own DAX formula and run when ready.
Learn more about DAX query view and the other quick queries available at DAX query view – Power BI | Microsoft Learn.
Metric sets: a new era of metric management in Fabric (Preview)
The preview of metric sets is now officially available for both service and desktop. This is a transformative new feature designed to redefine how organizations manage and consume metrics.
Metric sets live in the Metrics Hub in Power BI and bring powerful capabilities to streamline metric management, ensure consistency, and foster trust in data across your organization.
Metric sets will be available for both consumers to browse, and creators to use in reporting. A service experience consisting of visualized metrics and data exploration will allow end users to answer their data questions. The desktop experience will allow creators to connect to the most authoritative metrics to visualize in reports.
Key Features:
- Curated Collection of Metrics: metric sets will serve as a collection of measure pointers to source semantic models and include key dimensions so end users and authors alike can unambiguously understand how a metric should be grouped or used.
- Rich Consumption Experiences: Users can explore and consume metrics from the metric set itself, allowing for deep insights and understanding. Copilot summaries and multiple visuals will be available for users to scroll through and go from data to insights in seconds.
- Efficiency: Consumers no longer need to rely on report creators to answer questions or build custom reports for specific needs. Consumers can leverage the Explore dialog to dig deeper into a given metric in an environment where everything in the data pane ‘just works’ because the dimensions have been curated specifically for the metric.
- Discoverability and Reuse:
- Consumers – Metrics are discoverable via search, and metric sets can be promoted, endorsed, and certified just like any artifact so that users trust them.
- Authors – Metrics in Desktop: In the November release of Desktop, metric sets will be available to connect to and use in Desktop reporting. You can access the metric you want to include in your model via the OneLake data hub / data catalog and connect there. This ensures your reports use the most up-to-date and authoritative measures available.
Stay tuned for the upcoming milestones and get ready to transform your metric management experience with Metrics Hub!
Performance improvements for models with calculation groups and format strings in Excel
We’re excited to announce significant performance improvements for MDX queries on models with Calculation Groups and Format Strings!
The latest changes should greatly improve the performance and reliability of operations in Analyze in Excel on models that include one or both of:
- Dynamic Format Strings for Measures.
- Calculated Items with Format Strings.
This extends to other MDX scenarios as well, so all client applications that use MDX to query semantic models with the above will experience the same performance benefits.
DLP policies restrict access action for semantic models (Preview)
Purview data loss prevention policies for Fabric now enable admins to restrict access based on the sensitive information detected within their semantic models’ data.
When Purview compliance admins configure DLP policies for Fabric, they can now decide whether, upon detecting sensitive information, they would like to block access to the data. They have the option to prevent guest users from accessing the data or to restrict access for all users except the data administrator.
In Fabric, data admins will see an indication that their data is restricted and can take action, such as reporting an issue to the compliance admin or overriding the policy rule.
Consumers, such as guest users who have now been restricted from seeing this information, also see an indication letting them know that an organizational policy revoked their access; if they attempt to view its content, they will not be able to.
With restrict access action, compliance admins get further control and enforcement when uncovering sensitive data in their Fabric tenant.
Semantic modeling in Visual Studio Code with the new TMDL extension (Preview)
Power BI developers, the new TMDL Extension for Visual Studio Code in public preview enhances your TMDL editing experience, boosting semantic model development.
The Tabular Model Definition Language (TMDL) is designed to make model representations readable, editable, collaborative, and reusable. The TMDL Extension builds on these strengths of TMDL with several key features that create a rich development experience:
- Semantic Highlighting: Improves readability by applying different colors to parts of your code based on meaning, making it easier to understand the structure and functionality of your TMDL at a glance.
- Error Diagnostics: Helps you identify and fix issues in your code by clearly highlighting errors and providing you with detailed messages that guide you on how to resolve them.
- Autocomplete: Offers intelligent suggestions while you type to speed up your workflow, reduce the chance of errors, and help you understand your code options.
- More features on the way!
By working in Visual Studio Code, you can also take advantage of other fantastic tools on the platform such as:
- Source Control: Seamless integration with Git, allowing you to track changes, collaborate with team members, and version control your semantic models.
- GitHub Copilot: An AI coding assistant that will help you write code faster, generate TMDL from natural language, and quickly apply advanced bulk edits to your models.
Download the TMDL Extension on the Visual Studio Marketplace and see how you can accelerate your semantic model development today!
Modeling demos
Developers + APIs
Fabric Git: TMDL format for semantic model export
As part of our commitment to providing a developer-friendly experience that enhances team collaboration, Fabric Git integration will begin exporting semantic model definitions as Tabular Model Definition Language (TMDL) in January 2025. This change will replace the use of a single JSON file (model.bim) with Tabular Model Scripting Language (TMSL).
Due to its folder representation and readable format, TMDL offers a significantly improved source control experience. This enhancement facilitates tracking commit history and simplifies the resolution of merge conflicts, particularly when compared with TMSL.
If necessary, you can continue to obtain the TMSL representation of your semantic model by using the Get Semantic Model Definition REST API or XMLA Endpoint.
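As an illustration, here is a minimal sketch of fetching the TMSL definition with the Get Semantic Model Definition REST API from Python; the workspace/model IDs and bearer token are placeholders you would supply yourself, the response shape follows our reading of the API reference, and long-running-operation handling is elided.

```python
import base64
import requests

WORKSPACE_ID = "<workspace-id>"   # placeholder
MODEL_ID = "<semantic-model-id>"  # placeholder
TOKEN = "<bearer-token>"          # e.g. acquired via MSAL / azure-identity (not shown)

url = (f"https://api.fabric.microsoft.com/v1/workspaces/{WORKSPACE_ID}"
       f"/semanticModels/{MODEL_ID}/getDefinition?format=TMSL")
resp = requests.post(url, headers={"Authorization": f"Bearer {TOKEN}"})
resp.raise_for_status()
if resp.status_code == 202:
    raise SystemExit("Long-running operation; poll the Location header (not handled in this sketch)")

# Definition parts come back base64-encoded; decode model.bim.
for part in resp.json()["definition"]["parts"]:
    if part["path"].endswith("model.bim"):
        print(base64.b64decode(part["payload"]).decode("utf-8")[:500])
```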
Semantic model client library updates
Client applications, such as Excel or Power BI Desktop, connecting to Power BI semantic models now benefit from better performance due to an automatic conversion of legacy connection strings (e.g. pbiazure://*) to the XMLA endpoint. Requests are routed directly through the XMLA endpoint, reducing intermediary steps, speeding up request processing, and decreasing the likelihood of errors.
You may need to update your firewall rules. See the troubleshooting document for details.
Please ensure that you are using the latest Analysis Services client libraries for optimal performance when connecting to Power BI semantic models.
Visualizations
KPI by Powerviz
KPI by Powerviz (Power BI Certified) is a powerful custom visual for Power BI that allows users to create eye-catching, advanced Key Performance Indicators (KPIs).
Key Features:
- 100+ prebuilt KPI templates within the visual, plus the option to create your own templates.
Design:
- 16 layers and 40+ chart variations to create infographic designs.
- Rich customization, formatting options, and color styles.
- Create KPI objects in layers, combining charts, metrics, and icons.
Analytical:
- Data Visualization Types:
  - Categorical: Compare values across categories.
  - Comparison: Analyze differences between values.
  - Composition: Show parts of a whole.
  - Progression: Display trends over time.
  - Actual vs Target: Compare actual against targets.
- Formatting Features: Configure the Ranking, Sorting, Axis, Number-Formatting, Tooltip, Gridlines, Data Labels and Series Labels for visuals.
- IBCS Theme Support: Includes deviation bars, series labels, and consistent color scheme.
- Small Multiples: Support for all chart types – Fixed/Fluid with change chart feature.
Other features include multi-categories comparison, Highlight values, Layer Flexibility, and more.
Business Use Cases:
Sales Performance, Financial Health, Customer Satisfaction.
- Try KPI Visual for FREE!
- Check out all features of the visual
- Step-by-step instructions
- YouTube Video Link
- Learn more about visuals
- Follow Powerviz
Zebra BI Tables 7.3
With Zebra BI Tables 7.3, users can harness the power of a rich text editor to create and update visual comments with remarkable efficiency. This feature empowers you to style and format your text, add bullet points, and insert hyperlinks, making your report a one-stop shop for the entire team by linking out to the reports and documentation different stakeholders might need. Well-structured comments can streamline communication within your reports, enabling readers to quickly grasp essential insights.
By emphasizing what’s important and explaining why it matters, you guide your audience towards critical information and promote clarity and understanding. This clarity is crucial in any business environment, where time is often limited, and strategic decisions must be made swiftly. Effective comments reduce the time and effort required to generate actionable insights, which ultimately improves report quality and effectiveness.
Incorporating thoughtful commentary can transform a standard report into a powerful tool for decision-making. With Zebra BI Tables, enhancing your reports with meaningful comments has never been easier — all so you can communicate your message more effectively and engage your audience better.
Learn more from our video example of the rich text editor in Zebra BI Tables 7.3.
Waterfall PRO by ZoomCharts: the most interactive waterfall visual for financial data
Waterfall PRO by ZoomCharts is the most user-friendly and insightful way to visualize financial data, combining incredible user experience with customizability and powerful features. It also seamlessly cross-filters data across multiple visuals, allowing you to create truly interactive Power BI reports.
Main Features:
- Custom Sequence: Have full control over the column order with the Sequence field.
- Drill Down: Use multiple categories to enable drill down directly on the waterfall chart.
- Automatic Subtotal Calculation: Display subtotals even if you don’t have them in your data.
- Rich Customization: Customize X and Y axes, legends, tooltip content, and adjust the appearance settings for positive, negative and total columns separately
- Thresholds: Display up to four constant or dynamic thresholds as lines or areas.
- Cross-chart filtering: Dynamically filter data across multiple visuals.
Get Drill Down Waterfall PRO on AppSource
Lollipop bar chart by Nova Silva
We’re thrilled to continue receiving your valuable feedback, and we appreciate your contributions in helping us improve our visuals.
In our latest Lollipop Bar Chart release for Power BI, we’ve added a much-requested feature: secondary markers. This allows you to display not only the primary value but also add context by including a secondary value marker.
This new feature integrates seamlessly with all other Lollipop Bar Chart functionalities, such as transforming the Lollipop Bar Chart into a dot plot by removing the connecting bars, as shown in the second image. This also removes the requirement to start your numeric scale at 0, allowing you to have a closer look at the values and their differences.
While standard bar charts are great for comparing a single measure across categories, they can become cluttered with larger datasets (>10 categories). The colored bars may fill too much of the chart space. To address this, the Lollipop Bar Chart offers a cleaner, more efficient alternative, minimizing clutter without sacrificing clarity.
Try the Lollipop Bar Chart for FREE now on your own data by downloading it from the AppSource.
Questions or remarks? Visit us at: https://visuals.novasilva.com/.
Sales velocity chart
The Sales Velocity chart is a unique tool for analyzing product sales and profitability in specific countries. It uses a combination of pie charts, needles, and color coding to visually represent key metrics.
Key Features:
- Needles: Length indicates sales percentage; width reflects profit margin.
- Pie & Circle Size: Reflects overall current sales in a country.
- Color Coding: Green (high profit), Yellow (moderate), Red (low profit).
- Sales Trend Dot: Gray (no data), Red (decreasing sales), Green (increasing sales).
Benefits:
- Visual Clarity: Easy to understand data representation.
- Dynamic & Scalable: Handles large datasets and adapts to screen size.
- Interactive Features: Tooltip displays details, premium options offer filtering and logo removal.
Use Cases:
- Businesses can identify top sales regions and areas needing improvement.
- Financial analysts can pinpoint high and low profit contributors.
Note:
For more information, visit our website.
Read this Sales Velocity Chart documentation.
For any queries, questions, or requests, please write to us.
Donut Chart by JTA
An innovative visualization tool that segments data into three clear categories: Positive, Neutral, and Negative. This format is particularly effective for sentiment analysis, offering clear insights into the overall distribution of opinions or data points.
Enhance your data visualization effortlessly with this versatile tool.
Key Features:
- Personalize Colors: Tailor the look of your chart by adjusting the color scheme of each slice to reflect your brand or style.
- Customize Text: Make it uniquely yours! Modify titles, legends, values, and percentages, adjusting margins, colors, fonts, and alignments to perfectly match your design preferences.
- Shape the Visual: Personalize the entire chart—adjust the circumference, tweak the colors, and refine the overall look and feel to suit your needs.
- Target Comparison: Easily compare your metrics against specific targets for clearer insight.
- Icon Customization: Set your own indicators! Choose custom icons to represent performance below or above your target.
- Conditional Formatting: Effortlessly apply color-coded formatting to highlight how values measure up against their target.
- Animation Control: Smooth transitions! Enable or disable animations to enhance or streamline your visual experience.
Download Donut Chart by JTA for free: AppSource
Try Donut Chart by JTA: Demo
Learn more about us: JTA The Data Scientists
New book: Data Visualization with Microsoft Power BI
We recommend the new book ‘Data Visualization with Microsoft Power BI’ by Alex Kolokolov & Maxim Zelensky, the first book that delivers DataViz best practices for Power BI!
- 25 chapters about different chart types.
- 40 visuals: from default to advanced from the AppSource gallery.
- 400 color pages of exceptional quality.
The book is suitable for non-technical professionals as well as for experienced data analysts. It consists of 3 parts:
- Classic Visuals – The authors explain how to choose charts for basic types of analysis, avoid common mistakes, set up interactions, and put visuals together on a dashboard.
- Trusted Advanced Visuals – Different options and data requirements for waterfall and bullet charts, Gantt, tornado, funnel, Sankey, etc.
- Risky Advanced Visuals – ‘Eye-catching’ charts that may confuse the average user. We explain use cases and offer simpler alternatives.
Book features:
- Beautiful examples, specific use cases for charts.
- Step-by-step guides on how to set each chart up in the app.
- Data preparation tips and tricks.
- Quizzes to consolidate the learning material.
“I want to inspire people to use Power BI for more than just reporting. I want them to create brilliant dashboards and tell interactive data stories!” – Alex Kolokolov
The book is now available on Amazon.
Other
Support for Power BI language settings when a paginated report is viewed on the Power BI service
When a localized paginated report is published to the Power BI service, the viewer of the report will now see it in the preferred language that they have selected on the Power BI / Fabric settings page. Previously, the rendering of the report was determined by the server settings.
Learn more about viewing localized paginated reports on the Power BI service.
Platform
Introducing OneLake catalog
OneLake catalog is the next evolution of the OneLake data hub, providing a unified experience where data engineers, data scientists, analysts, and decision-makers can browse, manage, and govern all their data from a single, intuitive location. The OneLake catalog now includes various item types in Fabric, such as dashboards and reports (available by the end of November), dataflows, pipelines, and more.
Streamlined for collaboration
OneLake catalog offers filtering capabilities to help users find specific items efficiently. Business users can uncover reports and dashboards to answer their questions, while analysts can explore data items and processes for deeper analysis.
In-place data management
OneLake catalog allows you to view and manage any item directly within the catalog itself, simplifying navigation and enhancing efficiency. This contextual management ensures that you can handle your data ecosystem more effectively.
In-depth item metadata
Clicking on any item in the OneLake catalog reveals relevant metadata, including descriptions, tags, endorsement, and sensitivity labels. The catalog also provides a granular view of your data items’ schemas and objects (e.g. warehouse tables and views), enabling better insight and control.
Unified management and governance
OneLake catalog combines crucial functionalities such as cross-workspace item lineage, access permissions, and real-time activity monitoring within a single interface. This unified approach makes governance and management tasks more accessible for every user.
Explore the catalog today
Explore OneLake catalog to experience the future of data management in Fabric. OneLake catalog is available in more than 40 scenarios where users connect to data within Fabric. It is also accessible in services and applications outside of Fabric itself, such as Power BI Desktop and the Azure portal, with plans to expand to Excel soon. To learn more about this update, see the detailed information covering all the features and benefits of the catalog in depth.
Tenant switcher control
The tenant switcher is now available in the Fabric portal. Users with access to more than one Fabric tenant can easily switch between tenants directly from the account manager in the top right corner of the Fabric portal. This is in addition to the existing From External Orgs tab that can be found in the home page of the Power BI experience.
Automate GitHub integration with Microsoft Fabric REST APIs
Introducing the new REST APIs for Git integration with GitHub! These APIs enable you to automate Git integration tasks, such as connecting to GitHub, retrieving connection details, committing changes to your connected GitHub repository, updating from the repository, and more.
For more information about the APIs and available code samples, see the Git integration REST API documentation.
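As an example, here is a minimal sketch of connecting a workspace to a GitHub repository and committing pending changes through these APIs from Python; the IDs, token, and repository details are placeholders, and the request bodies follow our reading of the API reference (additional steps, such as initializing the connection and configuring Git credentials, may be required).

```python
import requests

BASE = "https://api.fabric.microsoft.com/v1"
WORKSPACE_ID = "<workspace-id>"                 # placeholder
HEADERS = {"Authorization": "Bearer <token>"}   # token acquisition not shown

# Connect the workspace to a GitHub repo (details are illustrative).
connect_body = {
    "gitProviderDetails": {
        "gitProviderType": "GitHub",
        "ownerName": "contoso",
        "repositoryName": "fabric-items",
        "branchName": "main",
        "directoryName": "/",
    }
}
requests.post(f"{BASE}/workspaces/{WORKSPACE_ID}/git/connect",
              headers=HEADERS, json=connect_body).raise_for_status()
# Note: a newly connected workspace may also need POST .../git/initializeConnection (not shown).

# Retrieve the connection details to verify.
conn = requests.get(f"{BASE}/workspaces/{WORKSPACE_ID}/git/connection",
                    headers=HEADERS).json()
print(conn)

# Commit all pending workspace changes to the connected branch.
commit_body = {"mode": "All", "comment": "Automated commit from CI"}
requests.post(f"{BASE}/workspaces/{WORKSPACE_ID}/git/commitToGit",
              headers=HEADERS, json=commit_body).raise_for_status()
```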
Switch branches from the source control pane
Switching connected Git branches is now available directly through the source control pane. All branching actions can now be accessed in one place within the branches tab.
This allows you to:
- Branch out to new workspace: create a new workspace with a new connected branch.
- Checkout branch: create a new branch while keeping the current workspace state; useful for resolving conflicts.
- Switch branch: replace the current workspace content with another branch, new or existing.
Learn more about these actions.
Announcing general availability of the Fabric Workload Development Kit
The Microsoft Fabric Workload Development Kit is now generally available. This feature allows Fabric to be extended with additional workloads and offers a robust developer toolkit for designing, developing, and interoperating with Microsoft Fabric using frontend SDKs (Software Development Kits) and backend RESTful APIs (Application Programming Interfaces). See the feature blog to learn more.
This release includes new features and enables users to start using Partner Workloads, which will be available in the workload hub in the coming weeks.
OneLake
External data sharing is now generally available
The external data sharing feature announced earlier this year is now generally available. External data sharing enables the sharing of OneLake tables and folders across tenant boundaries. In the current release, each share may include a single folder or a table from a Lakehouse. In the coming releases, support will be added for multiple folders and tables in a single share as well as sharing from Warehouses and Eventhouses.
For more information check out the documentation.
Mirroring
Introducing Open Mirroring
Introducing Open Mirroring, our new Mirroring capability. When we created Microsoft Fabric, we designed our platform to be extensible, customizable, and open. With that in mind, Open Mirroring, now in preview, is a powerful feature that enhances Fabric’s extensibility by allowing any application or data provider to bring their data estate directly into OneLake with minimal effort.
By enabling data providers and applications to write change data directly into a mirrored database within Fabric, Open Mirroring simplifies the handling of complex data changes, ensuring that all mirrored data is continuously up-to-date and ready for analysis.
For those looking to expand their data processing and analytics capabilities within Microsoft Fabric, Open Mirroring brings a flexible and powerful solution to ensure your data remains in sync, accessible, and analytics-ready within OneLake.
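To sketch what “writing change data” looks like: per the Open Mirroring documentation, a provider drops Parquet files plus a _metadata.json (naming the key columns) into a table folder in the mirrored database’s landing zone, with an optional __rowMarker__ column marking inserts, updates, and deletes. The snippet below only builds such a file pair locally; uploading to the OneLake landing zone path (shown in the Fabric portal) is omitted, and the exact contract should be checked against the docs.

```python
import json
import pandas as pd

# One batch of change data for a hypothetical "customers" table.
# __rowMarker__ (per the Open Mirroring docs): 0 = insert, 1 = update, 2 = delete.
changes = pd.DataFrame({
    "CustomerId":    [101, 102, 103],
    "Name":          ["Ada", "Grace", "Alan"],
    "__rowMarker__": [0, 0, 1],
})

# Files in a table folder are consumed in name order; zero-padded sequence names keep that order.
changes.to_parquet("00000000000000000001.parquet", index=False)

# _metadata.json declares the key column(s) used to apply updates and deletes.
with open("_metadata.json", "w") as f:
    json.dump({"keyColumns": ["CustomerId"]}, f)
```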
Our Fabric partners such as Striim, OCI GoldenGate, and MongoDB already have capabilities to integrate with Open Mirroring, with DataStax integration coming soon. This enables any organization to leverage a broader ecosystem of tools, enriching their data processing and analytics within the Fabric environment.
Learn more about Open Mirroring in the Introducing Open Mirroring in Microsoft Fabric blog post.
Fabric Database Mirroring Public REST APIs are now generally available
Announcing the general availability of Fabric Database Mirroring public REST APIs. Users can now utilize Microsoft Fabric REST APIs to perform CRUDL operations:
- Create a new Mirrored database in your Fabric workspace.
- Read existing Mirrored database to get the definition of the item.
- Update Mirrored database with changes to the definition.
- Delete existing Mirrored database to clean up your workspace.
- List all Mirrored databases in a workspace to get all available mirrored databases in your workspace.
With a Mirrored database ID, you can also get the mirroring status of the Mirrored database and of its tables. In addition, you can start and stop existing mirrored databases with the public REST APIs as well; a minimal sketch follows.
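This sketch shows the list, status, and stop/start calls from Python; the IDs and token are placeholders, and the endpoint shapes follow our reading of the REST reference.

```python
import requests

BASE = "https://api.fabric.microsoft.com/v1"
WS = "<workspace-id>"                           # placeholder
HEADERS = {"Authorization": "Bearer <token>"}   # token acquisition not shown

# List all mirrored databases in the workspace.
dbs = requests.get(f"{BASE}/workspaces/{WS}/mirroredDatabases", headers=HEADERS).json()
for db in dbs.get("value", []):
    print(db["id"], db["displayName"])

MDB = "<mirrored-database-id>"  # placeholder

# Get overall mirroring status, then per-table status.
status = requests.post(f"{BASE}/workspaces/{WS}/mirroredDatabases/{MDB}/getMirroringStatus",
                       headers=HEADERS).json()
tables = requests.post(f"{BASE}/workspaces/{WS}/mirroredDatabases/{MDB}/getTablesMirroringStatus",
                       headers=HEADERS).json()
print(status, tables)

# Stop (and later restart) mirroring.
requests.post(f"{BASE}/workspaces/{WS}/mirroredDatabases/{MDB}/stopMirroring", headers=HEADERS)
requests.post(f"{BASE}/workspaces/{WS}/mirroredDatabases/{MDB}/startMirroring", headers=HEADERS)
```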
To learn more, read the mirrored database REST API documentation.
Mirroring for Azure SQL Database now Generally Available
Mirroring for Azure SQL Database is now generally available. Mirroring is a simple, free and frictionless way to replicate a snapshot and incremental data changes from Azure SQL database to Fabric OneLake with data sync in near-real time.
With the GA release, the following new features are now available:
- Support for Truncate Table in source database when Mirroring is active
- Fixes for issues related to schema hierarchy and column mapping in the Data Warehouse and Lakehouse experiences
To learn more, read Announcing the general availability (GA) of Fabric Mirroring for Azure SQL Database.
Introducing Mirroring for Azure SQL Managed Instance (Preview)
The Preview of Mirroring for Azure SQL Managed Instance is now available. Mirroring is a simple, free and frictionless way to replicate a snapshot and incremental data changes from Azure SQL Managed Instance to Fabric OneLake with data sync in near-real time.
Fabric Mirroring offers a great alternative to running a project to set up an ETL process just to enable insights into an operational database used in an analytics scenario.
You can set up Fabric Mirroring in just a few steps: choose the tables to mirror, and the data will start flowing. Changing which tables are mirrored takes just a few clicks, and at any point it is easy to see the status of replication for all mirrored tables. All this setup, management, and monitoring is integrated directly into the Fabric UI.
Before Mirroring, ETL setups would require additional tooling for data replication and expert staff to set up, configure, monitor, and maintain the ETL, and this is just to keep the replication going. Any changes to replication would again require queuing up and waiting for ETL experts to modify your pipelines.
To learn more about this new and exciting capability of Mirroring for Azure SQL Managed Instance in Microsoft Fabric, please read more in the blog.
Databases
Introducing Fabric SQL database (Preview)
SQL database is now available as a native solution in Microsoft Fabric (Public Preview) and is the first database offering to land in the new databases workload. This new offering is seamlessly integrated with the Fabric platform and includes unified billing through the Capacity units (CU) model. Whether you are working on small or large analytics projects, we have heard your feedback: you need database support in Microsoft Fabric.
With the addition of Fabric databases, we are evolving Microsoft Fabric from an analytics platform into a data platform. Fabric now has everything you need for your GenAI apps: operational database support, analytical storage, real-time intelligence for data in motion, and top-tier visualization. This week, we also announced the preview of a new vector type and functions in Fabric SQL database and Azure SQL Database, making building AI apps much simpler. We have samples for how you can easily integrate with frameworks like LangChain, Semantic Kernel, and more.
You can get started with SQL database in Fabric today. For more information, please see the Announcing Fabric SQL database Preview.
Data Warehouse
Cold query performance improvement
Running a query with a cold cache presents several challenges. When data is not cached, it must be fetched from OneLake and transcoded from parquet file format structures into in-memory structures for query processing. This process can be time-consuming and impact overall performance.
With our latest improvement, we have optimized both fetching data from storage and the transcoding process, observing a median cold-query overhead reduction of 40%.
Service principal support for Fabric Data Warehouse
We’ve made a major enhancement in the way you can authenticate and manage your Fabric Data Warehouses: the introduction of service principal (SPN) support. This new feature empowers developers and administrators to automate processes, streamline operations, and increase security for their data workflows.
Earlier we launched service principal support for various Microsoft Fabric items, including Lakehouses and Eventhouses. Now, this support extends to Fabric Data Warehouses, making it easier to connect, manage, and deploy warehouse solutions in a secure, scalable way without needing to rely on user identities.
The feature provides the following benefits:
- Automation-friendly API Access: You can now create, update, read, and delete Warehouse items via Fabric REST APIs using service principals.
- Seamless Integration with Client Tools: You can use tools like SQL Server Management Studio (SSMS) to connect to your Fabric Data Warehouses using service principals and run T-SQL features like COPY INTO (see the connection sketch after this list).
- Granular Access Control: Using T-SQL commands like GRANT, administrators can assign specific permissions to service principals to control precisely which data and operations an SPN has access to.
- Improved DevOps and CI/CD Integration: By using service principals, developers can automate the deployment and management of data warehouse resources in their DevOps and Continuous Integration/Continuous Deployment (CI/CD) pipelines to ensure rapid and reliable delivery of data solutions.
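To make this concrete, here is a minimal sketch of connecting to a Fabric warehouse as a service principal from Python with azure-identity and pyodbc; the server, database, and app registration values are placeholders, and the token packing follows the standard ODBC access-token pattern.

```python
import struct
import pyodbc
from azure.identity import ClientSecretCredential

# Placeholder app registration and warehouse details.
cred = ClientSecretCredential(tenant_id="<tenant-id>",
                              client_id="<app-id>",
                              client_secret="<secret>")
token = cred.get_token("https://database.windows.net/.default").token

# Pack the token the way the ODBC driver expects (SQL_COPT_SS_ACCESS_TOKEN = 1256).
token_bytes = token.encode("utf-16-le")
token_struct = struct.pack(f"<I{len(token_bytes)}s", len(token_bytes), token_bytes)

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 18 for SQL Server};"
    "SERVER=<your-connection-string>.datawarehouse.fabric.microsoft.com;"
    "DATABASE=<warehouse-name>;Encrypt=yes;",
    attrs_before={1256: token_struct},
)
cursor = conn.cursor()
cursor.execute("SELECT TOP 5 name FROM sys.tables")  # any T-SQL the SPN is granted
for row in cursor.fetchall():
    print(row)
```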
Data Engineering
Notebook display chart upgrade
The new and improved chart view is the latest enhancement to our notebook display. This update is designed to provide a more intuitive and powerful experience for visualizing your data by leveraging the built-in visualization tool in Fabric notebooks.
Key Features:
- Multiple charts view: Now you can add up to 5 charts in one display() output widget, allowing you to create multiple charts based on different columns, and compare charts easily!
- Rich chart recommendations: Get chart suggestions when creating a new chart or clicking the suggestion button; it’s easy to get started with the rich chart templates, summarized titles, and insightful key-value recommendations.
- Advanced chart editing: You can add, rename, and delete charts, and configure chart options. Many new configurations are provided in this upgrade, such as chart title and subtitle, legend, theme, labels, etc. All your configurations are saved immediately.
- Global Configuration: Easily filter and apply custom ranges to your data. These settings will be applied to both tables and charts.
- Interactive Toolbar: Hover over a chart to access a toolbar for exploring the chart, like zoom in, zoom out, select to zoom, reset, panning, etc. Toolbar settings won’t be saved, allowing for temporary adjustments.
Benefits of the Enhanced Chart View:
- Improved Data Visualization: The new chart view offers a more dynamic and interactive way to visualize your data, making it easier to identify trends and insights.
- User-Friendly Interface: The enhancements provide a seamless experience, allowing you to switch between table and chart views effortlessly.
- Customization Options: With the ability to configure chart options and apply global settings, you can tailor the visualizations to meet your specific needs.
- We’ll gradually add more advanced chart types based on the new UX framework; stay tuned!
Getting Started:
To access the Enhanced Chart View, just open your Fabric notebook and run the display(df) statement. If you’re seeing the legacy UX, use the switch to go to the new UX.
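For example, a minimal cell like this (with a made-up DataFrame) renders the interactive table and chart widget:

```python
import pandas as pd

# Any DataFrame works; display() is built into Fabric notebooks
# (it is not defined outside the notebook environment).
df = pd.DataFrame({
    "month":   ["Jan", "Feb", "Mar", "Apr"],
    "sales":   [120, 135, 128, 160],
    "returns": [8, 11, 7, 12],
})

display(df)  # switch to the chart view and add up to 5 charts in the output widget
```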
Fabric API for GraphQL is now generally available with exciting new features
The Microsoft Fabric API for GraphQL is now generally available, marking a significant milestone in providing powerful, flexible, and efficient data access APIs in Fabric.
In addition to important features made available last month (service principal support and code generation from the API editor), this release introduces several new capabilities aimed at enhancing your experience and expanding what you can achieve with your GraphQL API in Fabric, making it easier to harness the power of Fabric data in your business applications.
- New data sources: Azure SQL and Fabric SQL DB (Preview) integration for seamless data access.
- Access data sources with connections and saved credentials: Enhanced security and simplified access management.
- Logging and Monitoring Dashboard: Visual insights into API activity and detailed logging for better performance monitoring and troubleshooting.
- CI/CD Support: Git Integration and Deployment Pipelines for consistent and automated deployments.
You can find more information about these exciting new features in our GA announcement blog.
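As a quick illustration, here is a hedged sketch of querying a Fabric GraphQL API endpoint from Python; the endpoint URL comes from the API item in the Fabric portal, the token scope is an assumption, and the query fields (a hypothetical “customers” type) depend entirely on the schema your API exposes.

```python
import requests
from azure.identity import InteractiveBrowserCredential

# Endpoint URL copied from the API for GraphQL item in the Fabric portal (placeholder).
ENDPOINT = "https://api.fabric.microsoft.com/v1/workspaces/<ws-id>/graphqlapis/<api-id>/graphql"

# Scope is an assumption; adjust to what your tenant/app registration requires.
token = InteractiveBrowserCredential().get_token(
    "https://api.fabric.microsoft.com/.default").token

# Field names below are hypothetical; use the types your API actually exposes.
query = """
query {
  customers(first: 5) {
    items { CustomerId Name }
  }
}
"""

resp = requests.post(ENDPOINT,
                     headers={"Authorization": f"Bearer {token}"},
                     json={"query": query})
resp.raise_for_status()
print(resp.json()["data"])
```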
Esri’s ArcGIS GeoAnalytics integration with Fabric Spark (Preview)
Esri is recognized as the global market leader in geographic information system (GIS) technology, location intelligence, and mapping, primarily through its flagship software, ArcGIS. Esri empowers businesses, governments, and communities to tackle the world’s most pressing challenges through spatial analysis and location insight.
Microsoft and Esri have collaborated to integrate spatial analytics in Fabric, with a preview set to launch soon. Our collaboration with Esri will introduce cutting-edge visual spatial analytics right within Microsoft Fabric Spark notebooks and Spark job definitions (across both Data Engineering and Data Science experiences).
This integrated product experience empowers Spark developers and data scientists to natively use ArcGIS capabilities to run GeoAnalytics functions and tools within Fabric Spark for transformation, enrichment, and pattern/trend analysis of data across different use cases, without any need for separate installation and configuration.
One example is transforming data with an ArcGIS spatial function to uncover a pattern of interest, for instance summarizing the total number of policies for insured properties by hexagonal bins.
Another example is understanding the impact of natural hazards or current events on insured properties by bringing in a dataset with probabilities of hurricane-force winds and spatially joining it with insured properties. The spatial join links insured properties with wind-speed probabilities, so that for each property we know the likelihood of hurricane-force winds and can run predictive models to assess potential insurance claims.
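The shape of such a spatial join in a Fabric Spark notebook might look like the sketch below. Note this is illustrative only: the geoanalytics_fabric module and the ST function names are hypothetical stand-ins based on ArcGIS GeoAnalytics conventions, not confirmed API; check the documentation for the actual names.

```python
# Illustrative only: module and function names below are hypothetical stand-ins,
# not confirmed ArcGIS GeoAnalytics for Fabric API.
from geoanalytics_fabric.sql import functions as ST  # hypothetical import

# `spark` is the session provided in a Fabric notebook.
# Point geometries for insured properties, polygon geometries for wind-probability zones.
properties = (spark.read.table("insured_properties")
              .withColumn("geometry", ST.point("longitude", "latitude")))
wind = spark.read.table("hurricane_wind_probabilities")  # has a polygon "zone" column

# Spatial join: attach the wind-speed probability of the containing zone to each property.
joined = properties.join(wind, ST.contains(wind["zone"], properties["geometry"]))

# Each property now carries its probability of hurricane-force winds,
# ready as a feature for downstream claims-risk models.
display(joined.select("property_id", "wind_probability"))
```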
To learn more about ArcGIS GeoAnalytics integration within Microsoft Fabric Spark, please refer to the documentation.
Jar libraries are now supported in Fabric Environments
Java Archive (JAR) files are a popular packaging format used in the Java ecosystem. They allow developers to bundle multiple files—such as Java class files, metadata, and resources—into a single, compressed archive for distribution. JAR files simplify the distribution and execution of Java applications and libraries by consolidating everything into one file, which can be easily shared and managed.
Previously, integrating JAR files into Fabric required inline commands within notebooks. This approach, while functional, posed a reproducibility challenge. Now, you can upload your JAR files as custom libraries to an Environment. These custom libraries take effect in notebooks and Spark jobs once the Environment is attached. Embracing JAR files within Fabric Environments can streamline your development and deployment processes, enhancing the overall efficiency and scalability of your applications.
Support of spaces and special characters in Delta table names
Support of spaces and special characters in Delta table names in Microsoft Fabric is now available!
This is a highly desired enhancement requested by Fabric customers. Now, you can name Delta tables using spaces, special characters, and the encoding of your natural language in all Fabric experiences. Everything will work: Spark, Lakehouse, Notebooks, Warehouse, Power BI, shortcut creation, metadata discovery, etc. Some restrictions apply; learn all about them in the documentation. The feature will be available over the next weeks across all Fabric regions worldwide.
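For instance, in a Fabric notebook you can now quote a table name containing spaces with backticks in Spark SQL; the table and values here are made up for illustration.

```python
# `spark` is the session provided in a Fabric notebook.
# Create and query a Delta table whose name contains spaces (Spark SQL backtick quoting).
spark.sql("CREATE TABLE IF NOT EXISTS `Sales Orders 2024` (OrderId INT, Amount DOUBLE) USING DELTA")
spark.sql("INSERT INTO `Sales Orders 2024` VALUES (1, 99.90), (2, 45.50)")
display(spark.sql("SELECT * FROM `Sales Orders 2024`"))
```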
Data Engineering demos
Data Science
Introducing low code AutoML
AutoML, or Automated Machine Learning, is a process that automates the time-consuming and complex tasks of developing machine learning models. It simplifies the workflow by handling data preprocessing, feature engineering, model selection, and hyperparameter tuning, allowing users to focus on interpreting results and making decisions.
We are introducing the new low code AutoML user experience in Fabric, designed to empower analysts and data scientists to quickly prototype and build machine learning models with ease. This innovative interface supports a variety of tasks, including regression, forecasting, classification, and multi-class classification.
Getting started with the AutoML user experience is incredibly simple. Users can begin with an existing experiment, model, or notebook. All it takes is selecting the relevant files or tables from your lakehouse and specifying the desired ML task. For those who want more control, there are optional configurations available. You can choose your parallelization mode, deciding whether to train one Spark-based model at a time or to parallelize trials with Pandas by distributing them across all nodes on your Spark cluster. Additionally, the auto-features setting enables AutoML to generate useful features for model training.
One of the key features of the AutoML experience is its integration with MLflow. All generated models are tracked using MLflow and the existing Experiment items. This allows users to monitor all the details, such as metrics, parameters, model types, and model files, making it easy to compare different models generated from the AutoML trial.
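As a code-first counterpart to the low-code experience, here is a minimal sketch using FLAML (the library behind Fabric AutoML) with MLflow autologging; the dataset is synthetic, the time budget is arbitrary, and the experiment name is a placeholder.

```python
import mlflow
from flaml import AutoML
from sklearn.datasets import make_classification

mlflow.autolog()  # track trials, metrics, and models in the experiment

X, y = make_classification(n_samples=500, n_features=10, random_state=42)

automl = AutoML()
automl.fit(X_train=X, y_train=y, task="classification", time_budget=60)  # seconds

print(automl.best_estimator, automl.best_config)

# Compare the logged trials afterwards (experiment name is a placeholder).
runs = mlflow.search_runs(experiment_names=["<your-experiment>"])
print(runs.head())  # columns include the logged params and metrics
```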
Learn more about Automated Machine Learning in Fabric.
Data Science demo
Real-Time Intelligence
Real-Time Intelligence is now generally available (GA)! Announced at Build 2024, Real-Time Intelligence includes a wide range of capabilities across ingestion, processing, analysis, transformation, visualization and taking action. All of this is supported by the Real-Time hub, the central place to discover and manage streaming data and start all related tasks.
This month includes a wide range of improvements, read on for more information on each capability and stay tuned for a series of blogs describing the features in more detail.
Please submit any feedback on our features at RTI ideas.
Ingest & Process
Announcing the general availability of Real-Time Hub
Fabric Real-Time Hub is now generally available! It is the one enterprise-wide catalog that enables users to discover, connect to, explore, and act upon streaming data and events from anywhere. Seamless integration with all Real-Time Intelligence services, like Fabric Eventstream, Eventhouse, and Activator, greatly accelerates time to insights.
Fabric Real-Time Hub was originally released to Public Preview at //Build 2024 and has since become one of the most broadly adopted features within the Real-Time Intelligence suite. At the same time, our customers continue to give us valuable feedback to make Real-Time Hub even better. And we are listening!
Here are some of the recent improvements:
- Simplified Azure Event Hubs source connection: We have simplified the experience when connecting to an existing Azure Event Hub. For users with the right permissions to access the available Azure Event Hubs, a single click is all it takes for Fabric to automatically establish the connection to the source.
- New Sources: Azure Service Bus, Apache Kafka, CDC from SQL Server on VM DB and CDC from Azure SQL Managed Instance are added to the “Connect data source” options.
- Rich Sample Scenarios: For users who are new to Fabric Real-Time Intelligence, we provide three streaming data samples for you to get started.
- Streams and KQL tables with read (or higher) permission: Users can discover streams and KQL tables that they have read (or higher) access to within Real-Time Hub, which allows them to discover more data streams shared with them.
- Generate Real-Time Dashboards (preview): Users can now quickly and automatically create Real-Time Dashboards by selecting ‘Create Real-Time dashboards’ on KQL tables. This Copilot-assisted feature takes user input and generates the most common real-time dashboards within seconds.
- Fabric Events (preview): Customers will soon be able to build event-driven applications, trigger Notebooks and workflows, or send emails and Teams messages when OneLake files/tables are created, deleted, or renamed (OneLake events) and when jobs start or complete (Job events).
- Explore Data action on KQL tables (coming soon): Customers will soon be able to explore the data in their KQL tables with a no-code experience, letting them interact with the data without leaving the context they are in.
- Azure Data Explorer (ADX) Database Shortcut (coming soon): Customers will soon be able to create database shortcuts to their ADX clusters. This will allow customers to manage their ADX clusters more efficiently, directly from Fabric.
Real-Time Hub serves as the starting point for your Real-Time Intelligence journey. Please try it out and send us feedback through the Ask Fabric Real-Time Hub alias, askrth@microsoft.com.
Announcing the general availability of Enhanced Eventstream
Enhanced Eventstream is now generally available! This offers new features that improve your experience in building stream flows within Fabric Real-Time Intelligence. The enhancements include Edit and Live View modes, Default and Derived Streams, and Smart Routing, transforming how data engineers handle real-time data streams with greater flexibility and efficiency.
- Edit Mode and Live View: Eventstream now offers two separate modes, Edit mode and Live View, to give you flexibility and control over your data streams. Edit mode lets you design and modify your data streaming flow without interrupting the active data streams. Live View gives real-time insight into the data flow, allowing you to monitor the ingestion, processing, and distribution of data streams within Fabric. You can switch between the two modes using the button in the top-right corner.
- To learn more, visit: Edit and publish Microsoft Fabric eventstreams – Microsoft Fabric | Microsoft Learn
- Default and Derived Streams: A data stream is a continuous flow of dynamic data that lets you set up real-time alerts and feed different types of data stores. A default stream is created automatically when a streaming source is added to Eventstream, capturing raw event data directly from the source and preparing it for transformation or analysis. A derived stream is a specialized stream that users can set up as a destination within Eventstream. After operations such as filtering and aggregating, the derived stream is ready for further analysis or consumption by other organization members through the Real-Time Hub.
- To learn more, visit: Create default and derived Fabric eventstreams – Microsoft Fabric | Microsoft Learn
- Content-based Routing: Customers can now design stream operations directly within Eventstream’s Edit mode, transforming and routing real-time data streams. It lets you create stream processing logic and direct data streams based on their content, right in the Eventstream editor.
Announcing the general availability of connector sources in Eventstream
Connector sources in enhanced Eventstream are now generally available! This feature enables seamless connection of external real-time data streams to Fabric, allowing for an optimal out-of-the-box experience and more choices for real-time insights from a variety of sources. It supports well-known cloud services like Google Cloud and Amazon Kinesis, as well as database change data capture (CDC) streams through our new messaging connectors. These connectors use Kafka Connect and Camel Kafka connectors for a flexible approach to data integration, ensuring broad connectivity across leading platforms. Additionally, Debezium is integrated for precise CDC stream capture.
Below is the list of generally available connector sources:
- Confluent Cloud Kafka
- Amazon Kinesis Data Streams
- Google Cloud Pub/Sub
- Amazon MSK Kafka
- Azure SQL Database Change Data Capture (CDC)
- Azure SQL Managed Instance (CDC)
- SQL Server on VM DB (CDC)
- PostgreSQL DB (CDC)
- Azure Cosmos DB (CDC)
- MySQL DB (CDC)
To learn more about configuring these sources, visit: Add and manage eventstream sources – Microsoft Fabric | Microsoft Learn. To request new connector sources, please contact askeventstreams@microsoft.com.
Introducing Azure Service Bus Connector for Eventstream (Preview)
Many enterprise customers rely on Azure Service Bus as a key message broker for managing queues and publishing subscribe topics. They want to integrate their messaging infrastructure with Fabric to enable seamless data streaming, high-performance processing, and real-time dashboards.
Now, we’re introducing the Azure Service Bus Connector for Eventstream! This connector allows you to stream messages from Azure Service Bus topics and queues directly into Eventstream. Once the messages are in Eventstream, you can process them in real time and route them to multiple destinations within Fabric. This new connector simplifies the integration process and empowers real-time, scalable data streaming from your Azure messaging sources.
Below, you will find how to add an Azure Service Bus source in Eventstream’s edit mode.
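On the producer side, here is a minimal sketch of what feeding that source can look like, assuming the azure-servicebus Python SDK; the connection string, queue name, and payload are placeholders:

```python
# Sending a message to the Service Bus queue that the Eventstream connector
# reads from; connection string, queue name, and payload are placeholders.
import json
from azure.servicebus import ServiceBusClient, ServiceBusMessage

CONN_STR = "<service-bus-connection-string>"

with ServiceBusClient.from_connection_string(CONN_STR) as client:
    with client.get_queue_sender(queue_name="telemetry") as sender:
        event = {"deviceId": "sensor-42", "temperature": 21.7}
        sender.send_messages(ServiceBusMessage(json.dumps(event)))
```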
New Fabric events (Preview)
New Fabric event categories, namely OneLake events and Job events, will be available in Preview in Real-Time Hub at the end of November. These events can be used for real-time alerting and data processing through Reflex triggers, and can be sent to other destinations via Eventstream.
OneLake events alert you when changes occur in your OneLake, for example when new files or folders are created or deleted. You can use these events to automate workflows, such as triggering a Data pipeline via Reflex.
Job events provide detailed information about various job activities and statuses within Fabric, for example the status when a Data pipeline or Notebook runs. These events can include updates on job initiation, completion, failures, and any intermediary states or changes.
Learn more about Fabric events.
You can incorporate the two new Fabric events into Eventstream as a source if you want to direct these events to various destinations, including Eventhouse, Lakehouse, or your custom application via Eventstream’s custom destination endpoint. To learn more about how to add and configure these sources, please visit Add and manage eventstream sources – Microsoft Fabric | Microsoft Learn.
Eventstream Data Preview on database CDC sources
The enhanced Eventstream now includes Data Preview for database CDC sources. This feature allows you to view a snapshot of your data from the source in both Edit mode and Live View mode. In Eventstream’s Edit mode, the data preview captures a snapshot from the CDC source you configured, enabling you to infer the schema for configuring subsequent operators or destinations without having to publish it first and then return to Edit mode. You can access the data preview in the ‘Test result’ tab at the bottom pane by selecting the source node on the canvas in Edit mode.
Similarly, in Live View mode, the data preview provides a snapshot from your sources so that you can understand what the data inside them looks like.
The supported connector sources are:
- Azure SQL Database Change Data Capture (CDC)
- Azure SQL Managed Instance (CDC)
- SQL Server on VM DB (CDC)
- PostgreSQL DB (CDC)
- Azure Cosmos DB (CDC)
- MySQL DB (CDC)
Monitoring experience on connector sources with Runtime Logs and Data Insights in Eventstream
Eventstream now offers Runtime Logs and Data Insights for the connector sources in Live View mode. With Runtime Logs, you can examine detailed logs generated by the connector engines for the specific connector, which assist in identifying failure causes or warnings. You can access this feature in the bottom pane of Eventstream by selecting the relevant connector source node on the canvas in Live View mode.
Data Insights provides metrics from the connector engine, helping users monitor the connector sources’ status and performance. The Source Incoming/Outgoing Events metrics display the number of records polled or produced by the task assigned to the specified source connector in the worker.
To learn more, please visit: Monitoring status and performance of an Eventstream item
Processing and routing events to Activator with Eventstream (Preview)
Fabric Activator (previously Data Activator) is a no-code experience for automatically taking action when patterns or conditions are detected in data. You use the activator item (previously reflex item) to manage the rules and actions. Fabric event streams, represented by the ‘Eventstream’ item under Real-Time Intelligence, aim to establish a centralized place on the Fabric platform for seamlessly capturing real-time events from diverse sources, transforming them, and routing them to various destinations.
Now, Eventstream supports processing and transforming events with business requirements before routing the events to the destination: Activator. When these transformed events reach Activator, you can establish rules or conditions for your alerts to monitor the events. To add this destination, simply choose Activator from the Destination menu in the ribbon while in Edit mode.
To learn more about configuring this destination, visit: Add an Activator destination to an eventstream
Introducing Eventstream’s CI/CD support
Collaborating on data streaming solutions can be challenging, especially when multiple developers work in the same environment. Conflicts, versioning issues, and deployment inefficiencies often arise. The integration of Fabric CI/CD tools for Eventstream in Microsoft Fabric has been developed to address these challenges and improve team collaboration.
Fabric offers a complete CI/CD experience with a variety of tools, including Git integration and Deployment pipelines. By integrating Eventstream with these tools, developers can efficiently build and maintain Eventstream items end-to-end in a web-based environment, while ensuring source control and smooth versioning across projects.
Key features include:
- Git Integration for Eventstream: Developers can collaborate freely using versioning and branching with their favorite Git tools, e.g., GitHub and Azure DevOps, preventing conflicts and enabling seamless teamwork.
- Deployment Pipeline for Eventstream: Accelerate and standardize Eventstream deployments to various stages, such as testing and production, with minimal manual effort in the Fabric UI.
Below, you will find how to commit an Eventstream change to a git repository:
With these powerful CI/CD capabilities, you can streamline your development workflow for Eventstream, isolate your development environments, and collaborate effortlessly with your team. Experience faster, more reliable development with Eventstream’s CI/CD support.
Automate Eventstream Item Operations with Eventstream REST APIs
Introducing Eventstream REST APIs! These APIs allow you to automate and manage Eventstream items programmatically, simplifying CI/CD workflows and making it easier to integrate Eventstream with external applications.
With Eventstream REST APIs, you can:
- Automate Eventstream deployments within your CI/CD pipeline.
- Perform full CRUD (Create, Read, Update, Delete) operations on Eventstream items programmatically.
- Seamlessly integrate Fabric Eventstream into external applications.
- Scale your streaming solutions quickly and efficiently.
By leveraging these REST APIs, you can create fully automated workflows that enhance the quality, reliability, and productivity of your Eventstream items.
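As a hedged sketch of what such automation can look like, the generic Fabric Items REST API below creates and lists Eventstream items; the workspace ID and bearer token are placeholders:

```python
# Create and list Eventstream items through the Fabric Items REST API.
# Workspace ID and bearer token are placeholders (e.g., obtain a token with
# the azure-identity library).
import requests

BASE = "https://api.fabric.microsoft.com/v1"
WORKSPACE_ID = "<workspace-id>"
HEADERS = {"Authorization": "Bearer <token>"}

# Create a new Eventstream item in the workspace.
create = requests.post(
    f"{BASE}/workspaces/{WORKSPACE_ID}/items",
    headers=HEADERS,
    json={"displayName": "orders-stream", "type": "Eventstream"},
)
create.raise_for_status()

# List the workspace's Eventstream items.
items = requests.get(
    f"{BASE}/workspaces/{WORKSPACE_ID}/items",
    headers=HEADERS,
    params={"type": "Eventstream"},
).json()
for item in items.get("value", []):
    print(item["id"], item["displayName"])
```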
Stream Data to Eventstream Securely using Entra ID Authentication (Coming Soon)
Introducing Entra ID authentication for Eventstream’s Custom Endpoint! This feature enhances security by allowing users to stream data to Eventstream without relying on SAS keys or connection strings, reducing the risk of unauthorized access. Entra ID authentication ties user permissions directly to the Fabric workspace access, ensuring that only authorized users can access the workspace and stream data to Eventstream. Check out the screenshot below to see how this feature appears in Eventstream’s Custom Endpoint!
Additionally, Tenant Admins now have the option to disable Eventstream’s key-based authentication in tenant settings, further securing your eventstreams by enforcing the use of Entra ID authentication only.
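Here is a minimal sketch of what key-less streaming can look like from Python, assuming the custom endpoint's Event Hubs-compatible details from the Eventstream UI and the azure-eventhub and azure-identity SDKs; the namespace and entity names are placeholders:

```python
# Streaming to an Eventstream custom endpoint with an Entra ID token instead
# of a SAS key; namespace and entity name come from the endpoint's details
# pane and are placeholders here.
from azure.eventhub import EventHubProducerClient, EventData
from azure.identity import DefaultAzureCredential

producer = EventHubProducerClient(
    fully_qualified_namespace="<eventstream-namespace>.servicebus.windows.net",
    eventhub_name="<eventstream-entity>",
    credential=DefaultAzureCredential(),  # token tied to your Entra identity
)
with producer:
    batch = producer.create_batch()
    batch.add(EventData('{"status": "ok"}'))
    producer.send_batch(batch)
```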
Analyze & Transform
Eventhouse monitoring (Preview)
Fabric workspace monitoring is the centralized logging solution for Fabric. Workspace monitoring is designed to provide a seamless and consistent monitoring experience with end-to-end visibility across all Fabric items.
Workspace monitoring is based on the Real-Time Intelligence Eventhouse KQL database. Once workspace monitoring is enabled, a KQL database is created to store the event logs of all workspace items. KQL databases are ideal for time-series logs and metrics monitoring solutions.
For each one of the supported items, one or more events or metrics tables are created. Here you can see the tables supporting Eventhouse query, command and ingestion monitoring, and semantic model query logs.
Eventhouse monitoring offers five events and metrics tables:
- EventhouseQueryLogs – logs all Eventhouse KQL queries.
- EventhouseCommandLogs – logs all Eventhouse commands.
- EventhouseDataOperations – logs all successful data operations, including batch ingestions, streaming seal operations (operations that store streaming data to database extents), materialized view updates, and update policy table updates.
- EventhouseIngestionResultLogs – logs all successful and failed ingestions.
- EventhouseMetrics – a set of metrics that provide in-depth monitoring of ingestions, materialized views, and continuous exports.
Users can explore and directly query workspace monitoring tables using KQL or SQL, with example queries available in the documentation. Here is an example of monitoring queries stored in an Eventhouse KQL QuerySet:
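The same queries can also be run programmatically. Below is a minimal sketch using the azure-kusto-data client against the monitoring database's query URI; the cluster URI, database name, and the Timestamp/Status column names are assumptions to adapt to your workspace:

```python
# Querying a workspace monitoring table with the azure-kusto-data client.
# The cluster URI, database name, and the Timestamp/Status columns are
# assumptions; check the monitoring database schema in your workspace.
from azure.kusto.data import KustoClient, KustoConnectionStringBuilder

kcsb = KustoConnectionStringBuilder.with_az_cli_authentication(
    "https://<monitoring-eventhouse>.kusto.fabric.microsoft.com"
)
client = KustoClient(kcsb)

QUERY = """
EventhouseQueryLogs
| where Timestamp > ago(1h)
| summarize Queries = count() by Status
"""
for row in client.execute("<monitoring-database>", QUERY).primary_results[0]:
    print(row["Status"], row["Queries"])
```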
The workspace monitoring solution centrally monitors all the Power BI reports, semantic models, and Eventhouse items created in the workspace. In some cases, Power BI semantic models read data from KQL database sources; either way, semantic model queries are logged in the workspace monitoring solution regardless of the Power BI report’s data source.
Real-Time Dashboards can be created on top of the workspace monitoring KQL database, providing an easy graphical monitoring experience. Real-Time Dashboard templates can be imported to provide an out-of-the-box monitoring experience.
In this example Real-Time Dashboard, you can see semantic model CPU usage monitoring, with a direct link to the underlying KQL queries being run by the Power BI report. Users can troubleshoot issues by correlating events across semantic models and their underlying databases.
Users can troubleshoot spikes and activities and investigate who is consuming these resources. They can easily zoom into the exact spike time, determine which user or application generated the usage peak, and then drill down to the specific query log record.
In summary, workspace monitoring offers a centralized monitoring solution, allowing users to efficiently monitor and troubleshoot their workspace items. Specifically for Eventhouse, query, command, and ingestion events and metrics logging enable advanced Eventhouse monitoring capabilities.
Learn more about Eventhouse Monitoring
Eventhouse Query Acceleration for Shortcuts (Preview)
Shortcuts are embedded references within OneLake that point to data in other storage locations without moving the original data.
Previously, you could create a shortcut to OneLake delta tables from Eventhouse and query the data, but performance lagged behind direct ingestion into Eventhouse, as shortcut queries lacked Eventhouse’s powerful indexing and caching capabilities.
Query acceleration indexes and caches data landing in OneLake on the fly, allowing customers to run performant queries on large volumes of data. Customers can use this capability to analyze real-time streams coming directly into Eventhouse and combine it with data landing in OneLake either coming from mirrored databases, Warehouses, Lakehouses or Spark.
Customers can expect significant improvements by enabling this capability, in some cases up to 50x and beyond.
How to enable Query Acceleration?
You will now see an option to enable Acceleration while creating a new shortcut from Eventhouse.
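Once accelerated, the shortcut is queried like any other external table in KQL. Here is a hedged sketch reusing the Kusto Python client; the cluster, database, table, and column names are all placeholders:

```python
# Combining live Eventhouse data with an accelerated OneLake shortcut, which
# surfaces as an external table in KQL. All names below are placeholders.
from azure.kusto.data import KustoClient, KustoConnectionStringBuilder

kcsb = KustoConnectionStringBuilder.with_az_cli_authentication(
    "https://<your-eventhouse>.kusto.fabric.microsoft.com"
)
client = KustoClient(kcsb)

QUERY = """
external_table('MirroredOrders')
| join kind=inner (LiveTelemetry | where ingestion_time() > ago(10m)) on DeviceId
| take 10
"""
print(client.execute("<database>", QUERY).primary_results[0])
```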
Learn more about Real-Time Intelligence
Synapse Data Explorer to Eventhouse migration (Preview)
Synapse Data Explorer (SDX), part of Azure Synapse Analytics, is an enterprise analytics service that enables you to explore, analyze, and visualize large volumes of data using the familiar Kusto Query Language (KQL). SDX has been in public preview since 2019.
The next generation of SDX offering is evolving to become Eventhouse, part of Fabric Real-Time Intelligence. Eventhouse offers the same powerful features and capabilities as SDX, but with enhanced scalability, performance, and security.
For customers looking to migrate to Eventhouse from SDX, we are happy to announce a seamless migration capability. Customers can use the migration API to seamlessly move their SDX cluster to Eventhouse with minimal disruption, learn more.
New explorer for database objects in KQL Queryset
Effortlessly browse through the database your current Queryset is connected to, viewing tables, functions, materialized views, and more.
Double-click any object to instantly copy its name to the query editor, making query writing easier than ever.
When opening the data source switcher, you can easily refresh the data source or disconnect it from the KQL Queryset if it’s no longer needed:
In addition, you can now apply actions directly from the explorer. Simply click the ellipsis next to any object to access a menu with options tailored to your selection.
Entity Diagram view in KQL Database (Coming Soon)
A new feature in the KQL Database page enables you to visually explore relationships between database entities—such as tables, functions, materialized views, update policies, external tables, and continuous exports—through an interactive graph visualization. This helps you efficiently manage your database and gain a clearer understanding of how these entities interact.
Sample Scenarios
Proactively manage dependencies
With this visual representation, you can easily manage dependencies between entities such as tables and functions. For instance, when renaming a table or modifying its schema, you can immediately see which functions are using that table as part of their KQL body. This proactive approach helps prevent unintended consequences and ensures smoother updates to your database structure.
Track data sources behind Materialized Views
The new feature also lets you trace the relationships between materialized views and their underlying source tables. This makes it simple to identify original data sources, allowing you to track and troubleshoot data flow more effectively.
Interact with elements and act
You can click on any element in the graph to see its related items, while the rest of the graph is greyed out, making it easier to focus on specific relationships.
For tables and external tables, additional options become available, such as querying the table, creating a Power BI report based on the table, and more.
Track record ingestion
Additionally, you can easily track how many records have been ingested into each table and materialized view. This clear view of data flows helps you stay on top of ingestion size and volume, ensuring your database processes data correctly.
Summary
This visual enhancement simplifies database management and helps you optimize your data structures, making it easier to track dependencies and take actions quickly.
Visualize & Act
Easily share Real-Time Dashboards with others
Microsoft Fabric’s new real-time dashboard permissions feature brings granular control to how users interact with real-time analytics. With the introduction of separate permissions for dashboards and underlying data, administrators now have the flexibility to allow users to view dashboards without giving access to the raw data. This separation is key for organizations that need to ensure data security while providing actionable insights to a broader audience.
Fabric permissions focus on how users interact with the dashboard itself, determining who can view, edit, or share the dashboard. Meanwhile, data source permissions ensure that only authorized users can access the raw data that powers these visualizations. This division improves the overall security posture by ensuring users have only the necessary level of access.
An added benefit of this feature is the option to choose between pass-through and editor’s identity for handling data access. Pass-through allows users to access data using their own credentials, while editor’s identity uses the dashboard editor’s permissions. This ensures that the system is adaptable to different collaboration scenarios and aligns with organizational needs.
Overall, these enhancements to real-time dashboard permissions in Microsoft Fabric promote secure, efficient, and tailored access to data and dashboards. This flexibility empowers teams to collaborate more effectively while maintaining strict control over data access, making it a valuable addition for organizations leveraging real-time intelligence.
Learn more about Real-Time Dashboards permissions (Preview).
Announcing the general availability of Real-Time Dashboards
Real-Time Dashboards are now generally available in Microsoft Fabric, bringing fast, actionable insights to your fingertips. Real-Time Dashboards make it easier than ever for organizations to track and act on key metrics in real time, empowering faster decisions and deeper insights without the need for complex coding.
Unlocking the Power of Live Insights
With Real-Time Dashboards, you can monitor critical data events as they happen. Users can now set auto-refresh rates as low as 10 seconds or even continuous updates for real-time data streams, ensuring you stay up to date on every key metric.
Flexible, Secure Data Sharing
A new feature announced as part of the general availability of Real-Time Dashboards is the separation of permissions for dashboards and underlying data. Administrators can now grant dashboard access without exposing raw data, allowing teams to make data-driven decisions while maintaining strict data security. This separation of permissions is particularly valuable for organizations that need to ensure compliance and protect sensitive information while still sharing key insights broadly.
Effortless, No-Code Data Exploration
Our no-code ‘Explore Data’ functionality empowers users of all technical backgrounds to go beyond the dashboard’s insights. Now, anyone can dive deeper into metrics, explore underlying data, and analyze trends—all without needing to know KQL or write queries. With “Explore Data,” you can filter, drill down, and interact with data using a user-friendly UI, gaining a clear understanding of what’s driving changes or fluctuations.
Start Gaining Real-Time Insights Today
Real-Time Dashboards in Microsoft Fabric are designed for users who want to transform data into action, faster. By delivering continuous updates, enhancing security, and offering intuitive data exploration, Real-Time Dashboards provide everything you need to harness the power of live data.
Try it today and start unlocking the full potential of your data!
Announcing the general availability of Activator
Real-Time Intelligence Activator is now generally available! We would like to extend our gratitude for your invaluable partnership and feedback throughout Data Activator’s development as we help your organizations go from insights to action.
With GA, you’ll be able to:
- Stay on top of your critical metrics by monitoring your business objects. You can track and analyze key business objects such as individual packages, households, refrigerators, and more in real-time, ensuring you have the insight needed to make informed decisions. Whether it’s understanding how individual instances of your business objects impact sales figures, inventory levels, or customer interactions, our monitoring system provides detailed insights, helping you stay proactive and responsive to changes in your business environment at a fine-tuned level of granularity.
- Unlock the full potential of creating business rules on your data with advanced data filtering and monitoring capabilities. This update offers a wide array of options for filtering, summarizing, and scoping your data, allowing you to tailor your analysis to your specific needs. You can set up complex conditions to track when data values change, exceed certain thresholds, or when no new data has arrived within a specified timeframe.
- Ensure your communications are perfect before hitting send by seeing a preview of your Email and Teams messages. This will allow you to see a preview of your message exactly as it will appear to the recipient. Review your content, check formatting, and make any necessary adjustments to ensure clarity. With this feature, you can confidently have Data Activator send messages on your behalf knowing they look just the way you intended.
- Set up rules that trigger automatically with every new event that comes in on your stream of data. Whether you need to send notifications or initiate workflows, this feature ensures that your processes are always up-to-date and responsive.
- We renamed the feature to clarify what it is and what it does, and to simplify the way you discover Data Activator and create actionable rules. If you are used to seeing Reflex, please note that it is now called Activator. The items you create to set up rules and actions are, therefore, activators. You can find the Activator tile in the Fabric Real-Time Intelligence section. We hope that, however small, these changes simplify the process of getting started with Data Activator.
Activator billing is now enabled, and it is based on the following four meters:
Compute resources
- The number of rules running, and the duration of time these rules have been up. Each active rule comes with a uniform ‘uptime’ cost.
- The number of events per second ingested.
- Evaluating the rules and, when the conditions are met, triggering the defined action.
Storage
- The Fabric storage charges are based on the storage consumed for events and activation retention. The default retention policy is 30 days.
Should your capacity run out, we will show you an in-product banner and send an email notification. You can always track and review your Activator capacity usage and, if needed, update it to fit your business needs. Formal billing for Activator usage begins with our GA announcement on Nov 18.
To learn more, see the Activator documentation. As always, we’d love to hear your feedback and look forward to hearing from you. Stay tuned for the detailed RTI billing blog post for more details.
Real-Time Intelligence demos
Data Factory
Table and partition refreshes added to semantic model refresh
One of the most popular features we built in Fabric Data Factory came from customer patterns we observed in ADF and from our community: the semantic model refresh activity. After first releasing this pipeline activity, we heard your requests to improve ELT pipeline processing by including an option to refresh specific tables and partitions in your semantic models. We are super pleased to announce that we’ve now enabled this feature, making the pipeline activity the most effective way to refresh your Fabric semantic models.
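For readers who script refreshes outside pipelines, the Power BI enhanced refresh REST API exposes the same table- and partition-level scoping. Here is a hedged sketch; the workspace and model IDs, table, and partition names are placeholders:

```python
# Table- and partition-scoped refresh of a semantic model via the Power BI
# enhanced refresh REST API; IDs and object names are placeholders, and the
# bearer token acquisition is left abstract.
import requests

url = (
    "https://api.powerbi.com/v1.0/myorg/groups/<workspace-id>"
    "/datasets/<semantic-model-id>/refreshes"
)
body = {
    "type": "full",
    "objects": [
        {"table": "Sales"},                               # refresh a whole table
        {"table": "Orders", "partition": "Orders-2024"},  # or a single partition
    ],
}
resp = requests.post(url, headers={"Authorization": "Bearer <token>"}, json=body)
resp.raise_for_status()
print("Refresh accepted:", resp.status_code)  # 202: refresh runs asynchronously
```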
Learn more about Semantic model refresh activity
Import and export your Fabric Data Factory pipelines
As a Data Factory pipeline developer, you will often want to export your pipeline definition to share it with other developers or to reuse it in other workspaces. We’ve now added the capability to export and import your Data Factory pipelines from your Fabric workspace. This powerful feature will enable even more collaborative capabilities and will be invaluable when you troubleshoot your pipelines with our support teams.
New connectors available
In Data Factory, both data pipelines and Dataflow Gen2 now natively support the Fabric SQL database connector as both source and destination.
Additionally, data pipelines expand their connectivity to include the ServiceNow connector (source) and the MariaDB connector (source).
Also worth mentioning, the Iceberg format is newly introduced, starting with data pipelines: you can now use a data pipeline to write data in Iceberg format via the Azure Data Lake Storage Gen2 connector.
Alongside these new connectors, there are numerous feature enhancements to existing connectors. Highlights include continued improvements to the following connectors:
- The Snowflake connector with added support for the China domain in data pipeline and dataflow gen2.
- The Dataverse connector with enriched authentication type support in data pipelines.
- Both the PostgreSQL connector and the Azure PostgreSQL connector with the capability to customize the query timeout in data pipelines.
Simplify data ingestion with Copy Job – CI/CD upsert & overwrite
Copy Job simplifies data ingestion, providing a seamless experience from any source to any destination. Whether you need batch or incremental copying, Copy Job provides the flexibility to meet your data needs while keeping things simple and intuitive.
Since the Public Preview launch at FabCon Europe in late September, we’ve been rapidly enhancing Copy Job with powerful new features. Here’s our latest update:
- Copy Job now supports CI/CD capabilities in Fabric, including Git integration for source control and ALM Deployment Pipelines. Check out the details in CI/CD for copy job in Data Factory – Microsoft Fabric | Microsoft Learn.
- Copy Job now also offers expanded writing options: Upsert functionality for SQL DB and SQL Server, and an Overwrite option for Fabric Lakehouse—bringing added flexibility and control to data movement. Check out the details in What is Copy job (preview) in Data Factory.
New capabilities in Copilot for Data Factory to efficiently build and maintain your Data pipelines
The new Data pipeline capabilities in Copilot for Data Factory are now in preview. They serve as an AI assistant that helps users effortlessly create data integration solutions with data pipelines, easily understand complex data pipelines, and efficiently troubleshoot data pipeline error messages.
Create a new data pipeline and click the Copilot button on the Home tab to get started. Three starter options and clear guidance make it easy to begin building the pipeline.
Copilot for Data Factory understands your intent and business requests and transforms them into data integration solutions. You can either set up your data pipeline step by step with pre-filled prompts or create it in one go with a comprehensive prompt.
Copilot for Data Factory also improves the experience of troubleshooting Data pipeline error messages. It provides clear explanations and actionable recommendations so you can identify and resolve Data pipeline errors easily.
Copilot for Data Factory can quickly summarize your pipeline for better understanding, which is extremely useful in collaborative scenarios involving complex data pipelines. You can get a summary either by clicking the ‘Summarize this pipeline’ option or by sending a ‘Summarize this pipeline’ prompt; you will then get a clear explanation of a complex pipeline, even one developed by other team members.
OneLake datahub is now the OneLake catalog in Modern Get Data experience
We are pleased to announce that the OneLake data hub has been rebranded as the OneLake catalog in the Modern Get Data experience. When you use Get data inside Pipeline, Copy job, Mirroring, and Dataflow Gen2, you will find that the OneLake data hub has been renamed to OneLake catalog.
The OneLake catalog represents the next evolution of the OneLake data hub, offering a cohesive platform. In the current Modern Get Data experience, the OneLake catalog keeps the same functionality as the previous OneLake data hub. In the future, we will expand its functionality to allow data engineers, scientists, analysts, and decision-makers to seamlessly explore, organize, and oversee their data in one comprehensive and user-friendly location.
Dataflows now support CI/CD (Preview)
With Dataflow Gen2, you can now leverage the benefits of Git integration and CI/CD support. By enabling Git integration within your workspace, you can store your dataflow definitions in Git and branch out to other workspaces, collaborating on the same dataflow. This improves your end-to-end experience, especially when working across dev, test, and prod workspaces.
We look forward to learning more from your feedback to improve the experience.