Microsoft Fabric July 2023 Update
Welcome to the July 2023 update. We have features in Core, Synapse, Data Factory, Data Activator, Community, and Power BI.
- Help Pane
- Monitoring Hub improvements
- Data Warehouse
- Data Engineering
- Data Science
- Real-time Analytics
- Dataflows Gen2
- Power Query Editor
- Gateway and Connections
- Data pipelines
- Sample data
- Adding multiple streams or properties
- Designing triggers and alerts
- Trigger lifecycle
- Testing actions
- Teams app
- Implementing a Lakehouse with Microsoft Fabric course
- Filter by product for User Groups & Events
- Data Activator Community now live
The Help pane is feature-aware: it displays articles about the actions and features available on the current Fabric screen. It is also a search pane that quickly finds answers to questions in the Fabric documentation and community.
Open the Help pane
From the upper-right corner of the Fabric screen, select the ? icon.
Help pane is feature-aware
The feature-aware state is the default view of the help pane when you open it without entering any search terms. It shows a list of recommended topics, resources that are relevant to your current context and location in Fabric, and a list of links for other resources. It has three sections:
- Feature-aware documents: This section groups the documents by the features that are available on the current screen. As you explore Fabric, the feature-aware documents update based on what you’ve selected and where you are in Fabric. This is a great way to learn how to use Fabric. Give yourself a guided tour by making selections in Fabric and reading the feature-aware documents.
- Forum topics: This section shows topics from the Community forums that are related to the features on the current screen. Select a topic to open it in a new separate browser tab.
- Other resources: This section has links for feedback and Support.
Help pane is a search engine
Enter a keyword to find relevant information and resources from Microsoft articles and Community forum topics. Use the dropdown to filter the results.
With the new “column options” feature in the Monitoring Hub, you can now customize the columns in the job list to show the information you are interested in. Your column selections are saved automatically, and both the columns and the filters are preserved when you navigate elsewhere and return to the Monitoring Hub.
On July 5th, 2023, Microsoft began a staged rollout of an update to the Fabric Public preview setting. After the update, Fabric Public preview will be ON by Default unless customers explicitly opt out. With Fabric Public preview set to ON, users in your organization will be able to create Fabric items in workspaces attached to a Power BI Premium or Fabric capacity.
When the Microsoft Fabric Preview was launched on May 23, 2023, Fabric was disabled by default, using a new tenant setting named “Users can create Fabric items (public preview)”. Starting July 5th, 2023, Microsoft started activating Fabric preview by default for tenants who have not explicitly opted out. Any capacity level overrides of this setting will remain unaffected.
If you are comfortable with your organization using the Fabric preview features, no further action is required. However, if you want to restrict usage of the Fabric preview in your P-SKU and F-SKU capacities, you have two options:
- Use the security group configuration of this setting at the tenant and capacity levels to limit Fabric to a smaller set of users.
- Turn off the setting at both the tenant and capacity levels to prevent all users from accessing the Fabric preview features.
If you make these updates before the automatic update rolls out to your tenant, this tenant setting update will not impact your organization. You may also make these changes at any time after the update.
For more information about controlling Fabric use, visit Microsoft Learn.
With the latest OneLake file explorer update, it’s simple to choose and switch between different Microsoft Azure Active Directory (AAD) accounts. This was a highly requested feature from users who work with multiple AAD tenants, across organizations or even within a single organization.
To switch accounts, right-click the OneLake icon in the Windows notification area, select “Account” and then “Sign Out”. Signing out will exit OneLake file explorer and pause the sync. To sign in with another account, start OneLake file explorer again by searching for “OneLake” using Windows search (Windows + S) and selecting the OneLake application.
Starting July 2023, you can share Fabric items like data warehouses, lakehouses, Spark job definitions, Kusto databases, and KQL querysets with users or groups. This enables collaboration with users who are not in workspace roles. Admins, Members, or users who have been granted the Reshare permission on a specific item can share it with additional users outside of the workspace. You can share an item by clicking Share in the item list or within the item.
When an item is shared, the recipients can discover the item in Data Hub and may also receive a link to the item via email (if that option is selected while sharing). While sharing, a user can choose the level of access the recipient will have. For example, when sharing a lakehouse, you can choose to grant the recipient the Read All SQL endpoint data permission, in addition to the Read permission. This enables the recipient to read the default dataset associated with the lakehouse and access lakehouse data through the SQL endpoint. Here’s an example of sharing a lakehouse:
Depending on the item being shared, you may find a different set of permissions that you can grant to recipients when you share. Read permission is always granted during sharing, so the recipient can always discover the shared item in Data Hub and open it. Here’s an example of sharing a warehouse:
Sharing a Kusto database
You can also grant or revoke permissions on specific items for users by selecting Manage permissions from the context menu. For example, you can grant the Reshare permission on a lakehouse to a user in the workspace Contributor role. This allows the contributor to share the lakehouse with users outside of the workspace. Note that you cannot modify the permissions inherited from a workspace role.
1. Select Manage permissions from the context menu.
2. On the Direct access tab, select Add user and enter the names of the users or groups that you want to provide access to.
3. Select Grant.
To learn more about sharing Fabric items, read: Share items in Fabric. For more information on sharing of specific items, please read Share your Warehouse and manage permissions and How lakehouse sharing works?
SQL statistics are now automatically updated by the query engine! In the Fabric Data Warehouse and Lakehouse SQL Endpoint, statistics are a critical tool for helping your query run quickly and efficiently. When a query is executed, the engine will try to collect existing statistics for certain columns in the query and use that information to assist in choosing an optimal execution plan.
Today, column statistics are automatically generated when the query engine requires statistics on columns that don’t yet have any. When the data in your table’s columns changes significantly, it’s important that those statistics objects are also updated to accurately reflect the new data. Previously, this meant users had to update statistics manually on a regular basis. Now, with automatic statistics updates, any statistics required by a user query are automatically assessed and refreshed if determined to be outdated, allowing your query to use the most precise plan for execution and ensuring your workload is positioned for the best performance possible – all with zero user intervention.
For more information on automatic statistics updates, see Statistics – Microsoft Fabric | Microsoft Learn.
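Although statistics now maintain themselves, it can help to see what the engine does on your behalf. The sketch below composes the T-SQL involved; the table, column, and statistics names are placeholders (not objects from this post), and in practice these statements would run against the warehouse’s SQL endpoint, for example via pyodbc.

```python
# The engine now effectively issues statements like these for you.
# All object names below are placeholders.
table, column = "dbo.Sales", "OrderDate"

# Before automatic updates: users created and refreshed statistics by hand.
create_stats = f"CREATE STATISTICS stats_{column} ON {table} ({column});"
manual_refresh = f"UPDATE STATISTICS {table};"

# Now: a query that filters on the column triggers automatic creation of
# the needed statistics object, and a refresh if it has gone stale.
query = f"SELECT COUNT(*) FROM {table} WHERE {column} >= '2023-01-01';"

print(create_stats)
print(manual_refresh)
```

The manual statements are still valid if you want to inspect or force a refresh yourself; they are simply no longer part of routine maintenance.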
You can now update and delete data in your target table from an existing source table using the FROM argument in your UPDATE and DELETE statements in Fabric SQL! These T-SQL commands allow you to perform MERGE-like operations. For more information, see UPDATE (Transact-SQL) – SQL Server | Microsoft Learn and DELETE (Transact-SQL) – SQL Server | Microsoft Learn.
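As a sketch of what the FROM clause enables, a staging table can drive an update or a delete in a single statement. All table and column names below are illustrative, not from this post; the statements would run through the warehouse’s SQL endpoint.

```python
# Illustrative T-SQL using the new FROM argument. Names are placeholders.
update_from = """
UPDATE t
SET    t.Price = s.Price
FROM   dbo.Products AS t
JOIN   dbo.StagedPrices AS s
  ON   s.ProductId = t.ProductId;
"""

delete_from = """
DELETE t
FROM   dbo.Products AS t
JOIN   dbo.Discontinued AS d
  ON   d.ProductId = t.ProductId;
"""

print(update_from)
print(delete_from)
```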
The Fabric SQL engine creates a query plan that comprises execution steps and data movement operations. Data movement operations ensure that the data required for a query step is ready at the location where it executes. With this new optimization in Fabric SQL, the engine makes data movement in the intermediate steps of query execution more balanced, improving overall query performance. No user intervention or code change is required to use this optimization; it works out of the box!
You can now create a new Dataflow Gen2 directly from your warehouse to ingest data. Simply click Get Data –> new Dataflow Gen2. The newly created Dataflow Gen2 artifact will already prepopulate the destination to the warehouse. For more information about creating, ingesting, and transforming data with Dataflows Gen2, see Create your first Microsoft Fabric dataflow – Microsoft Fabric | Microsoft Learn.
We are pleased to announce that zero copy Table Clones are now available in Public Preview! Zero copy clones are a near-instantaneous metadata only operation that enables you to easily create a copy of your Warehouse table(s) with no additional cost and minimal overhead. Table clones contain a reference to the source table that the clone was created from. Underlying parquet files are not duplicated when a clone is created – under the hood, a fork is created, and the clone behaves as an independent table which can be modified as needed. Any changes made to the source table after it was cloned are not reflected in the clone; similarly, any changes made to the cloned table are not reflected in the source. They are independent of one another.
As of today, customers can create a Table Clone within the same schema or to a different schema within the same Warehouse.
Customers looking to create a zero-copy table clone would typically do so for a variety of test, development, and production use cases, as well as for experimentation. For example, a customer may want to stage a production release in a table clone prior to going live with the changes. Once they are happy with the changes, they can be merged into the production table.
For more information on Table Clones, see Clone table in Microsoft Fabric
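For reference, a clone is created with a single metadata-only statement of the documented CREATE TABLE ... AS CLONE OF form. The table names below are placeholders.

```python
# One metadata-only statement creates the clone; no parquet files are
# copied. Source and clone names are placeholders.
source = "dbo.Sales"
clone = "dbo.Sales_release_candidate"
clone_sql = f"CREATE TABLE {clone} AS CLONE OF {source};"

print(clone_sql)
```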
We are excited to announce that Data warehouse sharing is now available in Public Preview! Data sharing is essential to fostering a data-driven culture within an organization. Sharing a Warehouse allows you to easily provide read access to enable downstream users to consume this data, without making copies of data.
With this new capability, an Admin or Member within a Fabric workspace can share a Warehouse with another recipient (AAD user or AAD groups) within your organization. The following are the permissions that are provided:
- [Default] Connect permissions to the warehouse – This option is provided by default, and it provides permissions to connect to the warehouse (the equivalent of Connect permissions in SQL) but not to query any table or view. You can grant granular object access using GRANT in T-SQL.
- [Default] Build reports on the default dataset – This option is provided by default and provides “build” permissions on the default dataset that is connected to your Warehouse. This option can be useful for your Power BI developers who want to create reports on this default dataset.
- [Optional] Read all data using SQL – This option provides “readData” (equivalent of db_datareader) permissions which allows for read access to all tables and views within the Data warehouse. This option can be useful for users who want to read using SQL.
- [Optional] Read all data using Apache Spark – This option provides “readAll” permissions, which allow read access to the Warehouse’s underlying files in OneLake that you can read through Spark. This option can be useful for your data scientists who want to read using Spark.
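When a warehouse is shared with only the default Connect permission, finer-grained access is handed out with ordinary T-SQL GRANT statements, as the first option above notes. A sketch, where the principal and object names are placeholders:

```python
# Granular grants layered on top of the default Connect permission.
# All principal and object names below are placeholders.
grants = [
    "GRANT SELECT ON OBJECT::dbo.Sales TO [analyst@contoso.com];",
    "GRANT SELECT ON SCHEMA::dbo TO [DataAnalysts];",
]
for stmt in grants:
    print(stmt)
```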
For more information, see the [detailed sharing blog].
We are excited to announce the preview of the dbt adapter for Synapse Data Warehouse in Microsoft Fabric. This data platform-specific adapter plugin allows you to connect to and transform data in Synapse Data Warehouse in Microsoft Fabric.
For more information, see the Introducing the dbt adapter for Synapse Data Warehouse in Microsoft Fabric, Microsoft Fabric Synapse Data Warehouse dbt adapter setup and Microsoft Fabric Synapse Data Warehouse dbt adapter configuration.
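As with any dbt adapter, connection details live in profiles.yml. The fragment below is only a rough sketch of its shape; the exact key names and authentication options should be taken from the adapter setup guide linked above, and the server and database values are placeholders.

```yaml
# Hypothetical profiles.yml entry for the Fabric warehouse adapter.
my_fabric_project:
  target: dev
  outputs:
    dev:
      type: fabric
      driver: "ODBC Driver 18 for SQL Server"  # assumes the ODBC driver is installed
      server: "<your-warehouse-sql-endpoint>"  # placeholder
      database: "<your-warehouse-name>"        # placeholder
      schema: dbo
      authentication: CLI                      # one of the Azure AD auth options
```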
Previously, the “Load to Table” feature allowed users to load a single file into a new table. It was very well received by data engineers, both for the productivity of enabling table loading on files with simple right-click actions and for the no-code experience, which lowers the entry bar for all personas.
This new release brings improvements to this experience with several new functionalities:
- Folder-level load: Users can now load all files under a folder and its subfolders at once by selecting “Load to Delta Table” after clicking on a folder. This feature automatically traverses all files and loads them into a Delta table.
- Load to existing table: Users can now choose to load their files and folders into a new or an existing table of their choice. If they load to an existing table, they have the option to either append or overwrite the data in the table.
- Source file options: Users can specify whether their source file includes column names in the first row of data, and which separator it uses.
For more detailed information on this feature, visit the documentation here.
We are announcing a capability for users with Admin and Member roles to share an individual lakehouse with users without granting them access to the workspace. This gives access to a specific lakehouse without exposing other items in the workspace. Users get access to shared items through Data Hub or the link included in the sharing notification email. With access to a lakehouse, users can reach its SQL endpoint and default dataset, which enables features like querying data using T-SQL and building Power BI reports on top of the lakehouse data. Permission management also allows users in the Viewer role to get additional permissions to access lakehouse data using Spark. These features push data democratization in Fabric even further, enabling more collaborative work.
We are excited to announce the “Notebook resources” on Fabric notebook. This feature offers notebook users a writeable file system space where you can store small-sized files, such as code modules, datasets, and images. You can access them with code in the notebook as if you were working with your local file system. The Notebook Resource explorer provides a Unix-like file system to help you manage your folders and files. You can use common operations such as create/delete, upload/download, rename, duplicate, and search through the UI, and rich built-in snippets are provided through “Drag & Drop”.
For more details, see How to use notebooks – Microsoft Fabric | Microsoft Learn.
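Code in the notebook reaches these resources through a relative “builtin” folder, so standard file APIs apply. A minimal sketch, where the file name is a placeholder for something you uploaded via the resource explorer:

```python
import json
import os

# Notebook resources live under a notebook-relative "builtin" folder.
# "settings.json" is a placeholder for a file you uploaded yourself.
resource_path = os.path.join("builtin", "settings.json")

if os.path.exists(resource_path):  # True only inside the notebook session
    with open(resource_path) as f:
        settings = json.load(f)
else:
    settings = {}                  # fall back when running elsewhere
```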
Fabric notebook now supports displaying the running cell output after reconnecting to the original session. This feature allows you to easily recover your ongoing work after accidentally closing the browser or leaving the live session. You don’t need to take any additional action to enable it, as this feature is available by default on the Notebook.
Starting now, sharing a single notebook with your colleagues is easier than ever before, without having to grant workspace permissions. With the Notebook sharing feature, you can collaborate with team members and share your work conveniently. Additionally, we now support managing permissions for each Notebook instance. You can easily check and update the permissions of notebooks after they have been shared, ensuring proper access to the notebook.
The notebook status bar has recently been redesigned. This upgrade includes styling refinements as well as functional enhancements. You can now easily see the “Save options” status, navigate to a failed cell, and find more useful information in the floating info card – the diagnostic information is especially helpful when you encounter an issue and need service support! In the next release we’ll add another series of quick-access entries on the status bar, so stay tuned for the upcoming new features!
In our latest release, we have integrated the Azure OpenAI service with the distributed machine learning library SynapseML, which makes it easy to use the Spark distributed computing framework to process millions of prompts with the OpenAI service on Microsoft Fabric.
The new OpenAI APIs introduced into SynapseML include “OpenAICompletion”, “OpenAIEmbeddings”, “OpenAIChatCompletion”, and “OpenAIPrompt”. The following should help you better understand these new OpenAI APIs.
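As a rough sketch of how a batch of prompts is laid out for the OpenAICompletion transformer: each row carries one prompt, and SynapseML fans the batch out across the cluster. The transformer call itself is shown in comments, since it needs a Spark session and a deployed Azure OpenAI model; the import path, deployment name, and column names are assumptions based on the SynapseML docs of the time, not details from this post.

```python
# Each row carries one prompt; SynapseML processes the batch in parallel,
# calling the Azure OpenAI service for every row.
prompt_rows = [
    ("Summarize: Microsoft Fabric unifies data and analytics workloads.",),
    ("Translate to French: good morning",),
]

# Inside a Fabric notebook this list becomes a Spark DataFrame fed to the
# transformer, roughly (deployment name is a placeholder):
#
#   from synapse.ml.cognitive.openai import OpenAICompletion
#   df = spark.createDataFrame(prompt_rows, ["prompt"])
#   completion = (OpenAICompletion()
#                 .setDeploymentName("my-deployment")  # placeholder
#                 .setPromptCol("prompt")
#                 .setOutputCol("completions"))
#   results = completion.transform(df)
```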
Leveraging the new OpenAI APIs in SynapseML and Microsoft Fabric, we have also demonstrated how to perform Q&A on PDF Documents. You can read more about it here. Please note that native access to the Azure OpenAI service will be coming to Microsoft Fabric later this year.
Azure Event Hubs is a big data streaming platform and event ingestion service that can process and direct millions of events per second. Now you can easily stream your Azure Event Hubs data directly into your Fabric KQL database.
There are two main steps required to stream Event Hubs data into a KQL database:
- Create a Microsoft Fabric platform-based data connection to a specific event hub instance. This data connection can be used across all Microsoft Fabric workspaces and is managed centrally.
- Connect this Microsoft Fabric-based data connection to a KQL database. This process creates a database-specific Event Hubs data connection. The connection streams data into the table you specified during setup, and the data will then be available to query using a KQL queryset.
A prerequisite for creating a cloud connection in Microsoft Fabric is to set a shared access policy (SAS) on the event hub and collect information to be used later when setting up the cloud connection. This step is performed in the Azure portal: go to your Event Hubs instance and, under Settings, select Shared access policies. Add a new SAS policy or select an existing one:
To create the cloud connection between Azure Event Hubs and Fabric, go to the menu bar of your Fabric workspace and open Manage connections and gateways. In the New connection form, enter the details taken from the SAS policy you defined.
The final step is to connect a table in your KQL database to the event hub cloud connection defined above. This can be done by selecting the Event Hubs option in the Get Data menu bar:
For more details, go to the docs: Get data from Azure Event Hubs.
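Once events are flowing, a quick sanity check from a KQL queryset might look like the query composed below. The table name is a placeholder for whatever you specified during setup.

```python
# Compose a KQL sanity-check query: count rows ingested in the last
# 15 minutes. "SensorReadings" is a placeholder table name.
table = "SensorReadings"
kql = f"{table}\n| where ingestion_time() > ago(15m)\n| count"
print(kql)
```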
In the manufacturing and energy verticals, MATLAB is still heavily used. MATLAB is a programming and numeric computing platform used to analyze data, develop algorithms, and create models. We are happy to announce that Fabric now supports querying KQL database data directly from MATLAB. Engineers who are familiar with MATLAB no longer need to learn KQL to query the high-performance Real-Time Analytics KQL database; they can reuse their MATLAB skills to query the data.
This functionality is supported by a very lightweight MATLAB Connector that securely connects MATLAB to a Fabric KQL Database.
For more details, go to the docs page: Query data using MATLAB.
- Auto-fix column names during mapping of columns dialog
- Auto-fix data types during mapping of columns dialog
Rename a dataflow inside of the Power Query Editor
Similar to other artifacts inside Microsoft Fabric, you can now change the name of a Dataflow Gen2 from inside the Power Query Editor.
The Google Analytics connector has been updated to support Google Analytics Data API (Google Analytics 4). To use this new functionality, use “Implementation 2.0” when connecting. Existing connections will not be affected.
The Oracle connector has been updated to enable Azure AD-based Single Sign-On functionality through the on-premises data gateway. This will require the July release of the on-premises data gateway.
The Azure Databricks and Databricks connectors have been updated. Please find notes from the Databricks team below.
- Add a new DSRHandler to databricks-multicloud
- Fix UC_NOT_ENABLED and Catalog ‘spark’ not found error in legacy code path using Databricks.Contents
The Denodo connector has been updated. Please find notes from the Denodo team below.
- This new version adds graphical support for the specification of native SQL queries at data source creation time
The EQuIS connector has been updated. Please find notes from the EQuIS team below.
- Remove “Beta” attribute
- Retrieve report content as .csv to remove the row limitation of .xlsx files
- Optimize handling of facility groups in navigation tree
- Show report and/or location folders in navigation tree even if one or the other is empty
The Snowflake connector has been updated to include various performance improvements, such as usage of SQLBindCol. Users should experience better performance when running queries.
The Anaplan connector has been updated. Please find notes from the Anaplan team below.
- This version of the Power BI connector for Anaplan includes backend changes for compatibility with ongoing Anaplan infrastructure updates. There is no change to user-facing connector features.
Added support for Single Sign-On (SSO) via Azure AD in cloud connections. Currently this feature is only applied for DirectQuery mode in datasets. However, we plan to progressively extend SSO capabilities to other Fabric workloads in the future.
We have enhanced our security measures to allow users to disable the use of cloud connections with gateway connections. This precaution prevents cloud connection credentials from being decrypted and logged within on-premises systems.
We’re excited to announce that the Teams activity is now available to use in your Data Factory data pipelines. In your pipeline, you can use the Teams activity to customize a message to send to a Teams channel or a Teams group chat. For example, you can use the Teams activity to send a notification if a pipeline has failed, helping you to better monitor your data integration pipelines.
We’re excited to share that you can now parameterize Fabric artifacts in your data pipeline. This allows you to use expressions, functions, parameters, and variables to dynamically refer to your Lakehouse or Data Warehouse, giving you more flexibility when designing your data pipelines.
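For example, instead of hard-coding a destination, an activity’s dynamic content can reference a parameter or compose a name with the pipeline expression language. The parameter names below are hypothetical:

```
@pipeline().parameters.targetLakehouse
@concat('sales_', pipeline().parameters.region)
```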
We’ve made performance improvements when copying parquet files to your Lakehouse!
“Save as” is now supported for data pipelines in your Data Factory workspace. From your workspace, you can use the selection menu to save a new copy of your pipeline. This allows you to build upon or edit existing pipelines without having to completely rebuild a data pipeline.
Data pipelines now support column mapping when a Lakehouse is selected as a data destination. In the Mapping tab, you can now add, edit, or delete column mappings from your data source to your data destination.
The simulator that generates sample data now includes multiple event streams that you can use to try out building objects and alerts. If you want to try the simulator, check out our tutorial: End-to-end tutorial using simulated data.
The Data screen now gives you options to create objects directly from columns in your data streams, along with shortcuts to quickly create properties without having to create them manually and select the column each time.
We’ve also planned a new UX to let you create multiple properties or objects across multiple streams in one go – stay tuned for that in the coming months!
We’ve combined the first two steps of triggers (referencing a property and selecting the value from an event) into one step. This means the first thing you’ll see is a chart that plots the values you select, helping you understand the data more quickly. It’s currently called “Property field reference”, a name that will be updated to be more user-friendly soon!
We’ve added many more functions to the library you can use to build triggers:
- Detect functions
  - Changes, Changes from/to: sends an event each time the field changes from/to a value, or changes at all.
  - Is less than/greater than, Is false/true, Is equal/not equal: sends an event whenever the field meets that condition.
  - Becomes less than/greater than, Becomes true/false, Exits/Enters range: sends an event the first time the field meets that condition (if subsequent events also meet that condition, no further events are emitted).
- Summarize functions
  - Maximum, minimum, average, count over time: calculates an aggregation of the values of all events in a time window.
- Filter functions
  - Filter: keeps individual events where the value meets a specified condition; other events are dropped. This is useful for filtering out error values, such as a sensor that returns -99 for errors.
  - Property filter: filters out instances that match the criteria. This is useful if you only want a trigger to fire for certain instances (this will become more usable once the trigger changes described below are done).
‘Detect’ functions such as ‘Crosses above’ or ‘Changes to’ also have options to only trigger when the criterion is met a certain number of times, for example 3 times in an hour. This can help reduce noise in your triggers.
Coming soon, we have a major change to the way triggers are defined. Early preview feedback showed that the detailed step-by-step setup for triggers was too complicated, so we’ve geared them around selecting three things: what you want to monitor, what condition you want to detect, and what action to take. This should make it easier to configure your triggers and make it clearer what’s needed to get alerts on your data.
Based on feedback that trigger management should be simpler, we removed the Draft concept. You no longer need to publish a trigger before you can start it; triggers are simply Started or Stopped! Any time you make changes, select Update to make the running trigger use the new values.
The ‘Test action’ button confused early users, so we’ve updated the messaging to clarify that it sends a test to the current user. It’s also only enabled if there is some data that met the criteria, which is used as sample data in the test message.
Data Activator can now send alerts to you from our new Teams app. To install it, search the Teams app store! If you need to deliver messages broadly, a Teams administrator can set it up for everyone in your organization rather than each user installing it themselves.
This course is designed to build your foundational skills in data engineering on Microsoft Fabric, focusing on the Lakehouse concept. It explores the powerful capabilities of Apache Spark for distributed data processing, along with the essential techniques for efficient data management, versioning, and reliability through working with Delta Lake tables. It also covers data ingestion and orchestration using Dataflows Gen2 and Data Factory pipelines. A combination of lectures and hands-on exercises will prepare you to work with lakehouses in Microsoft Fabric.
Check out the course: Course DP-601T00: Implementing a Lakehouse with Microsoft Fabric
You can now filter by product for both Fabric User Groups and Events in the Fabric Community site. With the broad range of experiences within Fabric, product filtering allows you to easily find User Groups or events that match your interests!
You can also quickly refilter your results by removing products either in the Product drop down or with the tiles below the search bar.
Data Activator now has its own community forum, alongside the other product forums across Fabric: https://community.fabric.microsoft.com/t5/Data-Activator-preview-Community/ct-p/dataactivator (or use the short URL https://aka.ms/dataActivatorCommunity!). If you’ve got any questions or feedback about Data Activator you can post there and the product team will be in touch.
We are excited to announce the launch of one of our most highly acclaimed features! Report creators can now create smoother line and area charts, providing a more polished look to their visualizations. To access this setting, go to Lines > Shape > Line Type.
We’ve recently added leader lines for both line and area charts. This new feature creates a visual connection between each data point and its corresponding label. To access it, navigate to Data labels > Options > Leader lines.
These features are just the beginning of the many improvements we have in store for graphs, charts, plots, and markers in the coming months. Get ready for even more exciting updates!
The new on-object interaction feature was released to preview back in March. This month we bring more improvements and bug fixes.
We’ve now added a new “+” button on the pane switcher to quickly add new panes directly from the pane switcher without having to go to the View ribbon. This menu also gives a brief description of what panes are available and what their functions are. Even better, the panes added to the switcher are saved across reports. Configure once and you’re done!
You can also access the two preference settings released last month (“always show the pane switcher” and re-attaching the build menu as a pane) by using the gear icon.
In addition to the right click option “open in new pane”, it is now even easier to open multiple panes from the pane switcher by simply holding down the CTRL key and clicking the pane you wish to open.
- Overlap of the on-object buttons on the formula bar has finally been resolved! We appreciate your patience as this bug was a bit trickier to fix the right way.
- The visual tooltip that automatically appeared when opening the build menu, blocking the on-object formatting button, is now fixed.
- Selected visual type is now reflected in the ribbon visual gallery accordingly.
- Mini-toolbar’s fill color icon now reflects conditional formatting gradient as well.
Thanks for continuing to try out the new preview and provide feedback. We’re working hard to react to your suggestions and add the necessary changes to make on-object work for you. Please continue to provide your comments directly in this blog post or in our community forum via the “Share feedback” button next to the preview switch.
The new data model editing in the Service feature was released to preview in April. We’ve been busy reacting to your feedback and enhancing the experience. Below are the improvements we are adding this month:
We are adding relationship validation in the Service, making it easier to create and edit relationships in the web! Like Power BI Desktop, as you define the properties of your relationship, the system will automatically validate it and offer appropriate choices for cardinality and cross filter selections.
Please continue to submit your feedback directly in the comments of this blog post or in our feedback forum.
We are happy to announce the revamp of our dataset details page! Now, when you click on a dataset item in the OneLake data hub and workspace view, you will be directed to the redesigned page that not only enhances the look and feel but also introduces new capabilities for an improved user experience.
Here’s what you can expect to find on the dataset details page:
- Actions: You will find various actions that can be performed on the dataset, such as creating a report and refreshing the dataset. With this release, we have added the option to view the refresh history under the refresh menu.
- Dataset Metadata: Gain insights into the dataset through its description and last refresh time.
- Related Items: Explore existing related items associated with the dataset.
- Dataset Schema: Get a comprehensive view of the dataset’s tables and columns. Clicking on a table provides a table preview, with export capabilities available using paginated reports behind the scenes.
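The refresh history surfaced in the UI is also available programmatically through the Power BI REST API (“Datasets - Get Refresh History”). Below is a minimal sketch, assuming Node 18+ (for global `fetch`) and a valid Azure AD access token; the placeholder constants are yours to supply:

```typescript
// Sketch: fetching a dataset's refresh history via the Power BI REST API.
// DATASET_ID and ACCESS_TOKEN are placeholders; error handling is minimal.

const DATASET_ID = "<your-dataset-id>";
const ACCESS_TOKEN = "<azure-ad-access-token>";

// Builds the "Get Refresh History" endpoint URL; $top limits the entries returned.
function refreshHistoryUrl(datasetId: string, top = 5): string {
  return `https://api.powerbi.com/v1.0/myorg/datasets/${datasetId}/refreshes?$top=${top}`;
}

async function getRefreshHistory(): Promise<void> {
  const res = await fetch(refreshHistoryUrl(DATASET_ID), {
    headers: { Authorization: `Bearer ${ACCESS_TOKEN}` },
  });
  const body = await res.json();
  // Each entry carries the refresh status and start/end timestamps.
  for (const r of body.value ?? []) {
    console.log(r.status, r.startTime, r.endTime);
  }
}
```

The same family of endpoints backs the refresh actions in the UI, so scripted monitoring and the new refresh-history view show the same data.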
Additionally, we have made significant improvements to the related items list. It now showcases all the downstream and upstream dependencies for the dataset. This enhancement allows you to easily identify the sources of the dataset, composite model relations, reports, and dashboards associated with it.
We believe that these updates will greatly enhance your experience with the dataset details page, providing you with a more intuitive and comprehensive understanding of your data. We look forward to your feedback as you explore these new features!
In the next Power BI Mobile app release, we are adding a long-awaited feature that will help dataset owners and report creators manage their datasets directly from their phones.
That means you will be able to see datasets on your mobile device: go to a workspace, select the “dataset” pill at the top, and you’ll get the list of datasets you have access to in that workspace.
When you tap a dataset, you will see the dataset metadata pane, which includes the name, owner, sensitivity label, and latest refresh status. From this pane you can also trigger a dataset refresh – all directly from your mobile app!
Dataset owners will also get push notifications when a scheduled refresh fails. They can view the failure details and retry the refresh while on the go.
We have recently published an article that focuses on techniques to improve the performance of custom visuals. In this article, we discuss the performance improvements we have made in visual rendering and load times.
We identified and addressed certain bottlenecks in the code, and these improvements are available for any visual that has been updated to API version 4.2 and onwards. Along with these fixes, we also provide code practices and techniques that can greatly enhance the performance of rendering custom visuals.
We encourage you to check out the article here. We believe that these techniques can make a significant impact on the performance of your custom visuals.
Drill Down Map PRO by ZoomCharts is a custom map visual for Power BI that lets you show your data on an interactive map and give it location-based context.
- Built-in shape layers – use preset shapes for easy filtering of countries.
- Custom shape layer support – provide custom shapes through KML and GeoJSON files.
- Lasso tool – draw and save your own filter shapes on top of the map.
- Node clustering capabilities – clusters can be turned into donut or pie charts for category display.
- Map base layer lets you choose from 4 options – Azure maps, Custom (OpenStreetMaps, Google, CartoDB etc.), Image (e.g., floor plans), None (visualize shapes without a background).
- Aura, image, and custom label support.
Popular use cases:
- Production – monitoring production data by location.
- Sales and marketing – mapping sales results by region.
- Public sector – visualizing environmental and sociodemographic data.
ZoomCharts Drill Down PRO Visuals are known for their interactive drilldowns, smooth animations, and rich customization options. All Drill Down PRO Visuals support: touch input devices, interactions, custom and native tooltips, filtering, bookmarks, and context menu.
The Multi Target KPI card works with a single query and includes three additional indicators, multiple categories, a pixel-perfect alignment setting, and built-in conditional formatting.
You can change the settings of layout type and color conditional formatting for additional measures in our visual, and it is fairly simple for non-designers to use!
Just select the desired measure and category, if necessary. Add up to three additional indicators to provide the context you need for your metric.
It will help you improve reporting performance and save you time when designing and developing supplemental measures.
Start a new level of business dashboarding!
WebView2 is now generally available. Thanks to everyone who reported issues during the preview phase! Your input helped us raise reliability above where it was before we introduced WebView2. Please continue to report any issues using the “There was a problem with WebView2” dialog.