Microsoft Fabric Updates Blog

Adaptive time series visualization at scale with Microsoft Fabric

Coauthor: Slava Trofimov

How much value could you generate for your enterprise if every user could unlock actionable insights from high-volume operational time series data through real-time, interactive exploration?

Industrial operations generate staggering amounts of time series data. A single plant can easily produce tens of billions of sensor readings per month, and organizations across manufacturing, energy, utilities, and large-scale IoT deployments face the same challenge: how do you let users explore this data interactively in real time?

Traditional approaches struggle with high data volumes, latency, performance bottlenecks, rigid data models, and complex tools that require deep technical skills.

That’s where an innovative Microsoft Fabric-native design pattern comes in. By combining KQL databases with Power BI, you can deliver a fast, flexible, scalable time series visualization experience that works for everyone—from plant engineers to data scientists.

A user explores trends in time series data in a Power BI report by interactively brushing over the time series slicer, instantly updating report visuals to reveal detailed patterns and anomalies for the selected tags over the selected period.


Why this design pattern works

This design pattern provides intuitive, interactive Fabric-native experiences for any user:

  • Intelligent time binning: Handle billions of data points by automatically grouping them into optimal intervals.
  • Time brushing: Zoom in on any period with drag-and-select interactions.
  • Multi-metric comparison: View multiple time series side by side across different assets.
  • Flexible aggregation: Switch between average, min, max, and sum with a single selection.
  • Anomaly detection: KQL queries detect unusual patterns in your time series with no ML expertise required.
  • Statistical insights: View descriptive statistics and correlations.
  • Contextualization: Bring asset hierarchies, tag metadata, and definitions directly into the report for richer interpretation.
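
As one illustration of the anomaly-detection bullet above, a KQL query along the following lines can flag unusual points with no ML expertise required. This is a sketch: the `Telemetry` table and its `TagId`, `Timestamp`, and `Value` columns are hypothetical stand-ins for your own schema.

```kusto
// Hypothetical schema: Telemetry(TagId: string, Timestamp: datetime, Value: real).
// make-series builds a regular hourly series per tag; series_decompose_anomalies
// flags points that deviate from the decomposed trend/seasonal baseline.
Telemetry
| where Timestamp between (datetime(2026-01-01) .. datetime(2026-02-01))
| make-series AvgValue = avg(Value) default = 0.0
    on Timestamp step 1h by TagId
| extend (AnomalyFlags, AnomalyScore, Baseline) = series_decompose_anomalies(AvgValue, 1.5)
| mv-expand Timestamp to typeof(datetime), AvgValue to typeof(real), AnomalyFlags to typeof(long)
| where AnomalyFlags != 0   // keep only the anomalous points
```

The `1.5` threshold is the function's default sensitivity; raising it reduces the number of flagged points.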

The key components

This design brings together the complementary strengths of several Fabric components. Together, these components create a seamless, highly interactive experience over massive datasets without sacrificing performance.

  • Semantic model in DirectQuery mode delegates query execution to a KQL database, allowing you to work with data at scale and in real-time.
  • KQL database does the heavy lifting: ingesting data, storing billions of events, constructing time series, aggregating data, detecting anomalies, and processing queries fast.
  • Dynamic M query parameters pass user inputs from slicers and filters in the report to the Power Query engine, which constructs custom queries that are sent to the KQL database.
  • Power Query functions automatically calculate optimal time bins based on your selected date range. Viewing a year? It aggregates to daily bins. Zoom in to a single day? It shifts to minute-level granularity. This adaptive binning seamlessly handles billions of data points.
  • Field parameters allow report viewers to adjust chart layout to match their needs.
  • Time brush custom visual lets you select any timeline portion to zoom into. The visual outputs the selected time range, triggering automatic recalculation of bin sizes and retrieval of data for the selected time range.
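
To make the adaptive binning concrete, here is the same idea sketched directly in KQL. In the actual pattern, Power Query functions compute the bin size and splice it into the query; the table, column names, and threshold values below are illustrative.

```kusto
// Pick a bin size from the length of the selected window so the number of
// returned rows stays roughly constant whether you view an hour or a year.
let StartTime = datetime(2026-01-01);
let EndTime = datetime(2026-12-31);
let WindowLength = EndTime - StartTime;
let BinSize = case(
    WindowLength <= 2h, 1m,     // short windows: minute-level detail
    WindowLength <= 3d, 15m,
    WindowLength <= 60d, 1h,
    1d);                        // a year or more: daily bins
Telemetry
| where Timestamp between (StartTime .. EndTime)
| summarize AvgValue = avg(Value) by TagId, Timestamp = bin(Timestamp, BinSize)
```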

How it works

The following diagram shows how user interactions with the Power BI report trigger interactions between Power BI visuals, Power Query, and KQL databases.


  1. User configures initial report selections — The user selects assets/tags, defines a time range, and sets aggregation options using Power BI’s native filters and slicers. These selections determine the scope of data to analyze.
  2. Parameters flow to Power Query — Each slicer and filter is bound to a Dynamic M query parameter. When the user makes a selection, the Power BI report passes these values to the Power Query engine in the semantic model.
  3. Power Query constructs optimized KQL queries — Custom Power Query functions parse the incoming parameters and dynamically build KQL queries. This includes computing optimal time bin sizes based on the selected range (e.g., 1-minute bins for a 1-hour window, 1-hour bins for a 1-month window) to keep result sets manageable.
  4. Power Query submits queries to the KQL database — Custom KQL queries constructed in the previous step are submitted to the KQL database.
  5. KQL database executes queries — The query runs entirely within the KQL database, leveraging native time series functions like make-series, series_decompose_anomalies, and summarize. Filtering, aggregation, and anomaly detection all happen at the source.
  6. Aggregated results return to Power BI — Only the processed, summarized results are transferred to Power BI. For a million-point time series, this might be just a few hundred aggregated bins.
  7. Time series brush slicer renders the overview — The time series brush slicer displays data for the full time range, providing visual context and enabling the user to identify periods of interest.
  8. User brushes to select a narrower time range — By dragging across the time series brush slicer, the user selects a specific portion of the time range for detailed analysis. The slicer outputs the start and end timestamps of the selected range as a formatted text string.
  9. Selected range is passed to the Power Query engine — The time series brush slicer’s output is bound to a dynamic M query parameter, which triggers Power Query to rebuild relevant KQL queries with the narrower time window.
  10. Power Query constructs optimized KQL queries — Power Query parses the incoming parameters and dynamically builds KQL queries that reflect the selected time window. Because the time range is smaller, Power Query calculates a finer time granularity (smaller bins), returning more detailed data for the selected period while keeping the result set manageable.
  11. Power Query submits queries to the KQL database — Custom KQL queries constructed in the previous step are submitted to the KQL database.
  12. KQL database executes queries — Queries for the selected time range are executed on the KQL database, leveraging native time series functions as well as filtering and aggregation capabilities.
  13. Finer-grained data returns to Power BI — Summarized and enriched results for the selected time frame return from the KQL database to Power BI.
  14. Updated data is delivered to report visuals — Line charts, statistics tables, and other visuals receive the updated finer-grained data.
  15. Report visuals are rendered to enable detailed analysis — Power BI report visuals are re-rendered to show the detailed view of the selected time range, including any detected anomalies highlighted for investigation.

Because interactive analytics is iterative, this sequence repeats each time the user refines selections, changes tags, adjusts time ranges, or drills into anomalies.
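
Putting the steps together, the KQL that Power Query emits at steps 3–5 (and again, with a finer bin, at steps 10–12) might look like the following sketch. The `let` values stand in for Dynamic M query parameters bound to the report's slicers and time brush, and all table, column, and tag names are illustrative.

```kusto
// Parameter values spliced in by Power Query from the report selections.
let SelectedTags = dynamic(["pump-01", "pump-02"]);   // tag slicer
let StartTime = datetime(2026-03-01T08:00:00Z);       // time brush start
let EndTime = datetime(2026-03-01T09:00:00Z);         // time brush end
let BinSize = 1m;                                     // 1-hour window: minute bins
Telemetry
| where TagId in (SelectedTags)
| where Timestamp between (StartTime .. EndTime)
| make-series AvgValue = avg(Value) default = 0.0
    on Timestamp step BinSize by TagId
| extend (AnomalyFlags, AnomalyScore, Baseline) = series_decompose_anomalies(AvgValue)
| mv-expand Timestamp to typeof(datetime), AvgValue to typeof(real), AnomalyFlags to typeof(long)
```

Only the binned rows cross the wire back to Power BI — for this one-hour window, at most 60 rows per tag — which is what keeps the report responsive at scale.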

Get started in minutes

You can start using this end‑to‑end pattern in minutes with a convenient solution accelerator. Start with a pre-built report based on publicly available sample data. Then, follow the instructions to adapt the sample solution to work with your own data.

Note that the solution accelerator also includes a more advanced sample, which offers additional flexibility, but requires more effort and expertise to adapt to your needs.

