Microsoft Fabric Updates Blog

Fabric Spark Monitoring APIs (Generally Available)

Fabric Spark Monitoring APIs bring powerful new observability capabilities and enhanced automation to Spark workloads within Microsoft Fabric!


Based on your feedback, we've continued refining the APIs to meet evolving user needs. Here's what's new:

Spark Advisor API – Provides recommendations and skew diagnostics to help identify bottlenecks and optimize performance.

Resource Usage API – Offers granular metrics on vCore allocation and utilization for executors within a Spark application.

Advanced Filtering Support – The workspace-level API now supports filtering capabilities to help users narrow down applications by:

  • Time range
  • Submitter
  • Application state (e.g., Succeeded, Failed, Running), and more!

This enhancement allows for more efficient analysis and targeted troubleshooting in large-scale environments.
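As a sketch of how these filters might be combined in a request, the snippet below builds a query string against a hypothetical workspace-level endpoint. The base URL and parameter names (`submitter`, `state`, `startTimeFrom`, `startTimeTo`) are assumptions for illustration, not the documented contract; consult the linked documentation for the actual API surface.

```python
from urllib.parse import urlencode

# Hypothetical base URL -- check the official Spark monitoring API
# documentation for the real endpoint path.
BASE = "https://api.fabric.microsoft.com/v1/workspaces/{workspace_id}/spark/applications"

def build_filter_url(workspace_id, submitter=None, state=None,
                     start_from=None, start_to=None):
    """Compose a workspace-level query URL that narrows results by
    submitter, application state, and time range (parameter names
    are illustrative assumptions)."""
    params = {
        "submitter": submitter,        # e.g. a user principal name
        "state": state,                # e.g. "Succeeded", "Failed", "Running"
        "startTimeFrom": start_from,   # ISO-8601 timestamps
        "startTimeTo": start_to,
    }
    # Drop any filters the caller did not supply.
    query = urlencode({k: v for k, v in params.items() if v is not None})
    url = BASE.format(workspace_id=workspace_id)
    return f"{url}?{query}" if query else url

url = build_filter_url("my-workspace", state="Failed",
                       start_from="2026-04-01T00:00:00Z")
```

Issuing the resulting URL with a bearer token would then return only the failed applications submitted after the given timestamp, rather than every application in the workspace.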

New Application-Level Properties for Deeper Insight

To support more transparent resource planning and monitoring, the following properties have been added to the Spark Monitoring APIs. These new fields help teams better understand and optimize their Spark resource allocations.

  • Driver Cores & Memory
  • Executor Cores & Memory
  • Number of Executors
  • Dynamic Allocation Enabled
  • Dynamic Allocation Max Executors

Empowering Spark Observability in Microsoft Fabric

Fabric Spark Monitoring APIs are now production-ready, providing a comprehensive solution for monitoring, diagnosing, and optimizing Spark workloads in a scalable, automated fashion.

We’re grateful for the community feedback that helped shape this release and remain committed to continuous improvement. Try out the GA APIs today and stay tuned for even more innovations!

For more information, check out the Monitor Spark applications using Spark monitoring APIs documentation.

April 21, 2026 by Hasan Abo Shally

Something fundamental is changing in how developers interact with data platforms. Not a feature update, not a UI refresh, but a shift in the interface itself.

April 20, 2026 by Penny Zhou

Coordinating dbt runs with upstream ingestion and downstream consumption often requires complex solutions and different tools. You can now add a dbt job activity (Preview) directly to your Fabric pipelines. This lets you orchestrate dbt transformations alongside other pipeline activities, so you can build end-to-end data workflows without switching tools. Why this matters Run dbt … Continue reading “Orchestrate dbt jobs activity in your Fabric pipelines (Preview)”