Microsoft Fabric Updates Blog

Enhanced Monitoring for Spark High Concurrency Workloads in Microsoft Fabric

We’ve completed a set of improvements to the monitoring experience for Notebooks running in high concurrency mode, whether triggered manually or as part of a pipeline using the high concurrency execution model. These updates provide deeper visibility into Spark applications, improve observability across multiple Notebooks, and enable more efficient debugging and performance tuning.

New Enhancements in the Spark Application Monitoring Detail Page

We’ve introduced several key enhancements to the Spark application detail view to support high concurrency workloads more effectively:

Jobs Tab: Detailed Job-Level Insights

In the Jobs tab, you can now drill into individual Spark jobs executed under a high concurrency application.

Key improvements include:

  • Notebook Context: For applications running multiple Notebooks, the Notebook name is now shown alongside each job.
  • Code Snippet View: Click on the code snippet icon to view and copy the job-related code.
  • Filtering: Filter Spark jobs by Notebook to focus on one or more Notebooks within the session.
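Conceptually, the per-Notebook filtering amounts to grouping job records by their Notebook context. A minimal sketch in plain Python to illustrate the idea — the record shape and field names (`job_id`, `notebook`, `status`) are made up for illustration and are not the monitoring API:

```python
from collections import defaultdict

# Hypothetical job records, shaped roughly like rows in the Jobs tab;
# these field names are illustrative, not an actual Fabric API.
jobs = [
    {"job_id": 0, "notebook": "IngestOrders", "status": "Succeeded"},
    {"job_id": 1, "notebook": "TransformOrders", "status": "Running"},
    {"job_id": 2, "notebook": "IngestOrders", "status": "Succeeded"},
]

def filter_jobs(jobs, notebooks):
    """Keep only jobs that ran under one of the selected Notebooks."""
    selected = set(notebooks)
    return [j for j in jobs if j["notebook"] in selected]

def jobs_by_notebook(jobs):
    """Group job IDs by the Notebook each job ran under."""
    grouped = defaultdict(list)
    for j in jobs:
        grouped[j["notebook"]].append(j["job_id"])
    return dict(grouped)
```

Selecting `["IngestOrders"]` here returns only jobs 0 and 2 — the same narrowing the Notebook filter performs in the UI.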

Logs Tab: Notebook-Aware Logging

To support easier debugging in high concurrency Spark sessions:

  • Notebook ID Prefixing: Each log entry now includes the Notebook ID prefix, making it easier to associate logs with specific Notebooks.
  • Notebook Filtering: Use the filters to view logs by Notebook, allowing more targeted inspection of log output across collaborative or parallel runs.
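The Notebook ID prefix also makes exported logs easy to slice outside the UI. A small sketch, assuming each line starts with the Notebook ID in square brackets — the exact prefix format Fabric emits is an assumption here, so adjust the pattern to match your logs:

```python
import re
from collections import defaultdict

# Assumed line shape: "[<notebook-id>] <message>". The real prefix
# format may differ; this pattern is only an illustration.
LINE = re.compile(r"^\[(?P<nb>[^\]]+)\]\s+(?P<msg>.*)$")

def logs_by_notebook(lines):
    """Bucket log messages by the Notebook ID prefix on each line."""
    buckets = defaultdict(list)
    for line in lines:
        m = LINE.match(line)
        if m:  # lines without a recognized prefix are skipped
            buckets[m.group("nb")].append(m.group("msg"))
    return dict(buckets)
```

This mirrors what the Logs tab filter does interactively: isolating one Notebook's output from an interleaved, parallel session.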

Item Snapshots Tab: Hierarchical Notebook View

The Item Snapshots tab introduces a hierarchical tree view of all Notebooks participating in a shared high concurrency Spark session:

  • Browse All Notebooks: View snapshots of both completed and in-progress Notebook runs within the shared Spark session.
  • Snapshot Details for each Notebook:
    • Code at time of submission.
    • Execution status per cell.
    • Output for each cell.
    • Input Parameters for the Notebook.
  • Pipeline Integration: If the Spark application is part of a pipeline, you’ll also see the related pipeline and Spark activity displayed for easier traceability.
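The hierarchy this tab presents can be pictured as a session node with one child per Notebook run, each carrying its cell-level statuses. A toy rendering in plain Python — the session name, Notebook names, and statuses below are invented for illustration:

```python
def render_session_tree(session, notebooks):
    """Render a shared Spark session and its Notebook snapshots
    as an indented tree, one line per Notebook and per cell."""
    lines = [session]
    for nb in notebooks:
        lines.append(f"  └─ {nb['name']} [{nb['status']}]")
        for cell_status in nb.get("cells", []):
            lines.append(f"       · cell: {cell_status}")
    return "\n".join(lines)
```

Printing the result for a session with one completed Notebook yields the session at the root and the Notebook (with its per-cell statuses) nested beneath it, much like the tree view in the tab.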

Start Exploring Today

These enhancements bring multi-Notebook awareness to the monitoring experience, so you can track high concurrency Spark workloads with granular, per-Notebook insights.

For more information, refer to the full documentation: Apache Spark application detail monitoring – Microsoft Fabric.
