Enhanced Monitoring for Spark High Concurrency Workloads in Microsoft Fabric
We’ve completed a set of improvements to the monitoring experience for Notebooks running in high concurrency mode, whether started manually or triggered from a pipeline. These updates provide deeper visibility into Spark applications, improve observability across the multiple Notebooks that share a session, and enable more efficient debugging and performance tuning.
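For context, a common way multiple Notebooks come to share one high concurrency Spark session is by fanning out child runs from a parent Notebook. Below is a minimal sketch using notebookutils, the utility built into Fabric Notebooks; the Notebook names and parameters are hypothetical placeholders:

```python
# Run from a parent Fabric Notebook attached to a high concurrency session.
# The child Notebook names ("PrepareData", "TrainModel") are hypothetical.

# Reference-run a single child Notebook in the same Spark session, passing
# input parameters (these later surface in the monitoring detail page).
result = notebookutils.notebook.run(
    "PrepareData",                                      # child Notebook name
    90,                                                 # timeout in seconds
    {"run_date": "2024-06-01", "mode": "incremental"},  # input parameters
)

# Fan out several Notebooks into the shared session; each one appears
# separately in the monitoring views described below.
notebookutils.notebook.runMultiple(["PrepareData", "TrainModel"])
```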
New Enhancements in the Spark Application Monitoring Detail Page
We’ve introduced several key enhancements to the Spark application detail view to support high concurrency workloads more effectively:
Jobs Tab: Detailed Job-Level Insights
In the Jobs tab, you can now drill into individual Spark jobs executed under a high concurrency application.
Key improvements include:
- Notebook Context: For applications running multiple Notebooks, the Notebook name is now shown alongside each job.
- Code Snippet View: Click the code snippet icon to view and copy the code that produced the job.
- Filtering: Filter Spark jobs by Notebook to focus on one or more Notebooks within the session. (A short job-labeling sketch follows this list.)
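The Notebook name shown next to each job is supplied by Fabric, but you can make individual jobs easier to pick out in the Jobs tab by labeling them from your own code. Here is a minimal sketch using PySpark's standard job-description API; the table name and label text are illustrative:

```python
# Standard PySpark API: attach a human-readable description to the jobs
# triggered by the next actions, so they are easy to spot in the Jobs tab.
spark.sparkContext.setJobDescription("Aggregate daily sales per region")

daily_sales = (
    spark.read.table("sales")          # assumes a 'sales' table exists
         .groupBy("region", "day")
         .agg({"amount": "sum"})
)
daily_sales.write.mode("overwrite").saveAsTable("daily_sales_by_region")

# Clear the description so later jobs are not mislabeled.
spark.sparkContext.setJobDescription(None)
```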
Logs Tab: Notebook-Aware Logging
To support easier debugging in high concurrency Spark sessions:
- Notebook ID Prefixing: Each log entry is now prefixed with the Notebook ID, making it easier to associate log output with a specific Notebook.
- Notebook Filtering: Use the filters to view logs by Notebook for more targeted inspection of log output across collaborative or parallel runs. (A logging sketch follows this list.)
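The Notebook ID prefix is added by the platform automatically; emitting structured messages from your own code makes the filtered log views even more useful. A minimal sketch with Python's standard logging module; the logger name, table name, and message fields are illustrative:

```python
import logging

# Standard Python logging; output lands in the driver logs shown in the
# Logs tab, where the platform prefixes each entry with the ID of the
# Notebook that emitted it.
logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("sales_pipeline")  # illustrative logger name

logger.info("Starting incremental load, run_date=%s", "2024-06-01")
try:
    row_count = spark.read.table("sales").count()  # assumes a 'sales' table
    logger.info("Loaded %d rows from sales", row_count)
except Exception:
    logger.exception("Incremental load failed")
    raise
```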
Item Snapshots Tab: Hierarchical Notebook View
The Item Snapshots tab introduces a hierarchical tree view of all Notebooks participating in a shared high concurrency Spark session:
- Browse All Notebooks: View snapshots of both completed and in-progress Notebook runs within the shared Spark session.
- Snapshot Details for each Notebook:
  - Code at the time of submission.
  - Execution status per cell.
  - Output for each cell.
  - Input Parameters for the Notebook (illustrated in the sketch after this list).
- Pipeline Integration: If the Spark application is part of a pipeline, you’ll also see the related pipeline and Spark activity displayed for easier traceability.
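For the Input Parameters shown in a snapshot to be meaningful, the child Notebook declares defaults that callers can override. A minimal sketch, assuming Fabric's parameter-cell convention (a cell marked as a parameter cell from the cell toolbar); the parameter names are illustrative:

```python
# Parameter cell of a child Notebook (marked as a parameter cell via the
# cell toolbar). Values passed from notebookutils.notebook.run(...) or a
# pipeline Notebook activity override these defaults, and the effective
# values appear under Input Parameters in the Item Snapshots tab.
run_date = "2024-01-01"   # illustrative default, overridden by the caller
mode = "full"             # illustrative default, overridden by the caller
```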
Start Exploring Today
These enhancements bring multi-Notebook awareness to the monitoring experience, letting you observe high concurrency Spark workloads with granular, per-Notebook insight.
For more information, refer to the full documentation: Apache Spark application detail monitoring – Microsoft Fabric.