Microsoft Fabric Updates Blog

Fabric Espresso – Episodes about Performance Optimization & Compute Management in Microsoft Fabric

For the past year and a half, Product Managers from the Microsoft Fabric Product Group have been publishing a YouTube series featuring deep dives into Microsoft Fabric’s features. These episodes cover both technical functionality and real-world scenarios, providing insights into the product roadmap and the people driving innovation. With more than 80 episodes, the series is a valuable resource for anyone looking to understand and optimize their use of Microsoft Fabric.

All episodes are available at https://aka.ms/fabric-espresso, making it easy to explore the entire catalog. To enhance the learning experience, we are also launching a short series of blog posts that groups the episodes into thematic categories, with explanations and key takeaways for each.

Fabric Espresso: Performance Optimization & Compute Management

This week, we focus on Performance Optimization & Compute Management in Microsoft Fabric. Below is a curated list of episodes that explore various techniques for optimizing compute resources, tuning queries, and improving efficiency within Microsoft Fabric.

Key Episodes on Performance Optimization & Compute Management:

  1. High Concurrency Mode for Notebooks in Pipelines for Fabric Spark
    • Learn how shared, high-performance sessions cut down job runtimes dramatically.
  2. ML-based Autotune for Apache Spark Jobs in Microsoft Fabric – performance optimization for recurring jobs
    • Discover how predictive autotuning refines Spark configurations for optimized performance.
  3. Native execution engine for Apache Spark in Fabric
    • Explore the vectorized execution engine designed to boost query speed and efficiency.
  4. Spark Compute in Fabric Data Engineering and Data Science – Starter Pools vs Custom Pools Unveiled!
    • Compare resource provisioning options and understand the trade-offs between quick-start and tailored compute pools.
  5. Fabric Apache Spark Autotune and Run Series Job Analysis in Monitoring Hub
    • Gain insights into automated tuning techniques and job performance diagnostics.
  6. Fabric Apache Spark Jobs monitoring capabilities – Resource Usage
    • Understand how detailed monitoring helps identify performance bottlenecks and resource inefficiencies.
  7. Fabric Spark Compute Capabilities – Azure VMs and their impact on performance
    • See how leveraging Azure VM configurations can drive enhanced Spark performance.
  8. Performance best practices
    • Review essential strategies for optimizing query performance in your workspace.
  9. Microsoft Fabric Capacity Smoothing and Data Warehouse Throttling
    • Learn how capacity smoothing and throttling techniques ensure consistent performance under load.
  10. Caching in data warehousing
    • Discover how in-memory and SSD caching can significantly reduce query latency.
  11. Performance at Scale with Microsoft Fabric: Concurrency!
    • Explore how Fabric handles concurrency to maintain high performance even at scale.
  12. Performance at Scale with Microsoft Fabric: Query Optimizations!
    • Dive into techniques for optimizing query execution to boost efficiency.
  13. Performance at Scale with Microsoft Fabric: Query Processing!
    • Understand the underlying mechanics of query processing and performance tuning.
