
Supercharge your workloads: write-optimized default Spark configurations in Microsoft Fabric

Introducing predefined Spark resource profiles in Microsoft Fabric—making it easier than ever for data engineers to optimize their compute configurations based on workload needs. Whether you’re handling read-heavy, write-heavy, or mixed workloads, Fabric now provides a property bag-based approach that streamlines Spark tuning with just a simple setting.

With these new configurations, users can effortlessly define workspace defaults, eliminating the need for granular adjustments and ensuring peak performance for every workload type.

Available profiles

| Profile | Use case | Configurable setting |
| --- | --- | --- |
| ReadHeavyForSpark | Optimized for Spark workloads with frequent reads | spark.fabric.resourceProfile=readHeavyForSpark |
| ReadHeavyForPBI | Optimized for Power BI queries on Delta tables | spark.fabric.resourceProfile=readHeavyForPBI |
| WriteHeavy | Optimized for high-frequency data ingestion & writes | spark.fabric.resourceProfile=writeHeavy |
| Custom | Fully user-defined configuration | spark.fabric.resourceProfile=custom |
Resource Profile Configurations
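
A profile can also be selected for a single session at runtime. Here is a minimal sketch for a Fabric notebook, assuming the standard PySpark session object `spark` that Fabric notebooks provide:

```python
# Set the resource profile for the current Spark session only;
# workspace-level defaults are left unchanged.
spark.conf.set("spark.fabric.resourceProfile", "readHeavyForSpark")

# Confirm which profile the session is now using.
print(spark.conf.get("spark.fabric.resourceProfile"))
```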

Benefits of resource profiles

  • Performance optimization by default – no need for manual tuning; workloads run efficiently out of the box.
  • Flexibility – switch between profiles based on workload needs.
  • Fine-tuned resource allocation – ensures the best performance for reads, writes, and mixed workloads.


New Fabric Workspaces – optimized for write-heavy workloads

With the latest update, newly created Fabric workspaces now default to the writeHeavy profile. This ensures that workloads designed for data ingestion and large-scale ETL processes benefit from optimal performance right from the start.

✔ Performance by default – faster ingestion without extra tuning

✔ Optimized for Data Engineering jobs – ideal for large-scale data ingestion scenarios

Why this matters

The WriteHeavy profile is specifically optimized for workloads where large volumes of data need to be written efficiently. This is perfect for teams working on:

  • Enterprise ETL pipelines
  • Data lake ingestion workflows
  • Streaming & batch data processing

Takeaway: New workspaces come pre-tuned for write-heavy scenarios, but you can adjust the configuration at any time to fit your workloads, either through Spark properties in an Environment or by setting the properties at runtime, as in the sketch below.
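
As a sketch of the runtime route, a session that starts on the writeHeavy workspace default could opt into the custom profile and tune properties itself. The specific override below is an illustrative assumption, not a value tied to any predefined profile:

```python
# Take over tuning manually by switching to the custom profile.
spark.conf.set("spark.fabric.resourceProfile", "custom")

# Illustrative session-level override (assumed value; adjust per workload):
# fewer shuffle partitions for a small ingestion job.
spark.conf.set("spark.sql.shuffle.partitions", "200")
```

Setting the same Spark properties in an Environment achieves this persistently, so attached notebooks and jobs pick up the profile without per-session code.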

To learn more about resource profile configurations, refer to the What is Apache Spark compute in Microsoft Fabric? documentation and join the conversation on the Fabric Community.
