
Supercharge your workloads: write-optimized default Spark configurations in Microsoft Fabric

Introducing predefined Spark resource profiles in Microsoft Fabric, making it easier than ever for data engineers to optimize their compute configurations based on workload needs. Whether you're handling read-heavy, write-heavy, or mixed workloads, Fabric now provides a property-bag-based approach that streamlines Spark tuning with a single setting.

With these new configurations, users can effortlessly define workspace defaults, eliminating the need for granular adjustments and ensuring peak performance for every workload type.

Available profiles

| Profile | Use case | Configurable setting |
| --- | --- | --- |
| ReadHeavyForSpark | Optimized for Spark workloads with frequent reads | spark.fabric.resourceProfile=readHeavyForSpark |
| ReadHeavyForPBI | Optimized for Power BI queries on Delta tables | spark.fabric.resourceProfile=readHeavyForPBI |
| WriteHeavy | Optimized for high-frequency data ingestion & writes | spark.fabric.resourceProfile=writeHeavy |
| Custom | Fully user-defined configuration | spark.fabric.resourceProfile=custom |
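
Because the profile is exposed as an ordinary Spark property, a quick way to experiment is to read and set it from a notebook. Here is a minimal sketch, assuming the `spark` session object that Fabric notebooks provide:

```python
# Minimal sketch: reading and switching the resource profile from a Fabric
# PySpark notebook, where `spark` is the session object Fabric provides.

# Check which profile this session inherited from the workspace default.
current = spark.conf.get("spark.fabric.resourceProfile")
print(f"Active resource profile: {current}")

# Switch to the read-optimized profile for a query-heavy stage of the job.
spark.conf.set("spark.fabric.resourceProfile", "readHeavyForSpark")
```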

Benefits of resource profiles

  • Performance optimization by default – no need for manual tuning; workloads run efficiently out of the box.
  • Flexibility – switch between profiles based on workload needs.
  • Fine-tuned resource allocation – ensures the best performance for reads, writes, and mixed workloads.


New Fabric Workspaces – optimized for write-heavy workloads

With the latest update, newly created Fabric workspaces now default to the writeHeavy profile. This ensures that workloads designed for data ingestion and large-scale ETL processes benefit from optimal performance right from the start.

✔ Performance by default – faster ingestion without extra tuning

✔ Optimized for Data Engineering jobs – ideal for large-scale data ingestion scenarios
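
You can confirm that a new workspace picked up this default with a one-line check in a notebook session, for example:

```python
# Sketch: confirm a new workspace's default from a Fabric notebook session.
# Newly created workspaces should report "writeHeavy" unless overridden.
print(spark.conf.get("spark.fabric.resourceProfile"))
```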

Why this matters

The WriteHeavy profile is specifically optimized for workloads where large volumes of data need to be written efficiently. This is perfect for teams working on:

  • Enterprise ETL pipelines
  • Data lake ingestion workflows
  • Streaming & batch data processing

Takeaway: New workspaces come pre-tuned for write-heavy scenarios, but users can adjust the configuration at any time to fit their workloads, either through Spark properties in an Environment or by setting them at runtime, as sketched below.
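
For full control, the custom profile lets you pin individual Spark properties yourself. The sketch below shows what a runtime override might look like; the two properties shown (V-Order and Delta optimized writes) are illustrative assumptions, not a list of what each profile actually tunes:

```python
# Sketch: opting out of the predefined profiles and tuning properties
# directly. The property names below are illustrative examples only;
# verify them against the current Fabric documentation.
spark.conf.set("spark.fabric.resourceProfile", "custom")

# Example knobs a read-oriented custom setup might adjust (assumed names
# from Fabric's Parquet/Delta settings).
spark.conf.set("spark.sql.parquet.vorder.enabled", "true")
spark.conf.set("spark.databricks.delta.optimizeWrite.enabled", "false")
```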

To learn more about resource profile configurations, refer to the What is Apache Spark compute in Microsoft Fabric? documentation and join the conversation in the Fabric Community.
