Supercharge your workloads: write-optimized default Spark configurations in Microsoft Fabric
Introducing predefined Spark resource profiles in Microsoft Fabric—making it easier than ever for data engineers to optimize their compute configurations based on workload needs. Whether you’re handling read-heavy, write-heavy, or mixed workloads, Fabric now provides a property bag-based approach that streamlines Spark tuning with just a simple setting.
With these new configurations, users can effortlessly define workspace defaults, eliminating the need for granular adjustments and ensuring peak performance for every workload type.
Available profiles
Profile | Use case | Configurable setting |
---|---|---|
ReadHeavyForSpark | Optimized for Spark workloads with frequent reads | spark.fabric.resourceProfile=readHeavyForSpark |
ReadHeavyForPBI | Optimized for Power BI queries on Delta tables | spark.fabric.resourceProfile=readHeavyForPBI |
WriteHeavy | Optimized for high-frequency data ingestion & writes | spark.fabric.resourceProfile=writeHeavy |
Custom | Fully user-defined configuration | spark.fabric.resourceProfile=custom |

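For example, to apply one of the profiles above to the current session, the property can be set directly from a notebook. The snippet below is a minimal sketch assuming a Fabric PySpark notebook; the only Fabric-specific piece is the spark.fabric.resourceProfile property from the table, and everything else is the standard Spark configuration API.

```python
from pyspark.sql import SparkSession

# Minimal sketch: reading and switching the resource profile at runtime.
# In a Fabric notebook `spark` already exists; getOrCreate() simply reuses it.
spark = SparkSession.builder.getOrCreate()

# Inspect the profile the session is currently running with
# (new workspaces are expected to report writeHeavy by default).
current_profile = spark.conf.get("spark.fabric.resourceProfile", "not set")
print(f"Active resource profile: {current_profile}")

# Switch this session to a read-optimized profile, e.g. ahead of Power BI-style queries.
spark.conf.set("spark.fabric.resourceProfile", "readHeavyForPBI")
```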
Benefits of resource profiles
✔ Performance optimization by default – No need for manual tuning; workloads run efficiently out of the box.
✔ Flexibility – Switch between profiles based on workload needs.
✔ Fine-tuned resource allocation – Ensures the best performance for reads, writes, and mixed workloads.
New Fabric Workspaces – optimized for write-heavy workloads
With the latest update, newly created Fabric workspaces now default to the writeHeavy profile. This ensures that workloads designed for data ingestion and large-scale ETL processes benefit from optimal performance right from the start.
✔ Performance by default – faster ingestion without extra tuning
✔ Optimized for Data Engineering jobs – ideal for large-scale data ingestion scenarios
Why this matters
The WriteHeavy profile is specifically optimized for workloads where large volumes of data need to be written efficiently. This is perfect for teams working on:
- Enterprise ETL pipelines
- Data lake ingestion workflows
- Streaming & batch data processing
Takeaway: New workspaces come pre-tuned for write-heavy scenarios, but users can adjust the profile at any time to fit their workloads, either through Spark properties in Environments or by setting it at runtime (see the sketch below).
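As a rough illustration of that pattern, the sketch below keeps the writeHeavy default in place for an ingestion step and then switches the same session to a read-optimized profile before running reporting queries. The file path, table name, and column are placeholders invented for the example, and the post does not spell out mid-session switching semantics in detail, so treat this as a sketch rather than a recipe.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Ingestion stage: rely on the writeHeavy default that new workspaces ship with.
# `Files/landing/events` and `raw_events` are placeholder names for illustration.
raw_events = spark.read.json("Files/landing/events")
raw_events.write.mode("append").format("delta").saveAsTable("raw_events")

# Reporting stage: switch the same session to a read-optimized profile
# before running the aggregations that back dashboards.
spark.conf.set("spark.fabric.resourceProfile", "readHeavyForPBI")
daily_counts = spark.table("raw_events").groupBy("event_date").count()
daily_counts.show()
```

Whether a given team is better served by pinning a profile in an Environment or by switching it at runtime like this depends on how mixed the workload is; the point is simply that both options are available.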
To learn more about resource profile configurations, refer to the What is Apache Spark compute in Microsoft Fabric? documentation and join the conversation on the Fabric Community.