Introducing Autoscale Billing for Spark in Microsoft Fabric
We are introducing Autoscale Billing for Spark in Microsoft Fabric, a new billing model designed to offer greater flexibility and cost efficiency for Spark workloads. When this option is enabled, Spark workloads no longer draw directly from the Fabric capacity it is enabled on; instead, they run alongside your existing capacity (F2 or higher) and are billed on a serverless, pay-as-you-go basis, similar to how Spark billing works in Azure Synapse.
This addition does not replace the existing Fabric capacity model but rather complements it, giving organizations more options and control over how they allocate compute resources for Spark workloads.

Striking the right balance: Autoscale Billing vs. Capacity Model
Fabric’s capacity-based model provides predictable costs and a shared pool of resources for multiple workloads, ensuring efficiency and ease of management. With Autoscale Billing, we introduce a new option for teams that need dedicated, on-demand scaling for Spark jobs while still maintaining their base capacity for other workloads.
| Feature | Capacity Model | Autoscale Billing for Spark |
| --- | --- | --- |
| Billing | Fixed cost per capacity tier | Pay-as-you-go for Spark jobs |
| Scaling | Capacity shared across workloads | Spark scales independently |
| Resource contention | Possible contention between workloads | Dedicated resources for Spark |
| Use case | Best for predictable workloads | Best for dynamic, bursty Spark jobs |
By using both models strategically, organizations can optimize cost and performance—running predictable workloads on base capacity while leveraging Autoscale Billing for unpredictable, compute-intensive Spark jobs.
Key benefits of Autoscale Billing
✅ Cost efficiency – Pay only for the duration of Spark jobs, ensuring optimal compute spend.
✅ Independent scaling – Spark processing scales separately without impacting other Fabric workloads.
✅ Quota management integration – Request additional compute capacity as needed via Azure Quota Management.
✅ Dedicated resources for Spark – Eliminate contention with other workloads in shared capacity.
❝ Before autoscale we were regularly having to reallocate capacities due to dev teams running experiments and running out of capacity.
Autoscaling has arguably been the most impactful Fabric feature for us—providing simplified and more flexible cost management, enabling us to share more and manage centrally without daily interference, and delivering a significant operational benefit. ❞
Philip Withey
Head of Architecture, LSEG Microsoft Partnership, London Stock Exchange Group
How it works: managing Spark workloads with Autoscale Billing
1. Define usage limits for budget control
Fabric Capacity Admins can set a maximum CU limit for Autoscale Billing, ensuring costs align with budget and workload needs.
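To make the budget relationship concrete, here is a small illustrative sketch of how a Capacity Admin might back into a max CU limit from a monthly budget. The per-CU-hour rate and the job-hours estimate below are placeholder assumptions, not official pricing; check the Microsoft Fabric pricing page for your region's actual rate.

```python
# Illustrative budget math only; the rate below is a hypothetical
# placeholder, not Microsoft Fabric's published price.

def max_cu_limit(monthly_budget_usd: float,
                 cu_hour_rate_usd: float,
                 est_job_hours_per_month: float) -> float:
    """Largest sustained CU level the budget supports, assuming Spark
    jobs run at that level for the estimated number of hours."""
    return monthly_budget_usd / (cu_hour_rate_usd * est_job_hours_per_month)

# e.g. a $2,000 monthly budget at a hypothetical $0.10 per CU-hour,
# expecting roughly 200 job-hours of Spark per month:
limit = max_cu_limit(2000, 0.10, 200)  # -> 100.0 CUs
```

Because billing is pay-as-you-go, the limit caps worst-case spend rather than committing you to it: quiet months simply cost less.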
2. Transparent resource allocation
Once enabled:
- Spark workloads will no longer consume shared Fabric capacity.
- Spark jobs will not burst or smooth against base capacity; once the max CU limit is reached, batch jobs queue and interactive queries are throttled.
- Users can track Spark job usage separately in the Fabric Capacity Metrics App.
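The queue-versus-throttle behavior above can be sketched as a small admission-control simulation. This is an illustration of the described semantics, not Fabric's actual scheduler: we assume "queue" means deferring batch jobs until CUs free up, and "throttle" means rejecting interactive queries while usage sits at the max CU limit.

```python
# Minimal sketch of the admission behavior described above.
# Assumption: batch jobs wait in a FIFO queue; interactive queries
# are rejected outright when the max CU limit would be exceeded.
from collections import deque

class AutoscaleAdmission:
    def __init__(self, max_cu: int):
        self.max_cu = max_cu          # admin-configured max CU limit
        self.in_use = 0               # CUs held by running jobs
        self.batch_queue = deque()    # (job_id, cu_needed) waiting to run

    def submit(self, job_id: str, cu_needed: int, interactive: bool = False) -> str:
        if self.in_use + cu_needed <= self.max_cu:
            self.in_use += cu_needed
            return "running"
        if interactive:
            return "throttled"        # interactive queries are not queued
        self.batch_queue.append((job_id, cu_needed))
        return "queued"               # batch jobs wait for free CUs

    def finish(self, cu_released: int) -> None:
        self.in_use -= cu_released
        # Admit queued batch jobs that now fit under the limit.
        while self.batch_queue and self.in_use + self.batch_queue[0][1] <= self.max_cu:
            _, cu = self.batch_queue.popleft()
            self.in_use += cu

adm = AutoscaleAdmission(max_cu=100)
adm.submit("nightly-etl", 80)                    # "running"
adm.submit("adhoc-query", 40, interactive=True)  # "throttled"
adm.submit("backfill", 40)                       # "queued"
```

The key point is that hitting the limit never spills load back onto the base capacity; Spark work simply waits (batch) or is turned away (interactive) within its own pool.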
3. Monitor and optimize spend
Billing remains purely pay-as-you-go, with full visibility into compute expenses through the Cost analysis tab in the Azure portal. Users can filter by the meter ‘Autoscale for Spark Capacity Usage CU’ to monitor spending in real time.
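For teams that automate cost reporting, the same meter filter can be expressed as an Azure Cost Management query request body. The sketch below only builds the JSON payload; the dimension name (`"Meter"`) and the scope/API wiring you would send it to are assumptions to verify against the Cost Management Query API reference, while the meter name itself comes from this post.

```python
# Sketch: request body for the Azure Cost Management "query" API,
# grouping daily actual cost and filtering to the Spark autoscale meter.
# Assumption: the dimension is named "Meter"; confirm against the
# Cost Management Query API docs before using.

def build_autoscale_cost_query(
    meter_name: str = "Autoscale for Spark Capacity Usage CU",
) -> dict:
    """Return a Cost Management query body for month-to-date daily cost,
    filtered to the given meter."""
    return {
        "type": "ActualCost",
        "timeframe": "MonthToDate",
        "dataset": {
            "granularity": "Daily",
            "aggregation": {
                "totalCost": {"name": "Cost", "function": "Sum"},
            },
            "filter": {
                "dimensions": {
                    "name": "Meter",
                    "operator": "In",
                    "values": [meter_name],
                },
            },
        },
    }
```

A payload like this would be POSTed to the Cost Management query endpoint for your subscription or resource-group scope, which is the same data the portal's Cost analysis tab renders.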
New Autoscale Billing page in the Capacity Metrics App
We are also introducing a dedicated Autoscale Billing page within the Capacity Metrics App, which allows users to:
- Track Spark job usage across workspaces.
- Monitor overall usage trends at different points in time.
- Drill down into specific operations to analyze job run duration and resource allocation.

Quota management for enterprise users
Enterprise users needing additional compute resources can request a quota increase through Azure Quota Management. Once approved, users can configure their autoscale limits directly from the Fabric Capacity Settings page, ensuring they have sufficient resources for large-scale Data Engineering workloads.

Note: This feature is currently being rolled out, starting with the UK South region. It will be available across all regions that support Data Engineering workloads by April 3rd.
Get started with Autoscale Billing for Spark
Learn more: Visit our documentation for a deep dive into Autoscale Billing.
Understand pricing: Use the Microsoft Fabric Pricing Calculator to estimate costs.
We’re excited to bring more flexibility and control to Spark workloads with Autoscale Billing, and we look forward to your feedback during the preview period.