We’re thrilled to announce the general availability (GA) of Autoscale Billing for Apache Spark in Microsoft Fabric — a serverless billing model designed to offer greater flexibility, transparency, and cost efficiency for running Spark workloads at scale.
With this model now fully supported, Spark jobs run independently of your Fabric capacity and are billed on a pay-as-you-go basis, similar to how Spark billing works in Azure Synapse. This gives teams the freedom to scale compute as needed without impacting other workloads running on your shared Fabric capacity.
Autoscale Billing complements the existing Fabric capacity model rather than replacing it, giving organizations the power to choose how they want to allocate compute for Spark workloads — whether predictable or dynamic.
Why Autoscale Billing?
Fabric’s capacity-based model offers predictable costs and a shared compute pool for a variety of workloads like Power BI, Dataflows, and Notebooks. Autoscale Billing, on the other hand, is designed for dynamic, bursty Spark scenarios where dedicated compute with elastic scaling is critical.
By combining both models strategically, teams can:
Keep predictable workloads on base capacity for fixed cost and simplicity.
Use Autoscale Billing for bursty or exploratory Spark workloads with job-based compute control.
Key benefits of Autoscale Billing
✅ Cost efficiency – Pay only for the time your Spark jobs actually run; no idle costs.
✅ Dedicated compute for Spark – Avoid resource contention with other Fabric workloads.
✅ Quota-aware controls – Monitor and manage quota through Azure Quota Management when configuring max CU limits.
New with GA: Subscription-level quota visibility
With general availability, Fabric admins get enhanced visibility into CU (Capacity Unit) quota usage across the subscription. When configuring Autoscale Billing, you'll see exactly how much of your quota is being consumed and whether you're nearing the limit.
[Screenshot: Autoscale Billing controls on the Capacity Settings page]
This helps admins decide if and when to request additional quota, ensuring Spark jobs run without interruption.
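If you want to check quota programmatically rather than in the portal, the sketch below lists quotas at subscription scope through the Azure Quota REST API (the Microsoft.Quota resource provider). Note that the Microsoft.Fabric scope path, region, and api-version are assumptions on our part, not details confirmed by this post; treat the portal's quota page as the source of truth.

```python
# Hedged sketch: list quotas at a subscription/location scope via the Azure
# Quota REST API. The Microsoft.Fabric scope path and api-version are
# assumptions; adjust them to match what your Quotas blade shows.
import requests
from azure.identity import DefaultAzureCredential

SUBSCRIPTION_ID = "<your-subscription-id>"  # placeholder
REGION = "westus3"                          # placeholder

# Assumed scope format for Fabric capacity quota:
scope = (f"subscriptions/{SUBSCRIPTION_ID}"
         f"/providers/Microsoft.Fabric/locations/{REGION}")
url = (f"https://management.azure.com/{scope}"
       f"/providers/Microsoft.Quota/quotas?api-version=2023-02-01")

token = DefaultAzureCredential().get_token(
    "https://management.azure.com/.default").token

resp = requests.get(url, headers={"Authorization": f"Bearer {token}"})
resp.raise_for_status()
for quota in resp.json().get("value", []):
    props = quota.get("properties", {})
    print(quota.get("name"), props.get("limit"), props.get("unit"))
```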
How it works: Managing Spark workloads with Autoscale Billing
1. Turn it on and set your limits – From the Fabric Capacity Settings page, toggle on Autoscale Billing for Spark and configure your max CU limit. This defines the upper bound of how much Spark compute can be consumed across all workspaces using this capacity.
2. Dedicated Spark compute – Once enabled, Spark jobs:
Do not use your shared Fabric capacity.
Do not have bursting or smoothing applied.
Will queue (batch jobs) or throttle (interactive jobs) once the max CU limit is reached.
3. Track usage and cost easily – All Spark jobs using Autoscale Billing show up in:
Azure Cost Analysis, under the meter Autoscale for Spark Capacity Usage CU (see the query sketch after this list).
Fabric Capacity Metrics App, now updated with a dedicated Autoscale Billing page to track job activity, consumption trends, and resource allocation.
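As an illustration, here is a minimal sketch that pulls month-to-date cost for that meter with the Azure Cost Management Query REST API. The meter string comes from this post; the dimension name "Meter" and the api-version are our assumptions, so verify both against your own Cost Analysis view.

```python
# Minimal sketch: month-to-date cost for the Autoscale Billing meter via the
# Azure Cost Management Query REST API. Dimension name "Meter" and the
# api-version are assumptions; verify against your Cost Analysis view.
import requests
from azure.identity import DefaultAzureCredential

SUBSCRIPTION_ID = "<your-subscription-id>"  # placeholder
scope = f"subscriptions/{SUBSCRIPTION_ID}"
url = (f"https://management.azure.com/{scope}"
       f"/providers/Microsoft.CostManagement/query?api-version=2023-03-01")

token = DefaultAzureCredential().get_token(
    "https://management.azure.com/.default").token

body = {
    "type": "ActualCost",
    "timeframe": "MonthToDate",
    "dataset": {
        "granularity": "Daily",
        "aggregation": {"totalCost": {"name": "Cost", "function": "Sum"}},
        "filter": {
            "dimensions": {
                "name": "Meter",  # assumed dimension name for meter filtering
                "operator": "In",
                "values": ["Autoscale for Spark Capacity Usage CU"],
            }
        },
    },
}

resp = requests.post(url, json=body,
                     headers={"Authorization": f"Bearer {token}"})
resp.raise_for_status()
for row in resp.json()["properties"]["rows"]:
    print(row)  # one entry per day, e.g. [cost, yyyymmdd, currency]
```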
A quick comparison: Capacity vs. Autoscale Billing
| Feature | Capacity Model | Autoscale Billing for Spark |
| --- | --- | --- |
| Billing | Fixed cost per capacity tier | Pay-as-you-go for Spark jobs |
| Scaling | Capacity shared across workloads | Spark scales independently |
| Resource contention | Possible contention between workloads | Dedicated compute with max CU limits for Spark workloads |
| Compute governance | Managed by capacity SKU limits | Configure a max CU limit and acquire additional compute quota from Azure Quotas |
| Use case | Best for predictable workloads | Best for dynamic, bursty Spark jobs |
Autoscale Billing vs. Spark Job Autoscale: What’s the difference?
It’s important to understand that Autoscale Billing for Spark is not the same as Spark autoscale.
Spark Autoscale is a job-level construct — it adjusts the number of executors during the job’s execution based on resource needs.
Autoscale Billing, on the other hand, is a capacity-level billing model — it controls where Spark jobs run and how they are billed, not how they scale within the job.
You can use both together: run Spark jobs on Autoscale Billing and let those jobs autoscale executors internally based on data size and task distribution, as sketched below.
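For the job-level half of that pairing, the standard Apache Spark dynamic allocation settings are a reasonable sketch. The session name and executor bounds below are illustrative; in Fabric notebooks the session is typically pre-created, so in practice these settings are usually applied through the environment or pool configuration rather than a builder.

```python
# Illustrative only: generic Apache Spark dynamic allocation settings that let
# a job scale executors up and down while it runs. In Fabric, prefer setting
# these on the environment/pool; the builder form here is plain Spark.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("autoscale-billing-demo")                    # hypothetical name
    .config("spark.dynamicAllocation.enabled", "true")
    .config("spark.dynamicAllocation.minExecutors", "1")  # lower bound
    .config("spark.dynamicAllocation.maxExecutors", "8")  # upper bound
    .getOrCreate()
)

# The job itself is unchanged: Spark adds or releases executors as the task
# backlog grows or shrinks, while Autoscale Billing meters the CU consumed.
df = spark.range(10_000_000).selectExpr("id % 100 AS bucket")
print(df.groupBy("bucket").count().count())
```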
When to use Autoscale Billing?
Use Autoscale Billing when:
You want to isolate Spark compute from other Fabric workloads.
Your workloads are ad hoc, exploratory, or highly variable.
You need transparent cost tracking and budget control.
You want flexibility to scale Spark without upgrading base capacity tiers.
We’re excited to bring more flexibility, transparency, and compute control to Spark workloads in Microsoft Fabric. Try out Autoscale Billing today and share your feedback as we continue to make Data Engineering more powerful and intuitive.