Introducing Capacity Pools for Data Engineering and Data Science in Microsoft Fabric
We are excited to announce Capacity Pools for Data Engineering and Data Science in Microsoft Fabric. As part of the Data Engineering and Science settings in the Admin portal, capacity administrators can create custom pools based on their workload requirements.
Optimizing Cloud Spend and Managing Compute Resources
In enterprise environments, managing cloud spending and optimizing compute resources is a constant concern for capacity administrators. They often grapple with the potential for overspending due to individual workspace admin configurations. Capacity Pools address this challenge by providing granular control over compute resources.
Capacity administrators can now:
- Create custom pools within their Fabric capacity based on their workload needs.
- Enable or disable workspace-level compute customization. This allows them to lock down configurations to specific pool types (e.g., Small, Medium, Large, X-Large, or XX-Large) for a team’s data engineering tasks, preventing workspace admins and members from modifying compute settings within workspaces or environments.
These custom pools become readily available as Spark Pool options within Workspace Spark Settings and environment items. Users can then leverage these pools to execute their Data Engineering and Data Science jobs.
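To make that concrete, here is a minimal PySpark sketch of the kind of job that would run on one of these pools. Nothing in the job code changes when a workspace or environment points at a custom pool; the pool only determines where, and on how many nodes, the code runs. The table and column names below are hypothetical examples, not part of the feature.

```python
from pyspark.sql import SparkSession, functions as F

# In a Fabric notebook the Spark session is pre-created and already bound to the
# pool selected in the workspace or environment settings; getOrCreate() reuses it.
spark = SparkSession.builder.getOrCreate()

# Hypothetical Lakehouse table used purely for illustration.
orders = spark.read.table("lakehouse_sales.orders")

# Aggregate revenue per day -- an ordinary Spark transformation, unchanged by
# whichever capacity pool backs the session.
daily_revenue = (
    orders
    .groupBy(F.to_date("order_ts").alias("order_date"))
    .agg(F.sum("amount").alias("revenue"))
)

daily_revenue.write.mode("overwrite").saveAsTable("lakehouse_sales.daily_revenue")
```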
Creating a Spark Pool from Capacity Settings:
You can give your Spark pool a name and choose the number and size of the nodes (the machines that do the work). You can also configure autoscaling, so Spark adjusts the number of nodes based on how much work you have. Creating a Spark pool is free; you only pay when you run a Spark job on the pool, and then Spark sets up the nodes for you. Once it's saved, the pool is available as a compute option in all workspaces and environment items attached to this Fabric capacity.
Note: Unlike Starter Pools, pools created from the Capacity settings take about three minutes to start Spark sessions.
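The node count, node size, and autoscaling choices described above surface inside a session as ordinary Spark configuration. The sketch below reads a few standard Spark properties to confirm what the pool actually provisioned; exactly which properties are populated, and their values, depends on your pool definition and the Fabric Spark runtime, so treat this as an exploratory check rather than an official API.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Standard Spark properties that reflect the pool's node size and autoscaling
# behavior. A default is passed so missing keys don't raise.
for key in (
    "spark.executor.cores",                   # vCores per executor
    "spark.executor.memory",                  # memory per executor
    "spark.dynamicAllocation.enabled",        # whether executors scale with load
    "spark.dynamicAllocation.maxExecutors",   # upper bound when autoscaling
):
    print(key, "=", spark.conf.get(key, "<not set>"))

# Rough view of current parallelism: total cores across the live executors.
print("defaultParallelism =", spark.sparkContext.defaultParallelism)
```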
The billing structure for custom pools created from Capacity settings aligns with that of custom pools created by workspace admins. You only incur charges when an active Spark session is running a notebook or Spark job definition within the pool. Billing is based solely on the duration of your job runs. There are no additional charges for stages like cluster creation, deallocation after job completion, or acquiring cluster instances from the cloud.
For instance, submitting a notebook job to a custom Spark pool will only result in charges during the active session period. Billing for the notebook session ceases once the Spark session stops or expires. There are no additional charges for acquiring cluster instances or initializing the Spark context.
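As a back-of-envelope illustration of that pay-per-run model, the sketch below estimates the capacity consumption of a single notebook session. The node size (8 vCores) and the conversion of one capacity unit to two Spark vCores are assumptions made here for illustration; consult the billing documentation linked below for the authoritative model and your SKU's rates.

```python
# Illustrative only: charges accrue solely while the Spark session is active.
# Node size and the CU-per-vCore conversion below are assumptions, not quotes.
nodes = 4                      # nodes the job actually used
vcores_per_node = 8            # hypothetical Medium-style node
session_minutes = 25           # time from session start to stop/expiry

spark_vcores = nodes * vcores_per_node                # 32 vCores in use
capacity_units = spark_vcores / 2                     # assumed 1 CU = 2 Spark vCores
cu_hours = capacity_units * session_minutes / 60      # consumption for this run

print(f"{spark_vcores} vCores -> {capacity_units} CUs for {session_minutes} min "
      f"= {cu_hours:.2f} CU-hours; nothing is billed before start or after stop.")
```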
These new admin and compute governance controls empower administrators to ensure consistent compute environments across workspaces and teams and streamline resource management.
To learn more about how to create pools from capacity settings, please refer to our documentation: Configure data engineering and science capacity admin settings – Microsoft Fabric | Microsoft Learn
To learn more about the Spark compute options in Microsoft Fabric, please refer to our documentation: Apache Spark compute for Data Engineering and Data Science – Microsoft Fabric | Microsoft Learn
To learn more about billing and capacity management for Spark in Microsoft Fabric, please refer to our documentation: Billing and utilization reporting in Apache Spark for Fabric – Microsoft Fabric | Microsoft Learn