Orchestrate your Databricks Jobs with Fabric Data pipelines
We’re excited to announce that you can now orchestrate Azure Databricks Jobs from your Microsoft Fabric data pipelines!
Databricks Jobs allow you to schedule and orchestrate one or more tasks as a workflow in your Databricks workspace. Since nearly any Databricks operation can be a task, this means you can now run anything in Databricks from Fabric Data Factory, such as serverless jobs, SQL tasks, Delta Live Tables pipelines, batch inferencing with model serving endpoints, or automatically publishing and refreshing semantic models in the Power BI service.
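For context, a Databricks Job is just a named set of tasks with dependencies. The new pipeline activity runs jobs you've already built, but as a rough illustration of what such a job looks like, here's a minimal sketch of creating a two-task job with the Databricks Jobs API 2.1. The workspace URL, token, and notebook paths are placeholders, not values the Fabric activity asks for:

```python
# Sketch: define a two-task Databricks job via the Jobs API 2.1.
# WORKSPACE_URL, TOKEN, and the notebook paths are placeholders.
import requests

WORKSPACE_URL = "https://adb-1234567890123456.7.azuredatabricks.net"  # placeholder
TOKEN = "<personal-access-token>"  # placeholder

job_spec = {
    "name": "nightly-etl",
    "tasks": [
        {
            "task_key": "ingest",
            "notebook_task": {"notebook_path": "/Workspace/etl/ingest"},
        },
        {
            # Runs only after the "ingest" task succeeds.
            "task_key": "transform",
            "depends_on": [{"task_key": "ingest"}],
            "notebook_task": {"notebook_path": "/Workspace/etl/transform"},
        },
    ],
    # No compute spec here: on workspaces with serverless jobs enabled this
    # defaults to serverless; otherwise add new_cluster or existing_cluster_id
    # to each task.
}

resp = requests.post(
    f"{WORKSPACE_URL}/api/2.1/jobs/create",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json=job_spec,
)
resp.raise_for_status()
print("Created job_id:", resp.json()["job_id"])
```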
With this update, you can trigger these workflows directly from your Fabric data pipeline. In your Azure Databricks activity, you'll find a new Type option called Job under the Settings tab.

Once you have selected the Job type, the drop-down list shows all the Databricks Jobs you have created, letting you pick the one to run from your data pipeline.
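Behind the scenes, the jobs that appear in that drop-down are the same jobs the Databricks Jobs API can enumerate. As a hedged sketch, with a placeholder workspace URL and token and pagination omitted for brevity:

```python
# Sketch: list the jobs in a Databricks workspace via the Jobs API 2.1.
# These are the same jobs the activity's Job drop-down surfaces.
import requests

WORKSPACE_URL = "https://adb-1234567890123456.7.azuredatabricks.net"  # placeholder
TOKEN = "<personal-access-token>"  # placeholder

resp = requests.get(
    f"{WORKSPACE_URL}/api/2.1/jobs/list",
    headers={"Authorization": f"Bearer {TOKEN}"},
)
resp.raise_for_status()

# Each entry carries a job_id and its settings, including the display name.
for job in resp.json().get("jobs", []):
    print(job["job_id"], job["settings"]["name"])
```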
We also know that parameterizing your data pipelines is important, since it lets you build generic, reusable pipelines.
Fabric Data Factory continues to provide end-to-end support for these patterns and is excited to extend this capability to the Azure Databricks activity.
After you select the Job type, you can also send parameters to your Databricks Job, giving you maximum flexibility and power in your orchestration.
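The activity handles the invocation for you, but conceptually, triggering a job with parameters is comparable to the Jobs API run-now call. In this sketch the job ID and parameter names are illustrative placeholders, not values the activity requires:

```python
# Sketch: trigger a Databricks job run with job-level parameters
# via the Jobs API 2.1 run-now endpoint.
import requests

WORKSPACE_URL = "https://adb-1234567890123456.7.azuredatabricks.net"  # placeholder
TOKEN = "<personal-access-token>"  # placeholder
JOB_ID = 123456789  # placeholder

resp = requests.post(
    f"{WORKSPACE_URL}/api/2.1/jobs/run-now",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json={
        "job_id": JOB_ID,
        # job_parameters map to job-level parameters defined on the job;
        # the keys and values here are illustrative only.
        "job_parameters": {"run_date": "2024-06-01", "environment": "prod"},
    },
)
resp.raise_for_status()
print("Triggered run_id:", resp.json()["run_id"])
```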

To learn more, read Azure Databricks activity – Microsoft Fabric | Microsoft Learn or watch this demo.
Have any questions or feedback? Leave a comment below!