Microsoft Fabric Updates Blog

Announcing the latest innovations in Fabric Data Factory: Apache Airflow jobs and pipelines

Introduction

The world of data integration is rapidly evolving, and staying up to date with the latest technologies is crucial for organizations seeking to make the most of their data assets. Available now are the newest innovations in Fabric Data Factory pipelines and Apache Airflow job orchestration, designed to empower data engineers, architects, and analytics professionals with greater efficiency, flexibility, and scalability.

What’s new in Fabric Data Factory Apache Airflow jobs?

Airflow integration: Orchestrate with confidence

Apache Airflow jobs in Fabric Data Factory simplify running various Fabric items with native operator support.

You can execute artifacts like Notebooks, Spark job definitions, Pipelines, Semantic Models, and user data functions directly from your DAG.

Apache Airflow jobs now also support running Copy jobs and dbt jobs!


Figure: Apache Airflow jobs in Fabric Data Factory with support for copy and dbt jobs.

You can now also add the code to run your Fabric items using a shortcut: just open the context menu and select Run Fabric Artifact.


Figure: Running a Fabric item from Apache Airflow using the context menu shortcut.

Learn more about running Fabric artifacts with Airflow: Run a Fabric item using Apache Airflow DAG.
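Putting it together, a DAG that chains a notebook run into a pipeline run can look like the sketch below. The plugin import path, operator name, and job-type strings follow the Fabric Airflow plugin as documented at the time of writing; treat them as assumptions and check the linked guide for the current names. The workspace and item IDs are placeholders.

```python
# dags/run_fabric_items.py -- sketch of a DAG that runs a Fabric notebook
# and then a pipeline using the Fabric plugin's native operator.
# Operator name, import path, and job_type values are assumptions; verify
# against the "Run a Fabric item using Apache Airflow DAG" documentation.
from datetime import datetime

from airflow import DAG
from apache_airflow_microsoft_fabric_plugin.operators.fabric import (
    FabricRunItemOperator,
)

with DAG(
    dag_id="run_fabric_items",
    start_date=datetime(2026, 4, 1),
    schedule=None,          # trigger manually or via an API call
    catchup=False,
) as dag:
    run_notebook = FabricRunItemOperator(
        task_id="run_notebook",
        fabric_conn_id="fabric_conn",     # connection configured in Airflow
        workspace_id="<workspace-id>",    # placeholder
        item_id="<notebook-item-id>",     # placeholder
        job_type="RunNotebook",
        wait_for_termination=True,
    )
    run_pipeline = FabricRunItemOperator(
        task_id="run_pipeline",
        fabric_conn_id="fabric_conn",
        workspace_id="<workspace-id>",
        item_id="<pipeline-item-id>",     # placeholder
        job_type="Pipeline",
        wait_for_termination=True,
    )
    # Run the pipeline only after the notebook completes successfully.
    run_notebook >> run_pipeline
```

This file is an orchestration definition rather than a standalone script; it only executes inside an Airflow environment with the Fabric plugin and connection configured.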


New Airflow APIs: Elevate your orchestration experience

This release introduces the all-new Apache Airflow job APIs, opening a world of possibilities for workflow integration and automation. These robust APIs let you programmatically manage, monitor, and trigger Airflow DAG runs directly from your applications and services, streamlining cross-system communication and enabling dynamic orchestration scenarios that adapt to your evolving data landscape.

Key benefits include accelerated development cycles, simplified integration with external tools, and enhanced automation capabilities for both scheduled and event-driven workflows. Now, you can seamlessly incorporate Airflow into your enterprise data pipelines, boosting process reliability and visibility.

Explore the Airflow APIs today and discover how they can help you orchestrate, monitor, and optimize your data workflows with unprecedented flexibility and control.
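As a sketch of what a programmatic trigger can look like, the snippet below builds a request following the general shape of Fabric's job-scheduler REST pattern. The base URL and `jobs/instances` path are assumptions drawn from that pattern, and the Airflow-job-specific routes may differ, so verify the exact paths against the documentation before use.

```python
import json

# Base URL and path follow Fabric's public job-scheduler REST pattern;
# treat them as assumptions and verify against the current API reference.
API_BASE = "https://api.fabric.microsoft.com/v1"

def run_item_job_request(workspace_id, item_id, job_type, parameters=None):
    """Build the URL and JSON body for an on-demand item job run."""
    url = (f"{API_BASE}/workspaces/{workspace_id}/items/{item_id}"
           f"/jobs/instances?jobType={job_type}")
    body = {"executionData": {"parameters": parameters or {}}}
    return url, json.dumps(body)

url, body = run_item_job_request("ws-123", "item-456", "Pipeline")
# POST `url` with an Entra ID bearer token to start the run, then poll
# the returned job instance for status.
print(url)
```

Keeping request construction in a pure function like this makes the integration easy to unit-test without network calls.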

Learn more in the API capabilities for Fabric Data Factory's Apache Airflow job documentation.

What’s new in Fabric Data Factory pipelines?

Fabric Data Factory pipelines are at the heart of modern data movement and transformation. The latest updates bring a host of new features aimed at simplifying complex workflows and enhancing productivity:

  • Enhanced Visual Pipeline Designer: The pipeline designer now offers a more intuitive drag-and-drop experience, enabling users to build, visualize, and debug their data flows with minimal effort.
  • Parameterization and Reusability: Pipelines now support advanced parameterization, making it possible to create dynamic, reusable templates for common ETL patterns.
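Parameter references inside pipeline expressions use the `@{pipeline().parameters.<name>}` interpolation syntax. The toy resolver below (illustrative Python, not Fabric's actual expression engine) shows how a parameterized source query becomes a concrete one at run time:

```python
import re

def resolve(expression: str, parameters: dict) -> str:
    """Substitute @{pipeline().parameters.<name>} references with run values.

    Illustrative only -- Fabric's expression engine supports far more than
    this single pattern.
    """
    return re.sub(
        r"@\{pipeline\(\)\.parameters\.(\w+)\}",
        lambda m: str(parameters[m.group(1)]),
        expression,
    )

# One template, many runs: the same pipeline copies whichever table the
# caller supplies as a parameter value.
query = "SELECT * FROM @{pipeline().parameters.tableName}"
print(resolve(query, {"tableName": "dbo.Sales"}))
```

This is the mechanism that makes a single pipeline reusable across many ETL runs: the template stays fixed while each run supplies its own parameter values.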

Interval-based schedules (Preview)

Introducing interval-based schedules, the latest enhancement to Fabric Data Factory pipelines!

This powerful new feature lets you automate data workflows at regular, non-overlapping intervals, similar to the popular tumbling window trigger in Azure Data Factory. With interval-based scheduling, you can easily configure recurring pipeline runs that ensure timely data processing and seamless integration across your architecture. Experience greater control and flexibility for managing time-dependent ETL workloads, and unlock a new level of efficiency in your modern data integration journey.
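Under tumbling-window semantics, each run processes exactly one fixed, non-overlapping slice of time. A minimal sketch of how such window boundaries can be derived (illustrative Python, not Fabric's implementation):

```python
from datetime import datetime, timedelta

def tumbling_windows(start, interval, until):
    """Yield consecutive, non-overlapping [window_start, window_end) pairs."""
    window_start = start
    while window_start + interval <= until:
        yield window_start, window_start + interval
        window_start += interval

# A 6-hour interval over one day yields four edge-to-edge windows.
wins = list(tumbling_windows(datetime(2026, 4, 1), timedelta(hours=6),
                             datetime(2026, 4, 2)))
for ws, we in wins:
    print(ws, "->", we)
```

Because each window's end equals the next window's start, no record is processed twice and none falls through a gap, which is what makes this style of schedule well suited to time-dependent ETL.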


Figure: Interval-based schedule configuration for Fabric Data Factory pipelines (Preview).

Get started today!

We invite you to explore these latest innovations in Fabric Data Factory pipelines and Apache Airflow jobs. Whether you are modernizing your existing ETL processes or building new data applications from scratch, these enhancements will help you deliver business value faster and more reliably.

Stay tuned for more updates, tutorials, and best practices as we continue to innovate and support your data integration journey. For detailed documentation and hands-on guides, visit the official Fabric Data Factory page.

Join the conversation

We’d love to hear how you are using these new features and what challenges you are solving. Share your stories, ask questions, and connect with the community in the comments!
