Microsoft Fabric Updates Blog

Igniting Your Pipelines: New Data Factory Features Announced at Ignite

The Microsoft Ignite conference gives us a chance to announce several new and exciting features in the Data Factory data integration service in Microsoft Fabric. Pipelines are the code-free data orchestration feature in Fabric, while Apache Airflow Jobs offer a code-first approach built on Airflow DAGs (directed acyclic graphs). These new features will empower you to take your data orchestration to the next level!

In Microsoft Fabric, we understand how important observability is for effectively managing and tracking your pipeline processes. We have updated the Monitor hub views in Fabric to make it easy to view upstream and downstream activations of your pipelines and of the activities within them. This new hierarchical view is very useful for impact analysis and for troubleshooting run failures.

At the last Fabric Conference in Vienna, we announced a powerful new capability that helps pipeline developers build pipeline expressions with an expression builder and evaluator. Now we've taken an even larger leap forward in making pipeline expressions easier to build.

Pipeline Expression Builder

Just express what you want to do with your pipeline activities in natural language and let Copilot generate the code for you! You can even use context-aware prompts like ‘use the output of the previous activity’.
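For example, a prompt like "get the number of rows copied by the previous Copy activity" could produce an expression in the pipeline expression language similar to the following (the activity name here is illustrative):

```
@activity('Copy data').output.rowsCopied
```

You can then refine the generated expression in the builder, for instance combining functions such as `concat`, `utcnow`, and `formatDateTime` to build dynamic file names.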

Refer to the documentation to learn more about this new Copilot capability for Data Factory pipeline expressions.

For our Airflow code-first users, we have enabled new APIs and added a new UI gesture in the built-in Airflow code editor in Fabric to upload files into your Fabric Apache Airflow job project.

Coming Soon!

  • Interval-based schedules: In Azure Data Factory (ADF), a very common way to trigger pipelines is with tumbling window triggers. These triggers establish repeating, non-overlapping time windows (called time slices) that you can manage and monitor, and that are not tied to specific wall-clock or CRON-style schedules. At Ignite we showed a sneak peek of this capability in Fabric, which we call interval-based schedules.
  • Lakehouse Maintenance activity: In the Fabric Lakehouse, business data is stored as Parquet files by default and managed by Delta Lake. To maintain performance and optimize the layout of your data across Parquet files and Delta tables, we are bringing new vacuum and optimize pipeline activities to Fabric as 'Fabric Lakehouse Maintenance' activities. We showed a sneak peek of how you can schedule and automate your Lakehouse maintenance from a pipeline.
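To illustrate the tumbling-window idea behind interval-based schedules, the sketch below (plain Python with illustrative names, not a Fabric API) generates the kind of contiguous, non-overlapping time slices that such a schedule would process:

```python
from datetime import datetime, timedelta

def time_slices(start, end, interval):
    """Yield contiguous, non-overlapping (window_start, window_end) slices covering [start, end)."""
    cursor = start
    while cursor < end:
        window_end = min(cursor + interval, end)
        yield cursor, window_end
        cursor = window_end

# One day split into 6-hour tumbling windows: four slices, none overlapping.
slices = list(time_slices(datetime(2026, 1, 1), datetime(2026, 1, 2), timedelta(hours=6)))
```

Each slice can be managed, monitored, and re-run independently, which is what makes tumbling windows so useful for backfills and reprocessing.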

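The maintenance operations these activities automate correspond to standard Delta Lake commands. As a rough sketch in Spark SQL (the table name is illustrative), the equivalent manual steps look like:

```sql
-- Compact small Parquet files into larger, better-organized ones
OPTIMIZE my_lakehouse_table;

-- Remove data files no longer referenced by the Delta transaction log,
-- subject to the table's retention period
VACUUM my_lakehouse_table;
```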
Thank you to all of our amazing customers and community who drive us toward these incredible innovations in Data Factory! We look forward to hearing your feedback and success stories in Fabric Data Factory.

Related blog posts

Announcing the latest innovations in Fabric Data Factory: Apache Airflow jobs and pipelines

March 31, 2026 by Mark Kromer

Moving your Azure Data Factory and Azure Synapse pipelines to Fabric Data Factory is now fast, easy, and free!

March 30, 2026 by Mark Kromer