Microsoft Fabric Updates Blog

Igniting Your Pipelines: New Data Factory Features Announced at Ignite

The Microsoft Ignite conference gives us a chance to announce several new and exciting features in Data Factory, Microsoft Fabric's data integration service. Pipelines provide the code-free data orchestration experience in Fabric, while Apache Airflow jobs offer a code-first approach in which pipelines are authored as DAGs (directed acyclic graphs). These new features will help you take your data orchestration to the next level!

In Microsoft Fabric, we know how important observability is to managing and tracking your pipeline processes effectively. We have updated the Monitor hub views in Fabric to make it easy to see the upstream and downstream runs of your pipelines and of the activities within them. This new hierarchical view is especially useful for impact analysis and for troubleshooting run failures.

At the last Fabric Conference in Vienna, we announced a powerful new capability to help pipeline developers build pipeline expressions with an expression builder and evaluator. Now we've taken an even larger leap forward in making pipeline expressions easier to build.

Pipeline Expression Builder

Just express what you want your pipeline activities to do in natural language and let Copilot generate the expression for you! You can even use context-aware prompts like 'use the output of the previous activity'.
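For instance (a hypothetical prompt and result; the exact expression Copilot produces depends on your pipeline), asking for a file name that includes today's date and the row count from a previous Copy activity might yield a pipeline expression like:

```
@concat('copied_', string(activity('Copy data1').output.rowsCopied),
        '_rows_', formatDateTime(utcnow(), 'yyyyMMdd'), '.csv')
```

Here 'Copy data1' and the file-name pattern are illustrative; `rowsCopied` is a standard Copy activity output property.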

Refer to the documentation to learn more about this new Copilot capability for Data Factory pipeline expressions.

For our Airflow code-first users, we have enabled new APIs and added a new UI gesture in Fabric's built-in Airflow code editor for uploading files into your Apache Airflow job project.
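As a rough mental model of that upload gesture (the function and folder names below are illustrative, not a documented Fabric API), adding a DAG file to an Airflow job project amounts to placing a Python file under the project's `dags/` folder:

```python
# A minimal sketch: simulate placing a DAG file into an Airflow job
# project's dags/ folder, as the new upload gesture does in Fabric.
# upload_dag and the project layout are illustrative assumptions.
from pathlib import Path
import tempfile

DAG_SOURCE = '''\
# dags/hello_fabric.py -- a minimal Apache Airflow DAG
from airflow import DAG
from airflow.operators.bash import BashOperator
from datetime import datetime

with DAG(
    dag_id="hello_fabric",
    start_date=datetime(2026, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    BashOperator(task_id="say_hello", bash_command="echo hello from Fabric")
'''

def upload_dag(project_root: Path, filename: str, source: str) -> Path:
    """Write a DAG file under the project's dags/ folder (a local
    stand-in for the Fabric upload gesture)."""
    dags_dir = project_root / "dags"
    dags_dir.mkdir(parents=True, exist_ok=True)
    target = dags_dir / filename
    target.write_text(source)
    return target

project = Path(tempfile.mkdtemp())
path = upload_dag(project, "hello_fabric.py", DAG_SOURCE)
```

Once the file lands in `dags/`, the Airflow scheduler in the Fabric job picks it up like any other DAG.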

Coming Soon!

  • Interval-based schedules: In Azure Data Factory (ADF), a very common way to trigger pipelines is with tumbling window triggers. These triggers establish repeating, non-overlapping time windows (called time slices) that you can manage and monitor, without being tied to specific wall-clock or CRON-style schedules. We showed a sneak peek at Ignite of this capability coming to Fabric, which we call interval-based schedules.
  • Lakehouse Maintenance activity: In the Fabric Lakehouse, business data is stored as Parquet by default and managed by Delta Lake. To maintain performance and optimize the layout of your data across Parquet files and Delta tables, we are bringing new vacuum and optimize pipeline activities to Fabric as 'Lakehouse Maintenance' activities. We showed a sneak peek of how you can schedule and automate Lakehouse maintenance from a pipeline.
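The time slices behind interval-based schedules are easy to picture in code. This sketch (dates and interval are illustrative) generates the kind of contiguous, non-overlapping windows a tumbling-window-style schedule would run against:

```python
# Sketch of tumbling-window-style time slices: contiguous,
# non-overlapping windows covering [start, end).
from datetime import datetime, timedelta

def tumbling_windows(start, end, interval):
    """Yield non-overlapping (window_start, window_end) time slices."""
    cur = start
    while cur < end:
        nxt = min(cur + interval, end)
        yield (cur, nxt)
        cur = nxt

# Six hours sliced into 2-hour windows: 00-02, 02-04, 04-06.
wins = list(tumbling_windows(datetime(2026, 2, 17, 0),
                             datetime(2026, 2, 17, 6),
                             timedelta(hours=2)))
```

Each slice can then be managed, monitored, and rerun independently, which is what makes this trigger style popular in ADF.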
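Under the hood, vacuum and optimize correspond to Delta Lake's OPTIMIZE and VACUUM commands. As a rough sketch (not the Fabric activity API; table names are illustrative), a maintenance run issues statements like the ones this helper composes:

```python
# Illustrative sketch only: compose the Delta Lake maintenance SQL a
# Lakehouse Maintenance run would issue for a set of tables.
def maintenance_statements(tables, retain_hours=168):
    """Build OPTIMIZE and VACUUM statements for each table.

    retain_hours: how much history VACUUM keeps; Delta Lake's
    default retention is 168 hours (7 days).
    """
    stmts = []
    for t in tables:
        stmts.append(f"OPTIMIZE {t}")  # compact small Parquet files
        stmts.append(f"VACUUM {t} RETAIN {retain_hours} HOURS")  # drop unreferenced files
    return stmts

stmts = maintenance_statements(["sales.orders", "sales.customers"])
```

OPTIMIZE compacts many small Parquet files into fewer larger ones, and VACUUM removes files no longer referenced by the Delta log, which is why scheduling both from a pipeline keeps Lakehouse tables fast over time.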

Thank you to all of our amazing customers and community who drive us toward these incredible innovations in Data Factory! We look forward to hearing your feedback and success stories in Fabric Data Factory.
