Data Factory Increases Maximum Activities Per Pipeline to 80

Data Factory pipeline developers create exciting and interesting data integration and ETL workflows for their data analytics projects. Because Data Factory is a platform service shared across ADF, Synapse, and Fabric, we had limited the number of activities in a single pipeline to 40 as a way to avoid resource exhaustion.

However, just this week, we doubled the limit on the number of activities you may define in a pipeline, from 40 to 80. With more freedom to develop, we want to empower you to create more powerful, versatile, and resilient data pipelines for all your business needs. We are excited to see what you come up with, harnessing the power of 40 more activities per pipeline!

What’s the limit about & why did we raise it?

To ensure the resiliency and reliability of data pipelines, Data Factory places a limit on the maximum number of activities a pipeline may define. That limit has long been 40 activities per pipeline. Today, we are doubling it to 80, with plans to raise it further for our developers in the future. Note that the limit applies to the number of activities defined, not the number that actually run. For instance, in the following example with conditional branching, three activities are defined, even though in any single pipeline run only two will actually execute.

Conditional Branching in Pipeline
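To make the counting concrete, here is a minimal sketch of what such a pipeline definition might look like in JSON. Since JSON has no comments, all assumptions are called out here: the pipeline name, activity names, parameter, and expression are made up for illustration, and the Copy activity's source and sink settings are omitted for brevity. Three activities are defined (the If Condition plus one in each branch), but any single run executes only two of them.

    {
      "name": "ConditionalBranchingPipeline",
      "properties": {
        "activities": [
          {
            "name": "CheckForNewRows",
            "type": "IfCondition",
            "typeProperties": {
              "expression": {
                "value": "@greater(int(pipeline().parameters.rowCount), 0)",
                "type": "Expression"
              },
              "ifTrueActivities": [
                { "name": "CopyNewRows", "type": "Copy" }
              ],
              "ifFalseActivities": [
                { "name": "WaitAndExit", "type": "Wait",
                  "typeProperties": { "waitTimeInSeconds": 1 } }
              ]
            }
          }
        ]
      }
    }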

We understand that our customers want to build resilient and useful data pipelines for their business needs, and sometimes the 40-activity limit got in the way of development. Hence, we are doubling the ceiling and giving you 40 more activities per pipeline.

When to add more activities?

We strongly encourage customers to use the additional 40 activities to build error-handling capabilities. For instance: send an email to an on-call alias when a Copy activity fails; otherwise, proceed.

Simple Error Handling If Else Scenario
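As a rough sketch of that pattern in pipeline JSON (the activity names and webhook URL are hypothetical, and the Copy activity's source and sink are omitted), the alert step runs only when the copy fails, via the Failed dependency condition, while the downstream step runs on Succeeded:

    "activities": [
      {
        "name": "CopySalesData",
        "type": "Copy"
      },
      {
        "name": "EmailOnCallAlias",
        "type": "WebActivity",
        "dependsOn": [
          { "activity": "CopySalesData", "dependencyConditions": [ "Failed" ] }
        ],
        "typeProperties": {
          "url": "https://example.com/alert-webhook",
          "method": "POST",
          "body": "CopySalesData failed – check the pipeline run"
        }
      },
      {
        "name": "ProceedDownstream",
        "type": "Wait",
        "dependsOn": [
          { "activity": "CopySalesData", "dependencyConditions": [ "Succeeded" ] }
        ],
        "typeProperties": { "waitTimeInSeconds": 1 }
      }
    ]

Because both branches hang off the same Copy activity with mutually exclusive conditions, exactly one of them runs in any given pipeline run.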

Or build a try-catch block that attempts to move the data if it's ready and moves on otherwise.

Try Catch Block – only attempts to run the first activity.
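In dependency-condition terms, the try-catch shape might be sketched like this (the names and logging URL are hypothetical, and the Copy activity's details are again omitted): the catch step fires only on Failed, and the final step uses the Completed condition so the pipeline moves on whether the attempt succeeded or not.

    "activities": [
      { "name": "TryMoveData", "type": "Copy" },
      {
        "name": "CatchLogFailure",
        "type": "WebActivity",
        "dependsOn": [
          { "activity": "TryMoveData", "dependencyConditions": [ "Failed" ] }
        ],
        "typeProperties": {
          "url": "https://example.com/log",
          "method": "POST",
          "body": "TryMoveData failed"
        }
      },
      {
        "name": "MoveOnRegardless",
        "type": "Wait",
        "dependsOn": [
          { "activity": "TryMoveData", "dependencyConditions": [ "Completed" ] }
        ],
        "typeProperties": { "waitTimeInSeconds": 1 }
      }
    ]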

Build for Resilience and Retries!

We do not, however, encourage you to build a sequential pipeline with 80 activities chained one after another. Please be aware that data pipelines, just like any other piece of software, can sometimes encounter failures; for instance, a Copy activity may not complete in time when the connection to your SQL server is being throttled.

In those cases, you need to retry and restart the pipeline. Bear this in mind as you develop your pipeline: keep the actual steps within a pipeline to a reasonable number. Production engineers will thank you for keeping their lives simple 🙂
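One way to keep pipelines short is to lean on the per-activity retry policy instead of adding extra activities. Here is a sketch of the standard policy block on an activity; the retry count, interval, and timeout shown are illustrative values, not recommendations:

    {
      "name": "CopyFromSql",
      "type": "Copy",
      "policy": {
        "timeout": "0.00:30:00",
        "retry": 3,
        "retryIntervalInSeconds": 60
      }
    }

With transient failures absorbed at the activity level, a restarted pipeline only has to replay a short chain of steps.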

Final Thoughts

With the power of data pipelines, we want you to be able to build and deliver business impact for your end users. We are excited to see what you come up with, harnessing the power of 40 more activities.

Have any questions or feedback? Leave a comment below!
