
Data Factory Spotlight: Data pipelines

Data Factory in Microsoft Fabric brings the best of Power Query and Azure Data Factory together into a single, easy-to-use, modern data integration experience, empowering you to solve complex data movement scenarios.

Today, we are spotlighting data pipelines in Microsoft Fabric Data Factory to show you how you can use their capabilities to ingest, transform, and orchestrate data workflows.

What are Data pipelines?

For those familiar with Azure Data Factory, data pipelines in Fabric are an evolution of Azure Data Factory pipelines, providing you with a rich set of orchestration and data integration capabilities.

Data pipelines provide you with:

  • Seamless connectivity to 100+ data stores (including cloud databases, analytical platforms, business applications, on-premises data sources*, and more).
  • Rich orchestration with 20+ activities to create robust and powerful data integration solutions.
  • Quick data copy with the Copy assistant to jumpstart your data migration projects.
  • Built-in AI to accelerate and automate common data integration tasks.

*coming soon

What’s new in Data Factory Data pipelines?

Seamless integration with Microsoft Fabric and more

Data pipelines in Microsoft Fabric allow you to connect to a wide variety of data stores, including on-premises data sources, cloud databases, analytical platforms, business applications, and more! In addition, data pipelines are seamlessly integrated with other Fabric artifacts, so you can easily connect to your Lakehouse or Fabric Data Warehouse, or execute your Data Engineering notebooks.
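
To make that integration concrete, here is a minimal sketch of a pipeline that copies data into a Lakehouse table and then runs a Data Engineering notebook, expressed as a Python dict that mirrors the Azure Data Factory pipeline JSON schema Fabric pipelines evolved from. The activity type names and property shapes are illustrative assumptions, not the exact Fabric schema.

```python
# Sketch only: the activity type names ("Copy", "TridentNotebook") and
# property shapes below are assumptions modeled on the Azure Data Factory
# pipeline JSON schema; verify them against the Fabric pipeline docs.
pipeline_definition = {
    "name": "IngestAndTransform",
    "properties": {
        "activities": [
            {
                "name": "CopyToLakehouse",
                "type": "Copy",
                "typeProperties": {
                    "source": {"type": "AzureSqlSource"},     # hypothetical source store
                    "sink": {"type": "LakehouseTableSink"},   # hypothetical Lakehouse sink
                },
            },
            {
                "name": "RunTransformNotebook",
                "type": "TridentNotebook",                    # hypothetical notebook activity
                "dependsOn": [
                    # Run the notebook only after the copy succeeds.
                    {"activity": "CopyToLakehouse", "dependencyConditions": ["Succeeded"]}
                ],
                "typeProperties": {"notebookId": "<your-notebook-id>"},
            },
        ]
    },
}
```

In practice you would author this chaining visually in the pipeline canvas; the dependency condition on the notebook activity is what makes the notebook wait for a successful copy.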

Quick and guided data copy with the Copy assistant

The Copy assistant in data pipelines lets you jumpstart your data copying journey through a simple guided process. You can connect to a wide variety of data sources and destinations, and even select from sample data sources to get started even more quickly.
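
A pipeline built with the Copy assistant can also be triggered programmatically. The Python sketch below calls the Fabric REST API's on-demand job endpoint; the workspace ID, pipeline item ID, and token are placeholders, and the endpoint path and jobType value are assumptions you should verify against the Fabric REST API documentation.

```python
import requests

# Placeholders: supply your own workspace ID, pipeline item ID, and AAD token.
WORKSPACE_ID = "<workspace-id>"
PIPELINE_ID = "<pipeline-item-id>"
TOKEN = "<aad-access-token>"

# Assumed endpoint shape for running an item job on demand; confirm the
# exact path and jobType against the Microsoft Fabric REST API docs.
url = (
    f"https://api.fabric.microsoft.com/v1/workspaces/{WORKSPACE_ID}"
    f"/items/{PIPELINE_ID}/jobs/instances?jobType=Pipeline"
)

response = requests.post(url, headers={"Authorization": f"Bearer {TOKEN}"})
response.raise_for_status()

# The run is accepted asynchronously; a job-status URL is typically
# returned in the Location response header.
print(response.status_code, response.headers.get("Location"))
```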

Rich orchestration capabilities

Data pipelines provide you with 20+ pipeline activities for developing powerful data integration solutions, allowing you to build complex workflows that move petabytes of data, refresh and transform data with dataflows and notebooks, define control flows, and perform many other tasks at scale.
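
As an example of control flow, a common orchestration pattern is looping over a list of tables and copying each one. The sketch below expresses that ForEach pattern as a Python dict using ADF-style pipeline expressions; as with the earlier sketch, the exact property names and placement are illustrative assumptions rather than the verbatim Fabric schema.

```python
# Illustrative control-flow sketch: iterate over a table list and copy each one.
# Expression syntax (@pipeline().parameters..., @item()) follows ADF conventions;
# property names such as "tableName" are hypothetical placements.
foreach_pipeline = {
    "name": "CopyAllTables",
    "properties": {
        "parameters": {
            # Example table names; replace with your own list.
            "tableList": {"type": "Array", "defaultValue": ["dim_date", "fact_sales"]}
        },
        "activities": [
            {
                "name": "ForEachTable",
                "type": "ForEach",
                "typeProperties": {
                    "items": {
                        "value": "@pipeline().parameters.tableList",
                        "type": "Expression",
                    },
                    "activities": [
                        {
                            "name": "CopyOneTable",
                            "type": "Copy",
                            "typeProperties": {
                                "source": {"type": "AzureSqlSource"},
                                "sink": {
                                    "type": "LakehouseTableSink",  # hypothetical sink type
                                    # @item() resolves to the current table name.
                                    "tableName": {"value": "@item()", "type": "Expression"},
                                },
                            },
                        }
                    ],
                },
            }
        ],
    },
}
```

Parameterizing the table list this way lets a single pipeline scale to many tables without duplicating Copy activities.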

Pipeline templates

Data pipelines also help you get started quickly with templates for common tasks. These pre-defined pipelines reduce development time when building data integration projects.

These are just a few of the exciting new capabilities you can find within Data Factory in Microsoft Fabric!

Get started with Microsoft Fabric now

Microsoft Fabric is currently in preview. Try out everything Fabric has to offer by signing up for the free trial; no credit card information is required. Everyone who signs up gets a fixed Fabric trial capacity, which may be used for any feature or capability, from integrating data to creating machine learning models. Existing Power BI Premium customers can simply turn on Fabric through the Power BI admin portal. After July 1, 2023, Fabric will be enabled by default for all Power BI tenants.

Sign up for the free trial today! For more information, read the Fabric trial docs.

Have any questions or feedback? Leave a comment below!
