Microsoft Fabric Updates Blog

Create Metadata Driven Data Pipelines in Microsoft Fabric

Metadata-driven pipelines in Azure Data Factory, Synapse Pipelines, and now Microsoft Fabric let you ingest and transform data with less code, lower maintenance overhead, and greater scalability than writing code or pipelines for every data source that needs to be ingested and transformed. The key lies in identifying the data loading and transformation pattern(s) for your data sources and destinations, then building a framework to support each pattern.
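As a rough sketch of the idea: a control table describes each source, and a single generic loop drives the copy for all of them. The control table here is a hypothetical hard-coded list; in a real Fabric pipeline this metadata would live in a database table, and the loop would be a ForEach activity invoking a parameterized Copy Data activity.

```python
# Hypothetical control table: one row of metadata per source to ingest.
CONTROL_TABLE = [
    {"source": "sales.orders",    "destination": "bronze/orders",    "load_type": "incremental"},
    {"source": "sales.customers", "destination": "bronze/customers", "load_type": "full"},
]

def copy_data(source: str, destination: str, load_type: str) -> str:
    """Stand-in for a parameterized Copy Data activity."""
    return f"copied {source} -> {destination} ({load_type})"

def run_pipeline(control_table):
    # One generic loop replaces a hand-built pipeline per source table.
    return [copy_data(**row) for row in control_table]

for result in run_pipeline(CONTROL_TABLE):
    print(result)
```

Adding a new source then means inserting a metadata row, not building a new pipeline.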

I recently posted two blog posts about a metadata-driven pipeline solution I created in Fabric.


Features include:

  • Metadata-driven pipelines
  • Star schema design for Gold layer tables
  • Source data loaded into the Fabric Lakehouse with Copy Data activities
  • Incremental loads and watermarking for large transaction tables and fact tables
  • Two patterns for the Gold layer:
    • Fabric Lakehouse loaded with Copy Data activities and Spark notebooks
    • Fabric Data Warehouse loaded with Copy Data activities and SQL stored procedures
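The incremental-load feature above follows a watermark pattern: each run reads only rows modified since the last stored high-water mark, then advances the mark. The plain-Python sketch below illustrates the logic (the timestamps and rows are made up; in the actual pipeline the watermark is stored in a metadata table and the filter runs in the Copy Data source query).

```python
from datetime import datetime

# Hypothetical source rows with a last-modified timestamp.
SOURCE_ROWS = [
    {"id": 1, "modified": datetime(2024, 1, 1)},
    {"id": 2, "modified": datetime(2024, 2, 1)},
    {"id": 3, "modified": datetime(2024, 3, 1)},
]

def incremental_load(rows, watermark):
    """Return only rows newer than the watermark, plus the advanced watermark."""
    new_rows = [r for r in rows if r["modified"] > watermark]
    new_watermark = max((r["modified"] for r in new_rows), default=watermark)
    return new_rows, new_watermark

# First run picks up everything after the stored watermark...
rows, wm = incremental_load(SOURCE_ROWS, datetime(2024, 1, 15))
# ...and a rerun with the advanced watermark loads nothing new.
again, _ = incremental_load(SOURCE_ROWS, wm)
print(len(rows), len(again))  # 2 0
```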

Why two options for the Gold layer? If you want to use T-SQL stored procedures for transformations, or have existing stored procedures to migrate to Fabric, Fabric Data Warehouse may be your best option, since it supports multi-table transactions and INSERT/UPDATE/DELETE statements. Comfortable with Spark notebooks? Then consider the Fabric Lakehouse, which has the added bonus of a Direct Lake connection from Power BI.
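Whichever engine you pick, the core Gold-layer transformation is typically an upsert into a dimension or fact table: update rows that match on the business key, insert rows that don't. The plain-Python sketch below (invented table and key names) shows the type-1 merge semantics; in the Warehouse pattern this would be a stored procedure's UPDATE/INSERT logic, and in the Lakehouse pattern a Delta merge in a notebook.

```python
def merge_dimension(target: dict, updates: list, key: str = "customer_id") -> dict:
    """Type-1 upsert: overwrite matching rows, insert new ones (illustrative only)."""
    for row in updates:
        target[row[key]] = row  # update or insert by business key
    return target

# Existing dimension keyed by business key, plus a batch of incoming changes.
dim_customer = {1: {"customer_id": 1, "name": "Contoso"}}
updates = [
    {"customer_id": 1, "name": "Contoso Ltd"},  # changed row -> update
    {"customer_id": 2, "name": "Fabrikam"},     # new row -> insert
]
merge_dimension(dim_customer, updates)
print(sorted(dim_customer))  # [1, 2]
```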

Check out the posts below to learn about building Metadata Driven Pipelines in Microsoft Fabric!

Part 1 – Metadata Driven Pipelines for Fabric with Lakehouse as Gold Layer

Part 2 – Metadata Driven Pipelines for Fabric with Data Warehouse as Gold Layer
