
Create Metadata Driven Data Pipelines in Microsoft Fabric

Metadata-driven pipelines in Azure Data Factory, Synapse Pipelines, and now Microsoft Fabric give you the ability to ingest and transform data with less code, less maintenance, and greater scalability than writing code or building pipelines for every data source that needs to be ingested and transformed. The key lies in identifying the data loading and transformation pattern(s) for your data sources and destinations, and then building a framework to support each pattern.
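
The idea is easiest to see in miniature. The sketch below is a hypothetical illustration of the pattern, not the exact schema used in the solution: a control table lists each source table along with its load type, and a single generic loop drives the ingestion for all of them, so onboarding a new source means adding a row of metadata rather than building a new pipeline. In a Fabric pipeline the same role is typically played by a Lookup activity feeding a ForEach loop; the table and column names here are placeholders.

```python
# Minimal sketch of the metadata-driven idea (illustrative names, not the blog's actual schema).
# A control table describes every source; one generic loop processes them all.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Hypothetical control table: one row per source table to ingest.
control_rows = spark.sql("""
    SELECT source_system, source_table, destination_table, load_type, watermark_column
    FROM   control.pipeline_metadata
    WHERE  is_active = 1
""").collect()

for row in control_rows:
    if row.load_type == "full":
        # Full reload: overwrite the destination table for small/reference data.
        df = spark.read.table(f"{row.source_system}.{row.source_table}")
        df.write.mode("overwrite").saveAsTable(row.destination_table)
    else:
        # Incremental load: handled by the watermark pattern sketched later in this post.
        print(f"Incremental load for {row.source_table} driven by {row.watermark_column}")
```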

I recently posted two blogs about a metadata-driven pipeline solution I created in Fabric.


Features include:

  • Metadata-driven pipelines
  • Star schema design for Gold layer tables
  • Source data loaded into Fabric Lakehouse with Copy Data
  • Incremental loads and watermarking for large transaction tables and fact tables (see the sketch after this list)
  • Two patterns for the Gold layer
    • Fabric Lakehouse loaded with Copy Data activities and Spark notebooks
    • Fabric Data Warehouse loaded with Copy Data activities and SQL Stored Procedures
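
For the incremental loads called out above, the usual watermark pattern is: remember the highest value of a date or ID column loaded so far, pull only rows above it, then advance the stored watermark after a successful load. The following is a rough sketch of that pattern in a Spark notebook; the control table and column names are placeholders, not the exact objects used in the posts.

```python
# Rough sketch of watermark-based incremental loading (placeholder names).
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.getOrCreate()

table_name = "sales_orders"        # table being loaded incrementally
watermark_col = "modified_date"    # column the watermark tracks

# 1. Read the last successful watermark from the control table.
last_wm = spark.sql(f"""
    SELECT max(watermark_value) AS wm
    FROM   control.watermarks
    WHERE  table_name = '{table_name}'
""").first().wm

# 2. Pull only rows changed since the last watermark.
incoming = (
    spark.read.table(f"bronze.{table_name}")
         .where(F.col(watermark_col) > F.lit(last_wm))
)

# 3. Append the new rows and record the new high-water mark.
incoming.write.mode("append").saveAsTable(f"silver.{table_name}")
new_wm = incoming.agg(F.max(watermark_col)).first()[0]
if new_wm is not None:
    spark.sql(f"""
        INSERT INTO control.watermarks (table_name, watermark_value, loaded_at)
        VALUES ('{table_name}', '{new_wm}', current_timestamp())
    """)
```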

Why two options for the Gold layer? If you want to use T-SQL Stored Procedures for transformations, or have existing Stored Procedures to migrate to Fabric, Fabric Data Warehouse may be your best option, since it supports multi-table transactions and INSERT/UPDATE/DELETE statements. Comfortable with Spark notebooks? Then consider Fabric Lakehouse, which has the added bonus of Direct Lake connection from Power BI.
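
For the Lakehouse option, the Gold-layer transformation in a Spark notebook often boils down to an upsert into a dimension or fact table. The snippet below is a hedged sketch of that step using a Delta MERGE; the table and key names are invented for illustration. The Data Warehouse option expresses the same logic as a T-SQL stored procedure using INSERT/UPDATE/DELETE statements instead.

```python
# Illustrative Gold-layer upsert in a Fabric Lakehouse notebook (invented table and key names).
from pyspark.sql import SparkSession
from delta.tables import DeltaTable

spark = SparkSession.builder.getOrCreate()

# Cleansed Silver data that should land in the Gold dimension.
updates = spark.read.table("silver.customers")

# Upsert into the Gold dimension table keyed on customer_id.
gold = DeltaTable.forName(spark, "gold.dim_customer")
(
    gold.alias("t")
        .merge(updates.alias("s"), "t.customer_id = s.customer_id")
        .whenMatchedUpdateAll()
        .whenNotMatchedInsertAll()
        .execute()
)
```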

Check out the posts below to learn about building metadata-driven pipelines in Microsoft Fabric!

Part 1 – Metadata Driven Pipelines for Fabric with Lakehouse as Gold Layer

Part 2 – Metadata Driven Pipelines for Fabric with Data Warehouse as Gold Layer
