Microsoft Fabric Updates Blog

Create Metadata Driven Data Pipelines in Microsoft Fabric

Metadata-driven pipelines in Azure Data Factory, Synapse Pipelines, and now Microsoft Fabric give you the ability to ingest and transform data with less code, reduced maintenance, and greater scalability than writing a dedicated pipeline for every data source that needs to be ingested and transformed. The key lies in identifying the data loading and …
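To make the pattern concrete, here is a minimal Python sketch of the metadata-driven idea: a single generic loop reads rows from a control table and turns each one into the settings a copy activity would use, instead of hard-coding one pipeline per source. The control-table columns (`source`, `sink`, `load_type`, `watermark_column`) are hypothetical names for illustration, not a prescribed schema.

```python
# Hypothetical control table: each row describes one source to ingest.
# In a real solution this would live in a database or Lakehouse table.
control_table = [
    {"source": "sales_db.orders", "sink": "lakehouse/orders",
     "load_type": "incremental", "watermark_column": "modified_at"},
    {"source": "sales_db.customers", "sink": "lakehouse/customers",
     "load_type": "full", "watermark_column": None},
]

def build_copy_config(row):
    """Translate one metadata row into the settings a single copy step would use."""
    config = {"from": row["source"], "to": row["sink"]}
    if row["load_type"] == "incremental":
        # Incremental loads filter on the watermark column; full loads copy everything.
        config["filter"] = f"{row['watermark_column']} > last_watermark"
    return config

# One generic loop replaces N hand-written pipelines.
configs = [build_copy_config(row) for row in control_table]
```

Adding a new source then means inserting a row into the control table rather than authoring and maintaining another pipeline.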

Service principal support to connect to data in Dataflow, Datamart, Dataset and Dataflow Gen 2

Today I am very excited to announce that Azure service principal has been added as an authentication type for a set of data sources that can be used in Dataset, Dataflow, Dataflow Gen2, and Datamart. An Azure service principal is an application-based security identity that can be assigned permissions to access your data …
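For background on what authenticating as a service principal involves, the sketch below builds (but does not send) the OAuth 2.0 client-credentials token request that Microsoft Entra ID expects from a service principal. The tenant ID, client ID, and secret are placeholders, and the Power BI scope shown is one common choice, not the only one.

```python
from urllib.parse import urlencode

# Placeholder identifiers; a real service principal has its own tenant and
# client (application) IDs, and the secret should come from a secure store.
tenant_id = "00000000-0000-0000-0000-000000000000"
client_id = "11111111-1111-1111-1111-111111111111"
client_secret = "<secret-from-key-vault>"

# Microsoft Entra ID issues tokens to a service principal via the
# OAuth 2.0 client-credentials grant against the tenant's token endpoint.
token_url = f"https://login.microsoftonline.com/{tenant_id}/oauth2/v2.0/token"

body = urlencode({
    "grant_type": "client_credentials",
    "client_id": client_id,
    "client_secret": client_secret,
    "scope": "https://analysis.windows.net/powerbi/api/.default",
})
# POSTing `body` to `token_url` would return an access token for the identity.
```

In the product itself none of this plumbing is written by hand: the connector takes the tenant ID, client ID, and secret, and acquires the token for you.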

Using Data pipelines for copying data to/from KQL Databases and crafting workflows with the Lookup activity

AUTHOR: Guy Reginiano, Program Manager

In today's data-driven landscape, the ability to capture, analyze, and visualize vast amounts of real-time data from diverse sources is crucial for making informed decisions and gaining a competitive edge. Synapse Real-Time Analytics in Microsoft Fabric offers a comprehensive solution to this challenge. Its seamless integration with Data Factory in …
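As a sketch of the workflow the title describes: when a Lookup activity runs with "First row only" disabled, its output exposes the result rows as an array under a `value` key, which a downstream ForEach activity then iterates. The Python below mimics that pattern with an illustrative output shape; the column names (`TableName`, `TargetDb`) are hypothetical.

```python
# Illustrative shape of a Lookup activity's output when "First row only"
# is disabled: result rows arrive as an array under the "value" key.
lookup_output = {
    "count": 2,
    "value": [
        {"TableName": "Orders", "TargetDb": "SalesKQL"},
        {"TableName": "Events", "TargetDb": "TelemetryKQL"},
    ],
}

# A ForEach activity would iterate the same array; here we mimic that loop
# to build one set of copy parameters per row, e.g. one copy into each
# KQL Database target.
copy_jobs = [
    {"source_table": row["TableName"], "kql_database": row["TargetDb"]}
    for row in lookup_output["value"]
]
```

The same lookup-then-iterate shape drives many pipeline workflows: the Lookup supplies the work list, and the copy activity inside the loop does the data movement.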

Strong, useful, beautiful: Designing a new way of getting data

What is good design? In the data integration design team at Microsoft, we ask ourselves this question every day as we strive to create products that meet the needs and expectations of our customers. Design is not just about aesthetics or functionality, but about creating meaningful and relevant experiences. Every design tells a story. It tells a story about people: what they want and what they need.

Data Pipeline Performance Improvement Part 3: Gaining more than 50% improvement for Historical Loads

Introduction / Recap

Welcome to the final entry of our three-part series on improving performance for historical data loads! In the first two entries we dove deep into the technical weeds to demonstrate the capabilities of the Data Pipeline Expression Language. Part 1: How to convert a time interval (dd.hh:mm:ss) …
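Part 1 performs the interval conversion with pipeline expression functions; as a rough Python equivalent of that logic (assuming the `dd.hh:mm:ss` format named in the title), the conversion splits the string on its separators and multiplies each component out to seconds:

```python
def interval_to_seconds(interval: str) -> int:
    """Convert a dd.hh:mm:ss interval string to total seconds,
    mirroring the split-and-multiply approach a pipeline expression would take."""
    days_part, time_part = interval.split(".")
    hours, minutes, seconds = (int(p) for p in time_part.split(":"))
    return int(days_part) * 86400 + hours * 3600 + minutes * 60 + seconds

print(interval_to_seconds("1.02:30:15"))  # 1 day, 2 h, 30 min, 15 s -> 95415
```

A total in plain seconds is easy to compare and to carve into the equal time slices that the historical-load partitioning in this series relies on.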