Microsoft Fabric Updates Blog

Introducing the New Feature in Lakehouse Connector in Fabric Data Factory: Schema Support for Reading and Writing Data

Fabric Lakehouse supports the creation of custom schemas, which let users group tables together for better data discovery, access control, and more. Schema support is currently a preview feature in Fabric.

We are excited to announce the latest enhancement in Fabric Data Factory: the Lakehouse connector in data pipelines now supports schemas. This new feature allows users to seamlessly read schema information from a Fabric Lakehouse and write data directly into Lakehouse tables under a specified schema.

What Does This Feature Offer?

The Lakehouse connector now integrates with the Lakehouse schema capability, extending reading and writing functionality that was previously limited. Users can retrieve schema information directly from the Fabric Lakehouse through a data pipeline, ensuring that data structures are fully understood before any operations run. When writing data to Lakehouse tables, the connector now accepts schema information as well, writing either to an existing schema or to a new one.

How to Use the New Lakehouse Schema Support

When reading from a Fabric Lakehouse table, schema information is now automatically included, offering an up-to-date view of the table structure. Similarly, when writing data, the connector ensures that schema details are accurately applied, safeguarding the integrity of your tables.

Screenshot: the Copy data activity in the data pipeline canvas, showing the Source tab and the configuration for the Lakehouse schema and its corresponding table name.

For detailed instructions on how to configure it, please refer to our documentation, or explore the feature directly through Fabric Data Factory’s user interface.
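As a rough illustration of what such a configuration captures, the sketch below models a Copy activity that reads from one schema-qualified Lakehouse table and writes to another. The property names ("schema", "table", the source/sink type strings) are assumptions for illustration, not the exact pipeline JSON contract — refer to the documentation for the authoritative shape.

```python
# Hypothetical sketch of a Copy activity that reads from one
# schema-qualified Lakehouse table and writes to another.
# Property names here are illustrative assumptions, not the exact
# pipeline JSON contract.
copy_activity = {
    "name": "CopyWithSchema",
    "type": "Copy",
    "typeProperties": {
        "source": {
            "type": "LakehouseTableSource",
            # Read from an existing schema; the table structure is
            # resolved automatically from the Lakehouse metadata.
            "schema": "sales",
            "table": "orders",
        },
        "sink": {
            "type": "LakehouseTableSink",
            # Write into an existing schema, or name a new one and the
            # connector creates it along with the table.
            "schema": "reporting",
            "table": "orders_curated",
        },
    },
}

def qualified_name(props: dict) -> str:
    """Return the schema-qualified table name, e.g. 'sales.orders'."""
    return f"{props['schema']}.{props['table']}"

print(qualified_name(copy_activity["typeProperties"]["source"]))  # sales.orders
print(qualified_name(copy_activity["typeProperties"]["sink"]))    # reporting.orders_curated
```

The key point the sketch shows is that the schema is now a first-class part of the table reference on both the source and sink sides, rather than the table name alone.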

Looking Ahead

On the Fabric Data Factory team, we are constantly innovating to improve our data integration solutions. The introduction of schema support in the Lakehouse connector is just one of many steps we are taking to empower users with the tools they need to manage data effectively.
