Microsoft Fabric Updates Blog

Introducing the New Feature in Lakehouse Connector in Fabric Data Factory: Schema Support for Reading and Writing Data

Fabric Lakehouse supports the creation of custom schemas. Schemas allow users to group tables together for better data discovery, access control, and more. This capability is currently in preview in Fabric; see the Fabric documentation to learn more.
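As a quick refresher, a custom schema can be created and populated from a notebook attached to a schema-enabled Lakehouse. The following is a minimal sketch, where the sales schema and orders table are hypothetical names used only for illustration:

# Run in a Fabric notebook attached to a schema-enabled Lakehouse.
# "sales" and "orders" are hypothetical names used for illustration.
from pyspark.sql import Row

spark.sql("CREATE SCHEMA IF NOT EXISTS sales")

orders = spark.createDataFrame([
    Row(order_id=1, customer="Contoso", amount=250.0),
    Row(order_id=2, customer="Fabrikam", amount=125.5),
])

# A schema-qualified table name places the table under the custom schema.
orders.write.mode("overwrite").saveAsTable("sales.orders")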

We are excited to announce the latest enhancement in Fabric Data Factory: the Lakehouse connector in data pipelines now supports schemas. This new feature allows users to seamlessly read schema information from a Fabric Lakehouse and write data directly into Lakehouse tables with the schema specified.

What Does This Feature Offer?

The Lakehouse connector now integrates with the Lakehouse schema capability, extending both the read and write functionality that was previously limited. Users can retrieve schema information directly from the Fabric Lakehouse through a data pipeline, ensuring that data structures are fully understood before any operation. In addition, when writing data to Lakehouse tables, the connector now supports specifying schema information, writing to either an existing schema or a new one.
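To make this concrete, the sketch below shows the rough shape of a Copy activity sink that targets a schema-qualified Lakehouse table. It is written as a small Python snippet purely so the assumed fields can be annotated inline; the property names (in particular "schema") are assumptions for illustration, and the pipeline UI or the exported pipeline JSON is the authoritative reference for how this is actually configured.

import json

# Hypothetical sketch of a Copy activity sink that targets a Lakehouse table
# under a custom schema. Property names are assumptions for illustration;
# configure the real values through the Destination tab in the pipeline UI.
copy_sink = {
    "type": "LakehouseTableSink",
    "datasetSettings": {
        "type": "LakehouseTable",
        "typeProperties": {
            "schema": "sales",   # custom Lakehouse schema (assumed property name)
            "table": "orders",   # table created under that schema
        },
    },
}

print(json.dumps(copy_sink, indent=2))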

How to Use the New Lakehouse Schema Support

When reading from a Fabric Lakehouse table, schema information is now included automatically, giving an up-to-date view of the table structure. Similarly, when writing data, the connector ensures that schema details are applied accurately, safeguarding the integrity of your tables.

Copy data activity in the data pipelines canvas showing the Source tab and the configuration for the Lakehouse schema and its corresponding table name
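After the copy runs, one way to confirm that the data landed under the intended schema is to query the schema-qualified table from a notebook attached to the Lakehouse. A minimal sketch, reusing the hypothetical sales.orders names from above:

# In a Fabric notebook attached to the same Lakehouse:
# verify the table exists under the custom schema and preview the copied rows.
spark.sql("SHOW TABLES IN sales").show()
spark.sql("SELECT * FROM sales.orders LIMIT 10").show()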

For detailed instructions on how to configure it, please refer to our documentation, or explore the feature directly through Fabric Data Factory’s user interface.

Looking Ahead

At Fabric Data Factory, we are constantly innovating to improve our data integration solutions. The introduction of schema support in the Lakehouse connector is just one of many steps we are taking to empower users with the tools they need to manage data effectively.
