Microsoft Fabric Updates Blog

Fabric Data Factory now supports writing data in Iceberg format via Azure Data Lake Storage Gen2 Connector in Data pipeline

We’ve made a significant enhancement in Fabric Data Factory: Data pipelines can now write data in Iceberg format via the Azure Data Lake Storage (ADLS) Gen2 connector! This addition gives users who manage and optimize large datasets a powerful new option that combines flexibility, reliability, and performance. Iceberg format support brings new efficiencies to how data is handled, transformed, and stored, improving query performance and scalability.

What is Iceberg Format and Why Does It Matter?

Apache Iceberg is a high-performance table format designed specifically for large analytical datasets, enabling more reliable data management and faster querying. It’s optimized for handling petabytes of data while supporting fast incremental reads, schema evolution, and ACID transactions, making it especially valuable in data engineering and analytics workflows. Iceberg is increasingly favored in big data ecosystems, enabling organizations to keep up with their ever-growing data demands while maintaining flexibility and control.
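
To make these properties concrete, here is a minimal sketch of ACID commits, schema evolution, and snapshot history on an Iceberg table, using Spark SQL from PySpark. The catalog and table names are illustrative and assume a Spark session already configured with an Iceberg catalog; this example is independent of the Fabric connector itself.

```python
from pyspark.sql import SparkSession

# Assumes an existing Spark session configured with an Iceberg catalog
# named "demo" (e.g. via the iceberg-spark-runtime package). All names
# below are illustrative.
spark = SparkSession.builder.getOrCreate()

# ACID transactions: each statement commits atomically as a new table snapshot.
spark.sql("CREATE TABLE demo.sales.orders (id BIGINT, amount DOUBLE) USING iceberg")
spark.sql("INSERT INTO demo.sales.orders VALUES (1, 19.99), (2, 5.49)")

# Schema evolution: add a column without rewriting existing data files.
spark.sql("ALTER TABLE demo.sales.orders ADD COLUMN region STRING")

# Snapshot history: every commit is recorded and queryable, which is what
# enables time travel (VERSION AS OF <snapshot_id>) and incremental reads.
spark.sql("SELECT snapshot_id, committed_at FROM demo.sales.orders.snapshots").show()
```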

Getting Started: Writing Data in Iceberg Format via ADLS Gen2 Connector

With this new feature, Fabric Data Factory users can start writing their data in Iceberg format directly through the Azure Data Lake Storage Gen2 connector.

Here’s how it works:

1. Enable the Iceberg format: When setting up the copy activity in your data pipeline, select the option to write in Iceberg format within the ADLS Gen2 connector settings under the destination section (a configuration sketch follows this list).

2. Customize and optimize: Configure additional settings to tailor the data output to your specific needs.

3. Execute the pipeline: Once configured, your pipeline will automatically handle the Iceberg table format, allowing you to focus on higher-level tasks rather than the nuances of large-scale data management.
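
For readers who author or inspect pipelines as JSON rather than through the UI, the sketch below shows roughly what the copy activity's destination could look like, expressed as a Python dict mirroring the pipeline JSON. The property names ("IcebergSink", "IcebergWriteSettings", "AzureBlobFSWriteSettings") are assumptions modeled on the Azure Data Factory Iceberg format documentation; verify them against the current Fabric documentation before relying on them.

```python
# Illustrative copy activity definition with an Iceberg destination, as a
# Python dict mirroring the pipeline JSON. Property names are assumptions
# modeled on the Azure Data Factory Iceberg format docs, not confirmed
# Fabric API values.
copy_activity = {
    "name": "CopyToIceberg",
    "type": "Copy",
    "typeProperties": {
        # Any supported source works; a delimited-text source is shown
        # purely as an example.
        "source": {"type": "DelimitedTextSource"},
        "sink": {
            "type": "IcebergSink",
            # The ADLS Gen2 connector writes through the Blob FS endpoint.
            "storeSettings": {"type": "AzureBlobFSWriteSettings"},
            # Format settings mark the output as Iceberg.
            "formatSettings": {"type": "IcebergWriteSettings"},
        },
    },
}
```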

Planned expansions: read capability and additional connectors

While this release focuses on enabling write capability in Iceberg format, we are already working on expanding this functionality. Future updates will introduce read capability for Iceberg format in Fabric Data Factory, making it easier to both write to and read from Iceberg tables. Additionally, we aim to support more file-type connectors across Data Factory, further enhancing data integration flexibility and usability.

Get started with Iceberg Format today

The capability to write data in Iceberg format is available now in Fabric Data Factory’s Data pipeline via the ADLS Gen2 connector. We encourage you to explore this feature and experience the benefits of a high-performance, scalable data format designed for modern data workloads. To learn more about configuring Iceberg format in your pipelines, visit our documentation page.

We look forward to seeing how this new feature empowers your data workflows and to continuing to innovate with you. Stay tuned for more updates as we expand our capabilities and bring new possibilities to Fabric Data Factory!

Fabric Data Factory Team
