Microsoft Fabric Updates Blog

Announcing: Automatic Data Compaction for Fabric Warehouse

We are excited to announce automatic data compaction for Data Warehouses!

One of our goals with the Data Warehouse is to automate as much as possible, making it easier and cheaper for you to build and use your warehouses. This means you spend your time adding data and gaining insights from it, instead of on tasks like maintenance. As a user, you should also expect great performance, which is where Data Compaction comes in!

Why is Data Compaction important?

To understand what Data Compaction is and how it helps, we need to first talk about how Data Warehouse Tables are physically stored in OneLake.

When you create a table, it is physically stored as one or more Parquet files. Parquet files are immutable, which means they cannot be changed after they are created. When you perform DML (Data Manipulation Language) operations, such as inserts and updates, each transaction creates new Parquet files. Over time, you could end up with thousands of small files. Data Compaction rewrites many smaller files into a few larger files, which improves the performance of reading the table.
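To make the idea concrete, here is a minimal sketch in plain Python (not Fabric's actual implementation): each transaction appends an immutable "file" (simulated as a list of rows), and compaction rewrites the many small files into fewer larger ones. The target file size is an invented illustration value.

```python
TARGET_ROWS_PER_FILE = 4  # hypothetical target size for illustration

def insert(files, rows):
    """Each transaction writes a brand-new file; existing files are immutable."""
    files.append(list(rows))

def compact(files):
    """Rewrite all rows into as few files as possible."""
    all_rows = [row for f in files for row in f]
    return [all_rows[i:i + TARGET_ROWS_PER_FILE]
            for i in range(0, len(all_rows), TARGET_ROWS_PER_FILE)]

files = []
for tx in range(8):       # eight small transactions...
    insert(files, [tx])   # ...produce eight tiny files

print(len(files))         # 8 small files before compaction
files = compact(files)
print(len(files))         # 2 larger files after compaction
```

Fewer, larger files means fewer file opens and more efficient scans when the table is read.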

Another reason for Data Compaction is to remove deleted rows from the files. When you delete a row, the row isn't physically deleted from the Parquet file. Instead, we use a Delta Lake feature called deletion vectors, which are read as part of the table and tell us which rows to ignore. Deletion vectors make deletes faster because we do not need to rewrite the existing Parquet files. However, if a Parquet file contains many deleted rows, it takes more resources to read that file and work out which rows to ignore.
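The deletion-vector idea can be sketched like this (plain Python, not the actual Delta Lake format): the data file stays untouched, a side structure records which row positions are deleted, and readers filter those positions out on the fly.

```python
data_file = ["alice", "bob", "carol", "dave"]  # immutable data file (simulated)
deletion_vector = set()                        # positions of logically deleted rows

def delete_row(position):
    """Fast delete: record the position instead of rewriting the file."""
    deletion_vector.add(position)

def read_table():
    """Readers skip any row flagged in the deletion vector."""
    return [row for pos, row in enumerate(data_file)
            if pos not in deletion_vector]

delete_row(1)        # "delete" bob without touching the data file
print(read_table())  # ['alice', 'carol', 'dave']
```

The trade-off is exactly the one described above: writes get cheaper, but every read pays the cost of consulting the deletion vector until the file is rewritten.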

How does Data Compaction happen?

As you run queries in your Data Warehouse, the engine generates system tasks to review tables that could potentially benefit from data compaction. Behind the scenes, we then evaluate those tables to confirm whether they would indeed benefit from being compacted.
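A hypothetical version of such an evaluation might look like the following sketch. The function name, thresholds, and inputs are all invented for illustration; they are not Fabric's actual criteria. The idea is simply that a table becomes a compaction candidate when it has many undersized files or a high share of logically deleted rows.

```python
SMALL_FILE_ROWS = 1_000_000  # assumed "too small" threshold (illustrative)
MAX_SMALL_FILES = 10         # assumed tolerance for small files (illustrative)
MAX_DELETED_RATIO = 0.10     # assumed tolerance for deleted rows (illustrative)

def should_compact(file_row_counts, deleted_rows):
    """Flag a table when small files pile up or deleted rows dominate."""
    small_files = sum(1 for n in file_row_counts if n < SMALL_FILE_ROWS)
    total_rows = sum(file_row_counts)
    deleted_ratio = deleted_rows / total_rows if total_rows else 0.0
    return small_files > MAX_SMALL_FILES or deleted_ratio > MAX_DELETED_RATIO

print(should_compact([500] * 20, deleted_rows=0))     # True: many tiny files
print(should_compact([5_000_000], deleted_rows=100))  # False: one big file, few deletes
```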

The compaction itself is actually very simple! It is basically just rewriting either the whole table or portions of it to create new Parquet files that contain no deleted rows and/or have more rows per file.
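That rewrite step can be sketched in plain Python (again, an illustration of the principle rather than the real engine): gather every surviving row across the old files, then emit new files with no deleted rows and more rows per file, after which the deletion vectors can be discarded.

```python
def rewrite_table(files, deletion_vectors, target_rows_per_file=4):
    """Compact: drop logically deleted rows and re-chunk into larger files."""
    # Keep only rows not flagged in each file's deletion vector.
    survivors = [row
                 for f, dv in zip(files, deletion_vectors)
                 for pos, row in enumerate(f)
                 if pos not in dv]
    # Re-chunk the survivors into fewer, larger files.
    return [survivors[i:i + target_rows_per_file]
            for i in range(0, len(survivors), target_rows_per_file)]

old_files = [[1, 2], [3, 4], [5, 6], [7, 8]]
vectors = [set(), {0}, set(), {1}]  # rows 3 and 8 are logically deleted
new_files = rewrite_table(old_files, vectors)
print(new_files)                    # [[1, 2, 4, 5], [6, 7]]
```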

Conclusion

Data Compaction is one of the ways we help your Data Warehouse deliver great performance, and best of all, it involves no additional work from you! This gives you more time to leverage your Data Warehouse to gain more value and insights!

Please look forward to announcements about more automated performance enhancements!
