Announcing: Automatic Data Compaction for Fabric Warehouse

We are excited to announce automatic data compaction for Data Warehouses!

One of our goals with the Data Warehouse is to automate as much as possible, making it easier and cheaper for you to build and use your warehouses. This means you can spend your time adding data and gaining insights from it instead of on tasks like maintenance. As a user, you should also expect great performance, which is where Data Compaction comes in!

Why is Data Compaction important?

To understand what Data Compaction is and how it helps, we need to first talk about how Data Warehouse Tables are physically stored in OneLake.

When you create a table, it is physically stored as one or more Parquet files. Parquet files are immutable, which means they cannot be changed after they are created. When you perform DML (Data Manipulation Language) operations, such as inserts and updates, each transaction creates new Parquet files. Over time, you could end up with thousands of small files. Data Compaction rewrites many small files into a few larger files, which improves the performance of reading the table.
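To make this concrete, here is a minimal T-SQL sketch, using a hypothetical dbo.Sales table, of the kind of workload that gradually produces many small Parquet files, since every committed transaction writes its own new files:

```sql
-- Hypothetical example: each committed transaction below writes new,
-- typically small, Parquet files behind the table in OneLake.
CREATE TABLE dbo.Sales
(
    SaleId     INT            NOT NULL,
    CustomerId INT            NOT NULL,
    Amount     DECIMAL(10, 2) NOT NULL,
    SaleDate   DATE           NOT NULL
);

-- Many small, frequent inserts, e.g. from a trickle-loading pipeline.
INSERT INTO dbo.Sales VALUES (1, 101, 25.00, '2025-06-01');
INSERT INTO dbo.Sales VALUES (2, 102, 40.00, '2025-06-01');
INSERT INTO dbo.Sales VALUES (3, 103, 15.50, '2025-06-02');

-- An update cannot modify the immutable Parquet files; it writes new
-- files instead, so the number of files keeps growing over time.
UPDATE dbo.Sales SET Amount = 30.00 WHERE SaleId = 1;
```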

Another reason for Data Compaction is to remove deleted rows from the files. When you delete a row, the row isn't physically removed from the Parquet file. Instead, we use a Delta Lake feature called Delete Vectors, which are read as part of the table and tell us which rows to ignore. Delete Vectors make deletes faster because we do not need to rewrite the existing Parquet files. However, if a Parquet file contains many deleted rows, it takes more resources to read that file and work out which rows to ignore.
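Continuing the hypothetical dbo.Sales table from the sketch above, a delete like the following does not rewrite the underlying Parquet files; the removed rows are instead tracked in a Delete Vector that readers consult to skip them:

```sql
-- The rows are not physically removed from the existing Parquet files.
-- A Delete Vector records which rows readers should ignore, so the
-- DELETE finishes quickly, but scans of heavily deleted files cost more.
DELETE FROM dbo.Sales
WHERE SaleDate < '2025-06-02';
```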

How does Data Compaction happen?

As you run queries in your Data Warehouse, the engine generates system tasks to review tables that could potentially benefit from data compaction. Behind the scenes, we then evaluate those tables to confirm whether they would indeed benefit from being compacted.

The compaction itself is actually very simple! It is basically just rewriting either the whole table or portions of it to create a new Parquet file or files that contain no deleted rows and/or pack more rows per file.
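Conceptually, the result is similar to what you would get by rewriting the table yourself, for example with a CREATE TABLE AS SELECT (CTAS) statement like the sketch below (again using the hypothetical dbo.Sales table), except that the engine does this for you automatically behind the scenes:

```sql
-- Conceptual illustration only; the engine performs compaction for you.
-- Rewriting the table produces fresh, larger Parquet files that contain
-- no deleted rows and pack more rows into each file.
CREATE TABLE dbo.Sales_Compacted
AS
SELECT * FROM dbo.Sales;
```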

Conclusion

Data Compaction is one of the ways we help your Data Warehouse provide you with great performance, and best of all, it requires no additional work from you! That gives you more time to focus on getting more value and insights out of your Data Warehouse!

Please look forward to more announcements about additional automated performance enhancements!
