
Announcing: Automatic Data Compaction for Fabric Warehouse

We are excited to announce automatic data compaction for Data Warehouses!

One of our goals with the Data Warehouse is to automate as much as possible, making it easier and cheaper for you to build and use warehouses. This means you will spend your time adding data and gaining insights from it instead of on tasks like maintenance. As a user, you should also expect great performance, which is where Data Compaction comes in!

Why is Data Compaction important?

To understand what Data Compaction is and how it helps, we need to first talk about how Data Warehouse Tables are physically stored in OneLake.

When you create a table, it is physically stored as one or more Parquet files. Parquet files are immutable, which means they cannot be changed after they are created. When you perform DML (Data Manipulation Language) operations, such as inserts and updates, each transaction creates new Parquet files. Over time, you could end up with thousands of small files. Data Compaction rewrites many smaller files into a few larger files, which improves the performance of reading the table.
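To make the small-files problem concrete, here is a minimal sketch in Python using pyarrow. It only illustrates the idea of rewriting many small Parquet files into one larger file; the folder layout, file names, and row counts are made-up examples, and this is not how the Warehouse engine performs compaction internally.

```python
# Conceptual sketch only: NOT the Warehouse engine's implementation.
# Paths, file names, and sizes are made-up example values.
import glob
import os
import pyarrow as pa
import pyarrow.parquet as pq

source_dir = "warehouse_table"  # hypothetical folder holding a table's Parquet files
os.makedirs(source_dir, exist_ok=True)

# Simulate many small files, as produced by individual DML transactions.
for i in range(5):
    small_batch = pa.table({"id": [i], "value": [i * 10]})
    pq.write_table(small_batch, f"{source_dir}/part_{i}.parquet")

# "Compaction": read all the small files and rewrite them as one larger file.
small_files = sorted(glob.glob(f"{source_dir}/part_*.parquet"))
combined = pa.concat_tables([pq.read_table(f) for f in small_files])
pq.write_table(combined, f"{source_dir}/compacted_0.parquet")

print(f"Merged {len(small_files)} small files into one file with {combined.num_rows} rows")
```

Reading one larger file avoids the per-file open and metadata overhead that builds up when a query has to touch thousands of tiny files.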

Another reason for Data Compaction is to remove deleted rows from the files. When you delete a row, the row isn't physically deleted from the Parquet file. Instead, we use a Delta Lake feature called deletion vectors, which are read as part of the table and tell us which rows to ignore. Deletion vectors make deletes faster because we do not need to rewrite the existing Parquet files. However, if a Parquet file contains many deleted rows, it takes more resources to read that file and determine which rows to ignore.
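The following sketch shows the deletion-vector idea in simplified form, assuming an ordinary Parquet file and a plain Python set of deleted row positions. The real Delta Lake format encodes deletion vectors differently; the point is only that every read has to do extra filtering work as deleted rows accumulate.

```python
# Simplified illustration of a "deletion vector": a set of row positions to skip.
# The actual Delta Lake on-disk encoding differs; this shows the read-time cost idea.
import pyarrow as pa
import pyarrow.parquet as pq

pq.write_table(pa.table({"id": [1, 2, 3, 4, 5]}), "data_file.parquet")

deleted_positions = {1, 3}  # pretend rows at positions 1 and 3 were deleted by earlier DML

table = pq.read_table("data_file.parquet")
keep_mask = pa.array([i not in deleted_positions for i in range(table.num_rows)])
visible_rows = table.filter(keep_mask)  # extra filtering work on every read

print(visible_rows.column("id").to_pylist())  # [1, 3, 5]
```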

How does Data Compaction happen?

As you run queries in your Data Warehouse, the engine generates system tasks to review tables that could potentially benefit from data compaction. Behind the scenes, we then evaluate those tables to determine whether they would indeed benefit from being compacted.
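The exact criteria the engine uses are not described here, so the snippet below is a hypothetical heuristic rather than the actual evaluation logic. It sketches the kind of check such an evaluation step might perform: compact when a table has many small files or a high share of deleted rows. The FileStats shape and all thresholds are illustrative assumptions.

```python
# Hypothetical heuristic only: the blog does not document the engine's actual criteria.
from dataclasses import dataclass

@dataclass
class FileStats:
    row_count: int
    deleted_rows: int

def worth_compacting(files: list[FileStats],
                     small_file_rows: int = 100_000,    # assumed cutoff for a "small" file
                     min_small_files: int = 10,         # assumed minimum to justify a rewrite
                     max_deleted_ratio: float = 0.2) -> bool:  # assumed tolerated deleted share
    small_files = sum(1 for f in files if f.row_count < small_file_rows)
    total_rows = sum(f.row_count for f in files) or 1
    deleted_ratio = sum(f.deleted_rows for f in files) / total_rows
    return small_files >= min_small_files or deleted_ratio > max_deleted_ratio

# Example: a table made of many tiny files would be flagged for compaction.
print(worth_compacting([FileStats(row_count=500, deleted_rows=0) for _ in range(25)]))  # True
```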

The compaction itself is actually very simple! It is basically just rewriting either the whole table or portions of it to create a new Parquet file (or files) that contains no deleted rows and packs more rows into each file.
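Putting the two ideas together, here is a minimal end-to-end sketch that rewrites several small Parquet files into one file while dropping rows marked as deleted. The compact helper, file names, and deleted-position sets are illustrative assumptions, not the engine's implementation.

```python
# Minimal end-to-end sketch: merge small files and drop deleted rows in one rewrite.
# Not the Warehouse engine's code; inputs are simplified for illustration.
import pyarrow as pa
import pyarrow.parquet as pq

def compact(parquet_paths, deleted_positions_per_file, output_path):
    """Rewrite many small files as one file, skipping rows marked as deleted."""
    kept_tables = []
    for path, deleted in zip(parquet_paths, deleted_positions_per_file):
        table = pq.read_table(path)
        mask = pa.array([i not in deleted for i in range(table.num_rows)])
        kept_tables.append(table.filter(mask))
    compacted = pa.concat_tables(kept_tables)
    pq.write_table(compacted, output_path)
    return compacted.num_rows

# Example with two tiny files, where the second file's first row was deleted.
pq.write_table(pa.table({"id": [1, 2]}), "part_a.parquet")
pq.write_table(pa.table({"id": [3, 4]}), "part_b.parquet")
rows = compact(["part_a.parquet", "part_b.parquet"], [set(), {0}], "compacted.parquet")
print(rows)  # 3 rows survive: ids 1, 2, and 4
```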

Conclusion

Data Compaction is one of the ways we help your Data Warehouse deliver great performance, and best of all, it involves no additional work from you! This gives you more time to focus on leveraging your Data Warehouse to gain more value and insights!

Please look forward to more announcements about additional automated performance enhancements!
