Announcing: Automatic Log Checkpointing for Fabric Warehouse

We are excited to announce automatic log checkpointing for Data Warehouses!

One of our goals with the Data Warehouse is to automate as much as possible, making it easier and cheaper for you to build and use it. This means you spend your time adding data and gaining insights from it, instead of on tasks like maintenance. As a user, you should also expect great performance, which is where log checkpointing comes in!

What is Log Checkpointing and why is it important?

To understand what log checkpointing is and why it is important, we need to first talk about how tables are stored and how they are queried.

When you create a table and add data to it, the data is stored as parquet files in OneLake. Internally, there is also a log that keeps track of which parquet files, when combined, make up the data in the table. These log files are internal and cannot be used directly by other engines, so we automatically publish Delta Lake logs that let other engines access the right parquet files directly.
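To make that concrete, here is a minimal sketch in Python. It uses the published Delta Lake log convention (a _delta_log folder of newline-delimited JSON commit files), since the Warehouse-internal log format is not exposed, and the table path shown is hypothetical:

    import json
    from pathlib import Path

    # Hypothetical OneLake path to a table's published Delta Lake log.
    delta_log = Path("/onelake/MyWarehouse/Tables/dbo/sales/_delta_log")

    # Each committed transaction is published as one zero-padded JSON commit file
    # whose actions record which parquet files were added or removed.
    for commit in sorted(delta_log.glob("*.json")):
        for line in commit.read_text().splitlines():
            action = json.loads(line)
            if "add" in action:
                print(f"{commit.name} adds {action['add']['path']}")
            elif "remove" in action:
                print(f"{commit.name} removes {action['remove']['path']}")

Replaying these add and remove actions in order is what tells a reader which parquet files currently make up the table.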

Now, imagine that you load data into your table every 5 minutes. Over the course of a year, that is 105,120 loads. Each load creates a new log file that tells the system the new parquet files must also be read when the table is queried. That means that to read the table, the system first has to read all 105,120 log files, which is not very performant.
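As a quick back-of-the-envelope check (plain Python, nothing Fabric-specific):

    loads_per_day = (60 // 5) * 24        # 12 loads per hour * 24 hours = 288
    loads_per_year = loads_per_day * 365  # 105,120 log files after one year
    print(loads_per_year)                 # 105120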

This is where log checkpointing comes in! As of the time of this blog, after every 10 transactions we automatically and asynchronously create a new log file called a checkpoint, which is essentially a summary of all the previous log files. Now when you query the table, the system only needs to read the latest checkpoint and any log files created after it. Instead of having to read 105,120 log files, we would typically need to read 10 or fewer!
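The read path can be sketched against the published Delta Lake log convention, where a _last_checkpoint file points at the most recent checkpoint. This is illustrative only (the table path is hypothetical and the Warehouse-internal mechanism is not exposed), but it shows why only a handful of files need to be read:

    import json
    from pathlib import Path

    # Hypothetical OneLake path to a table's published Delta Lake log.
    delta_log = Path("/onelake/MyWarehouse/Tables/dbo/sales/_delta_log")

    # 1. Find the latest checkpoint, a summary of every earlier commit.
    last = json.loads((delta_log / "_last_checkpoint").read_text())
    checkpoint_version = last["version"]

    # 2. Only commit files written after that checkpoint still need replaying.
    tail_commits = [
        p for p in sorted(delta_log.glob("*.json"))
        if int(p.stem) > checkpoint_version
    ]

    # One checkpoint plus a few recent commits, instead of every commit
    # written since the table was created.
    print(f"checkpoint: {checkpoint_version}, commits to replay: {len(tail_commits)}")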

Conclusion

Log checkpointing is one of the ways we help your Data Warehouse deliver great performance, and best of all, it requires no additional work from you! That gives you more time to spend leveraging your Data Warehouse to gain value and insights!

Look forward to more announcements about automated performance enhancements!
