Lakehouse Schemas (Generally Available)
Lakehouse schemas are now Generally Available. Schemas let you group tables logically within a lakehouse, making data easier to organize and discover. Schema-enabled lakehouses are now the default choice when creating a new lakehouse, but you can still create a lakehouse without schemas if you prefer.
What do schema lakehouses offer?
At its core, a schema lakehouse lets you organize your tables much like folders, but it offers more than that:
- You can create schema shortcuts pointing to either internal sets of tables in other schemas or external folders with tables in ADLS G2 or other sources.
- With schema-enabled lakehouses, Spark allows you to reference lakehouses across multiple workspaces—even within a single query using joins.
- You can also reference non-schema lakehouses by using two-part names or their full namespace (using the implied ‘dbo’ schema), which paves the way for future migration scenarios.
- Additional features such as OneLake security RLS/CLS and Fabric Materialized Views are supported on schema-enabled lakehouses.
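To illustrate the naming conventions above, here is a minimal sketch of how multi-part identifiers compose for Spark SQL, from a two-part schema.table reference up to a four-part cross-workspace reference. All workspace, lakehouse, schema, and table names below are hypothetical examples, and the quoting helper is an illustration, not a Fabric API:

```python
# Sketch: composing multi-part table identifiers for Spark SQL in
# schema-enabled lakehouses. All names here are hypothetical.

def qualified_name(*parts: str) -> str:
    """Join identifier parts with dots, backtick-quoting any part
    that contains spaces or other non-identifier characters."""
    def quote(p: str) -> str:
        return f"`{p}`" if not p.replace("_", "").isalnum() else p
    return ".".join(quote(p) for p in parts)

# Within the current lakehouse: schema.table
print(qualified_name("sales", "orders"))
# → sales.orders

# Another lakehouse in the same workspace: lakehouse.schema.table
print(qualified_name("FinanceLH", "dbo", "invoices"))
# → FinanceLH.dbo.invoices

# Cross-workspace: workspace.lakehouse.schema.table
print(qualified_name("Contoso Analytics", "SalesLH", "sales", "orders"))
# → `Contoso Analytics`.SalesLH.sales.orders
```

Names containing spaces (such as a workspace called "Contoso Analytics") need backtick quoting when referenced in a Spark SQL query.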

To use schema-enabled lakehouses in Notebooks, ensure you have a schema-enabled lakehouse pinned, or no lakehouse pinned at all. Spark cannot access a schema-enabled lakehouse while a non-schema lakehouse is pinned as the notebook's default. Soon, this configuration will become more flexible, allowing you to set which mode all Spark runs in a workspace should use.
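For example, in a notebook with a schema-enabled lakehouse (or none) pinned, you can join a schema-enabled lakehouse with a non-schema one, addressing the latter through its implied dbo schema. The lakehouse, schema, and table names here are hypothetical; this sketch only builds the query text, which you would pass to spark.sql in the notebook:

```python
# Sketch: a cross-lakehouse join as it might appear in a Fabric
# notebook. SalesLH and LegacyLH are hypothetical lakehouse names;
# the non-schema lakehouse is reached via its implied dbo schema.
query = """
SELECT o.order_id, c.customer_name
FROM SalesLH.sales.orders AS o
JOIN LegacyLH.dbo.customers AS c
  ON o.customer_id = c.customer_id
""".strip()

# In the notebook you would run: df = spark.sql(query)
print(query)
```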
What’s coming next?
Some limitations remain for schema-enabled lakehouses in Spark; support for the following is rolling out over the coming months:
- Spark Views support
- Shared lakehouse support
- LH stored UDF support
- External ADLS tables
- Workspace Private Links support
- Workspace Outbound Traffic Protection
All of these limitations have workarounds; for details, see Lakehouse schemas – Microsoft Fabric | Microsoft Learn.
What about existing non-schema lakehouses?
We continue to support non-schema lakehouses and are working towards complete feature parity between the two types. Spark fully interoperates across both, so you can query and join schema and non-schema lakehouses together. Soon, we will introduce tools to help customers transition their lakehouses from non-schema to schema-enabled versions, letting you benefit from the enhanced features without moving data or incurring downtime.