
Announcing Public Preview of T-SQL Notebook in Fabric


Coauthored by: Priyanka Langade

The T-SQL notebook is now in public preview! This update expands our language offerings and lets T-SQL developers use notebooks for Data Warehouse development. You can manage complex queries and document your process with Markdown cells. With the ability to add a Data Warehouse or SQL analytics endpoint to a notebook, you can run T-SQL directly against the connected warehouse or SQL analytics endpoint. BI analysts benefit as well: they can run cross-database queries to gather insights from multiple warehouses and SQL analytics endpoints.

Most existing notebook functionality is readily available in T-SQL notebooks. For example, T-SQL developers can use the extensive charting features to visualize their query results and co-author notebooks together with their peers. You can also schedule regular runs of a T-SQL notebook with the scheduler, or trigger its execution from a Data Integration pipeline.

To get started with this experience, you can create a T-SQL notebook from either of two entry points:

  • Create a T-SQL Notebook from the Data Warehouse homepage.
  • Create a T-SQL Notebook from an existing Fabric warehouse editor.


Once the notebook is created, T-SQL is set as the default language. You can then add data warehouses or SQL analytics endpoints from the current workspace to the notebook.


Multiple warehouses and SQL analytics endpoints can be added to the same notebook, and one of them must be set as the primary warehouse. The SQL connection string of the primary warehouse is used to execute the T-SQL code; you can switch the primary warehouse from the context menu.

Similar to the Warehouse editor experience, the primary warehouse is always set as the first item in the Data Warehouse list.

Like the SQL editor, the notebook offers familiar T-SQL code templates from the context menu. The first release includes the following options, with more to come; a rough sketch of two of these templates follows the list.

  • Select Top 100
  • Create Table
  • Create As Select
  • Drop
  • Drop and create
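
As a rough illustration, the “Select Top 100” and “Create As Select” templates expand to skeletons along these lines (the table and column names below are placeholders you replace with your own objects):

    -- “Select Top 100” template (placeholder object name)
    SELECT TOP (100) * FROM [dbo].[YourTable];

    -- “Create As Select” template, a CTAS statement (placeholder names)
    CREATE TABLE [dbo].[YourNewTable]
    AS
    SELECT Col1, Col2
    FROM [dbo].[YourTable];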

Clicking the “Run” button on a code cell executes all the T-SQL code in that cell under a single SQL session.
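
Because the whole cell shares one session, session-scoped objects such as temporary tables remain visible for the rest of the cell. A minimal sketch, assuming the connected warehouse supports session-scoped temp tables (dbo.Orders is a hypothetical table):

    -- All statements in this cell run in one SQL session, so the
    -- temp table created here is still visible to the statements below.
    CREATE TABLE #RecentOrders (OrderID INT, Amount DECIMAL(18, 2));

    INSERT INTO #RecentOrders (OrderID, Amount)
    SELECT OrderID, Amount
    FROM dbo.Orders                  -- hypothetical source table
    WHERE OrderDate >= '2024-01-01';

    SELECT COUNT(*) AS RecentOrderCount
    FROM #RecentOrders;

Because each cell gets its own session (see “Run all” below), a temp table like this one would not be visible from another cell.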


Clicking “Run all” runs every code cell in the notebook in order, with each code cell executed under a new SQL session.

A single code cell can contain multiple statements. You can select part of the code and run only the selection; each such execution starts a new SQL session.

By expanding the message panel, you can view a summary of the execution.


The “Table” tab lists the records from the returned result set. If the execution returns multiple result sets, you can switch between them via the dropdown menu.
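
For example, a cell like the following returns two result sets, and you can flip between them in the dropdown (dbo.Orders is a hypothetical table):

    -- Two SELECT statements in one cell produce two result sets;
    -- choose either one from the dropdown in the “Table” tab.
    SELECT TOP (100) *
    FROM dbo.Orders;

    SELECT OrderStatus, COUNT(*) AS OrderCount
    FROM dbo.Orders
    GROUP BY OrderStatus;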


You can use the “Save as table” menu to save the query results to a table using a CREATE TABLE AS SELECT (CTAS) statement. To do so, select the query text in the code cell and then choose “Save as table”.
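
The generated statement is a CTAS along these lines (the target table name below is a placeholder, and dbo.Orders is a hypothetical source):

    -- “Save as table” wraps the selected query in a CTAS statement.
    CREATE TABLE [dbo].[HighValueOrders]   -- placeholder target name
    AS
    SELECT OrderID, OrderDate, Amount
    FROM dbo.Orders
    WHERE Amount > 1000;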

Similarly, you can create a view from the selected query text using the “Save as view” menu in the cell command bar.
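
“Save as view” wraps the selected query text in a view definition, roughly like this (the view name and underlying table are illustrative):

    -- “Save as view” turns the selected query into a view definition.
    CREATE VIEW [dbo].[vw_OrderTotals]     -- placeholder view name
    AS
    SELECT CustomerID, SUM(Amount) AS TotalAmount
    FROM dbo.Orders
    GROUP BY CustomerID;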

Clicking “Inspect” shows charts that represent the data quality and distribution of each column.


You can perform cross-warehouse queries by using three-part naming (warehouse.schema.object) in your T-SQL code.
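
For instance, assuming two attached warehouses named SalesWH and FinanceWH (illustrative names), a cross-warehouse join could look like this:

    -- Three-part names (warehouse.schema.object) let a single query
    -- span multiple attached warehouses or SQL analytics endpoints.
    SELECT s.OrderID, s.Amount, f.InvoiceStatus
    FROM SalesWH.dbo.Orders AS s
    JOIN FinanceWH.dbo.Invoices AS f
        ON f.OrderID = s.OrderID;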

Within the same notebook, you can create code cells that use different languages. For instance, a PySpark code cell can precede a T-SQL code cell. In that case, you can choose to skip running the PySpark code in a T-SQL notebook.

