Microsoft Fabric Updates Blog

SQL database in Fabric: Built for SaaS, Ready for AI

Since SQL database in Microsoft Fabric became generally available in November, customer adoption has grown rapidly. Organizations are using it to simplify their data estates, eliminate ETL pipelines, and get their operational data ready for analytics and AI—without managing infrastructure. It’s a fully managed, SaaS-native transactional database built for what comes next.

If you’ve been looking for a simpler path from transactions to insights, this is it.

Built for SaaS

SQL database in Fabric is built on the same SQL Database Engine as Azure SQL Database, so you get the T-SQL compatibility and tooling you already know. But the experience is different: you simply provide a database name and it's ready, with automatic scaling and auto-pause/resume built in. SaaS by default, PaaS configurable.

What makes it distinct is automatic replication to OneLake and autonomous capabilities. As data lands in your SQL database, it’s mirrored to OneLake in near real-time as Delta tables—no ETL, no pipelines, no extra steps. That data is immediately available for Spark notebooks, Power BI reports, cross-database queries, and machine learning workflows.
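Because the mirrored Delta tables surface through the SQL analytics endpoint, they can be joined with other Fabric items using ordinary three-part names. A minimal sketch (the `OrdersDb` and `SalesLakehouse` item names and their tables are hypothetical):

```sql
-- Join the automatically mirrored SQL database tables with lakehouse data
-- from the SQL analytics endpoint. No pipeline moved this data; mirroring did.
SELECT o.OrderId, o.CustomerId, p.Category
FROM OrdersDb.dbo.Orders AS o
JOIN SalesLakehouse.dbo.ProductDim AS p
    ON p.ProductId = o.ProductId
WHERE o.OrderDate >= DATEADD(day, -7, GETDATE());
```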

Enterprise features are built in from the start. Row-level security, customer-managed keys, private endpoints, and SQL auditing (preview) are all available. Backup customizations and retention policies give you control over data protection.
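Row-level security works the way it does elsewhere in the SQL family: an inline table-valued predicate function bound to a security policy. A minimal sketch, assuming a hypothetical `dbo.Orders` table with a `SalesRep` column:

```sql
-- Each sales rep sees only rows where SalesRep matches their user name.
CREATE SCHEMA Security;
GO
CREATE FUNCTION Security.fn_securitypredicate(@SalesRep AS nvarchar(128))
    RETURNS TABLE
    WITH SCHEMABINDING
AS
    RETURN SELECT 1 AS fn_result
    WHERE @SalesRep = USER_NAME();
GO
-- The filter predicate is applied transparently to every query on dbo.Orders.
CREATE SECURITY POLICY Security.SalesFilter
    ADD FILTER PREDICATE Security.fn_securitypredicate(SalesRep)
    ON dbo.Orders
    WITH (STATE = ON);
```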

For a full breakdown of what shipped at GA, see the SQL database in Fabric GA announcement or watch the video.

Ready for AI

Because your data lands in OneLake in an open Delta format, it’s ready for AI and machine learning scenarios without additional preparation. Data scientists can access it directly from Spark, run experiments in Fabric notebooks, build models on live operational data, and integrate with Fabric capabilities like Real-Time Intelligence, OneLake semantic model, Ontology, and data agents.

The Microsoft SQL platform also supports the native vector data type and vector indexing, enabling semantic search and retrieval-augmented generation (RAG) patterns directly in your database. Copilot is integrated into the Query Editor and the SQL analytics endpoint, so you can use natural language to explore schemas, generate queries, and get explanations of existing code or troubleshoot performance.
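With the native vector type, embeddings live next to the operational rows they describe, so a RAG retrieval step is just a query. A sketch with a hypothetical `dbo.Documents` table (dimension 3 is used for brevity; real embedding models produce far larger vectors, and the column dimension must match your model):

```sql
-- Store embeddings alongside the content they represent.
CREATE TABLE dbo.Documents (
    DocId int PRIMARY KEY,
    Content nvarchar(max),
    Embedding vector(3)
);

-- Query embedding for the user's question, produced by the same model.
DECLARE @q vector(3) = '[0.10, 0.20, 0.30]';

-- Top-5 nearest documents by cosine distance.
SELECT TOP (5) DocId, Content,
       VECTOR_DISTANCE('cosine', Embedding, @q) AS Distance
FROM dbo.Documents
ORDER BY Distance;
```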

Developer experience

You can work with SQL database in Fabric using the tools you already have: SQL Server Management Studio 22 and the mssql extension for VS Code—both with Fabric browsing and GitHub Copilot integration—or the web-based Query Editor in the Fabric portal.

For deployment and portability, SqlPackage supports import, export, and publish operations. SQL projects integrate with Fabric source control, and support for Terraform and Fabric CLI makes automation straightforward.

Get started and stay connected

Try SQL database in Fabric today! Take advantage of the Fabric forums and ideas voting to let us know what you think.

If you want to go deeper, join us at SQLCon, co-located with FabCon Atlanta, March 16–20, 2026. It’s the first event dedicated to the SQL community within the Fabric ecosystem—expect hands-on sessions, deep dives, and direct access to the product team. Learn more and register.

For ongoing updates, subscribe to the Azure SQL YouTube channel where we publish weekly Data Exposed episodes, feature walkthroughs, and community content.

Related blog posts

Operationalizing Agentic Applications with Microsoft Fabric

March 12, 2026, by Mehrsa Golestaneh

Agentic apps are moving quickly from prototypes to real workloads. But once you go beyond a proof of concept (POC), the hard part isn’t getting an agent to respond; it’s knowing what the agent did, whether it was safe and correct, and how it’s impacting the business. Let’s explore what it takes to operationalize agentic … Continue reading “Operationalizing Agentic Applications with Microsoft Fabric”

March 10, 2026, by Sandeep Pawar

Most enterprise data lives in free text – tickets, contracts, feedback, clinical notes, and more. It holds critical information but doesn’t fit into the structured tables that pipelines expect. Traditionally, extracting structure meant rule-based parsers that break with every format change, or custom NLP models that take weeks to build. LLMs opened new possibilities, … Continue reading “ExtractLabel: Schema-driven unstructured data extraction with Fabric AI Functions”