From Lakehouse to boardroom: Analytics and AI for real insights
If you haven’t already, check out Arun Ulag’s hero blog “FabCon and SQLCon 2026: Unifying databases and Fabric on a single, complete platform” for a complete look at all of our FabCon and SQLCon announcements across both Fabric and our database offerings.
An end-to-end look at what we’re shipping in Fabric Analytics, what we’re prioritizing, and how it delivers real AI value.
Every enterprise I speak with wants the same thing from AI: measurable business value. That value doesn’t emerge from isolated tools or experimental notebooks. It shows up where business users already work—inside Microsoft 365, where decisions are made every day in Excel, Teams, Outlook, and Copilot.
The real challenge is building an analytics stack that can carry data end-to-end: from ingestion and performance-critical transformation, through semantic modeling, all the way to AI experiences that operate inside business workflows without rewrites, resets, or runaway costs.
That end-to-end responsibility is what guides our work across Azure Data Analytics. Our goal is simple but demanding: deliver a complete analytics platform by making data ready and usable for AI, preserving its meaning across every layer, and delivering insight fast, always with a relentless focus on price × performance.
We are also entering an era where the fastest path to insight often starts with an agent and natural language commands. If you’re the kind of developer or power analyst who wants to move faster, isn’t afraid to mix and match tools, and leans on GitHub Copilot CLI, this post is for you. By the end, you’ll see how natural language and agents let you interact directly with your data so you can get answers faster than traditional workflows allow.
Customers like Bajaj Finserv, which operates a broad financial services portfolio, have adopted Fabric as their unified analytics platform, and the shift has changed how analytics impacts their business.
“Moving to Microsoft Fabric helped consolidate our data foundation into one governed, observable platform. Instead of managing fragmentation, my team now focuses on building reliable patterns that scale with deeply integrated capabilities like Fabric Data Warehouse and Spark. The shift reduced costs and operational friction, and restored confidence.”
Nagaraju Gutlapalli, Head of Data Engineering, Bajaj Finserv
At FabCon, we’re shipping major updates across the entire Fabric Analytics stack—faster data processing, a stronger and more scalable SQL warehouse, a semantic layer that makes AI trustworthy, and agents that put governed answers directly inside Microsoft 365.
Contents
Data Engineering
Data Warehouse
Power BI
AI & Data Agents
The end-to-end vision
New developer tools: Agent Skills for Fabric
Get started at FabCon Atlanta
Data Engineering
Speed, scale, and the relentless pursuit of best performance at the lowest cost
If you are building an AI-ready analytics platform, everything starts with data engineering. And the single most important attribute of a data engineering platform is price × performance.
If data processing is slow or expensive, every downstream layer suffers: models lag, insights stall, and AI becomes impractical at scale. That’s why Fabric Data Engineering is designed to push performance as far as possible on open formats like Delta and Parquet, without forcing code changes.
The backbone of Fabric Data Engineering is Apache Spark, significantly enriched with improvements we regularly contribute back to the open-source community. Our Native Execution Engine has been delivering significantly lower latency for Parquet and Delta workloads under vectorized execution. But we did not stop there.
Let me share what we have done to push Fabric Spark into a category of its own.
Performance for every format
The Native Execution Engine for Fabric Data Engineering provides you with a 6x performance boost over OSS Spark, with no code changes necessary. Furthermore, Z-order and Liquid Clustering optimizations are fully supported for both reads and writes.
Parallel snapshot loading dramatically reduces Delta metadata read time for tables with many files. If you have wide Delta tables with thousands of partitions, you will experience the impact immediately.

Figure 1: Price performance of Native execution engine for Fabric Data Engineering compared against OSS Spark.
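Conceptually, parallel snapshot loading amounts to fanning file-level metadata reads out across a pool of workers instead of scanning them one by one. Here is a minimal Python sketch of that idea; the `read_file_stats` loader and file names are invented for illustration and are not Fabric’s actual implementation:

```python
from concurrent.futures import ThreadPoolExecutor

def read_file_stats(path):
    # Placeholder for reading one Delta log / checkpoint fragment;
    # here we just derive a fake row count from the path string.
    return {"path": path, "rows": len(path)}

def load_snapshot(paths, max_workers=8):
    # Fan metadata reads out across a thread pool instead of
    # scanning the files sequentially; map() preserves input order.
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        return list(pool.map(read_file_stats, paths))

files = [f"part-{i:05d}.parquet" for i in range(4)]
snapshot = load_snapshot(files)
```

For tables with thousands of files, the win comes from overlapping many small I/O-bound reads, which is exactly where a thread pool shines.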
Runtime and compute ergonomics
Runtime 2.0 brings Spark 4.0 and Delta Lake 4.0 to Fabric in preview. Spark 4.0 includes significant query planning improvements, and Delta Lake 4.0 introduces features like variant data types.
The new Resource Profiles capability offers a simplified user experience for expressing the intent of your job and sets you up with the recommended set of Spark configurations. We want your startup times to always be instantaneous, so we are in the process of rolling out Custom Live Pools to preview. Workspace admins can create dedicated warm compute pools with any node size and count. You express intent, and Fabric handles the configuration and startup latency.

Figure 2: The new Resource Profiles capability sets you up with the recommended set of Spark Configurations.
Materialized Lake Views (MLVs) are an exciting new Fabric capability that lets customers build and chain together pre-computed views in their lakehouse, update them incrementally, and apply data quality constraints. MLVs are now Generally Available, making it easier to implement medallion architecture on Fabric and make your pipelines production-ready, with broader clause support for incremental refresh, support for multiple schedules in a single lakehouse, in-place updates, PySpark authoring, and stronger data quality enforcement.

Figure 3: Materialized lake views make it easier to implement medallion architecture on Fabric and make your pipelines production ready.
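The core idea behind incremental refresh is that only rows beyond the last processed watermark get folded into the precomputed result, rather than recomputing the view from scratch. A toy Python sketch of that pattern follows; the schema, watermark field, and aggregation are invented for illustration and are not MLV syntax:

```python
def refresh_incremental(view, new_rows, watermark):
    # Fold only rows newer than the watermark into the precomputed
    # aggregate, mirroring how an incremental refresh avoids a full
    # recompute (and makes reprocessing a batch idempotent).
    for row in new_rows:
        if row["ts"] <= watermark:
            continue  # already folded into the view
        key = row["city"]
        view[key] = view.get(key, 0) + row["trips"]
        watermark = max(watermark, row["ts"])
    return view, watermark

view, wm = {}, 0
batch1 = [{"ts": 1, "city": "NYC", "trips": 10}]
batch2 = [{"ts": 1, "city": "NYC", "trips": 10},  # duplicate of batch1
          {"ts": 2, "city": "NYC", "trips": 5}]
view, wm = refresh_incremental(view, batch1, wm)
view, wm = refresh_incremental(view, batch2, wm)
```

Note how the duplicate row in the second batch is skipped by the watermark check, so the view ends at 15 trips, not 25.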
The new Spark ODBC and ADO.NET drivers enter preview, while the JDBC drivers reach general availability. Multiple authentication modes are supported, including Azure AD, service principals, and managed identities.
AI-assisted engineering
The new and improved Copilot in Data Engineering and Data Science experience (Preview) is context-aware by default: it understands your notebooks, data, and environment from the moment you start. With built-in awareness and focused context control, Copilot helps teams write, understand, debug, and optimize notebooks faster, while gaining performance insights as they build. It can reach across Fabric and reference the workspace for additional context. The VS Code experience is also improved through the Data Engineering extension.
Data Science and Machine Learning are core to modern data engineering, and we are making significant progress in this area. AutoML in Fabric (Generally Available) brings automated model selection, feature engineering, and hyperparameter tuning directly into the Fabric Data Science experience, tightly integrated with notebooks, experiments, and MLflow tracking to reduce time from data to production-ready models. We are also announcing Multimodal AI Functions (Preview), extending Fabric’s built-in AI Functions beyond text to support images and PDFs, and enabling AI-powered transformations over unstructured data directly from pandas or PySpark workflows.
Data Warehouse
Enterprise architecture that scales with you
A data warehouse is where raw data becomes business-ready data. And in an AI world, the warehouse needs to do more than store and serve data: it has to be fast enough for interactive workloads, intelligent enough to maintain itself, and flexible enough to support both human analysts and AI agents.
Predictable performance at scale
Custom SQL Pools (Preview) give you user-defined, customizable, isolated pools of compute resources. You can create separate pools for ETL, reporting, and ad-hoc queries, with physical resource isolation so concurrent queries do not interfere with each other.
The architecture separates control flow from physical execution: a single SQL frontend handles control flow and distributed query processing, while routing queries to the appropriate pool based on your configuration. One workspace, multiple pools, complete isolation.

Figure 4: Custom SQL Pools offer predictable performance at scale.
Freshness, without operational tax
The new metadata sync for SQL analytics endpoints addresses one of the most common customer complaints: data staleness. We are delivering a 30-second SLO for data freshness. Once delta logs for a data change are available in storage, you can query it via the SQL analytics endpoint within 30 seconds, regardless of whether the endpoint was previously deactivated. This feature will roll out to preview in the next few weeks.
In addition, two more features are becoming generally available.
Proactive Statistics Refresh frontloads query optimizer statistics maintenance immediately after data changes, so your queries are not paying the cost of stale stats at execution time. Incremental Statistics Refresh updates statistics for large tables incrementally rather than re-sampling entire columns, dramatically reducing maintenance overhead for tables with billions of rows.
These are the kinds of under-the-hood optimizations that separate a warehouse that works at demo scale from one that works at enterprise scale.
AI and action where the data lives
We are adding built-in AI functions directly to T-SQL: AI_ANALYZE_SENTIMENT, AI_CLASSIFY, AI_EXTRACT, AI_GENERATE_RESPONSE, AI_SUMMARIZE, AI_TRANSLATE, and AI_FIX_GRAMMAR. SQL developers can now invoke AI capabilities without leaving the language they already know, without standing up separate services, and without moving data out of the warehouse.

Figure 5: Built-in AI functions are now directly available in T-SQL.
Additionally, you can create intelligent, configurable alerts and follow-up actions, triggered by results of your queries. If your key business metrics are out of the ordinary, send a Teams message to the right folks; if your data pipeline resulted in extreme data skew, automatically fire off an email to the ops team.

Figure 6: Create rules on SQL query results to detect data issues, monitor KPIs, and automatically trigger alerts or Fabric workflows.
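The pattern behind these alerts is simple: evaluate predicates over query results, then run an action for each hit. A minimal illustrative Python sketch — rule names, fields, thresholds, and the notification string are all invented for the example, not a Fabric API:

```python
def evaluate_rules(rows, rules):
    # Check every rule predicate against every result row and run
    # the rule's action for each match (e.g. send a Teams message).
    fired = []
    for rule in rules:
        for row in rows:
            if rule["predicate"](row):
                fired.append((rule["name"], rule["action"](row)))
    return fired

rules = [{
    "name": "revenue_drop",
    "predicate": lambda r: r["revenue"] < 1000,
    "action": lambda r: f"notify ops: revenue {r['revenue']} in {r['region']}",
}]
rows = [{"region": "EU", "revenue": 950}, {"region": "US", "revenue": 5000}]
alerts = evaluate_rules(rows, rules)
```

In Fabric the predicate is your SQL query and the action is a configured alert or workflow; the sketch just shows the shape of the rule evaluation.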
Strengthening our fundamentals
The MERGE command is now Generally Available: a single, standardized statement for INSERT, UPDATE, and DELETE operations, and the workhorse of incremental data loading patterns. If you are building medallion architectures (and you should be), MERGE is the verb that moves data from Silver to Gold cleanly and efficiently.
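As a mental model, MERGE performs an upsert in one statement: source rows that match the target on a key update it, and rows that don’t are inserted (the real T-SQL statement can also delete via WHEN NOT MATCHED clauses). A toy Python sketch of the matched/not-matched logic, purely illustrative:

```python
def merge(target, source, key):
    # WHEN MATCHED -> update the existing row with source values;
    # WHEN NOT MATCHED -> insert the source row. (Delete clauses of
    # the real MERGE statement are omitted from this sketch.)
    for row in source:
        target[row[key]] = {**target.get(row[key], {}), **row}
    return target

gold = {1: {"id": 1, "total": 100}}                      # Gold layer table
silver_changes = [{"id": 1, "total": 120},               # updates id 1
                  {"id": 2, "total": 40}]                # inserts id 2
gold = merge(gold, silver_changes, "id")
```

One pass over the change set both updates and inserts, which is exactly why MERGE is the natural fit for Silver-to-Gold incremental loads.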
DacFx integration in the web experience brings consistent, Git-based schema management to Fabric Warehouses. Export and import warehouse definitions as database projects, capture and review schema changes using a single DacFx-based model, and deploy with predictable, repeatable behavior across dev/test/prod.
Even more powerful: cross-warehouse reference support enables dependency-aware development across multiple warehouses. Build Bronze, Silver, and Gold layers across warehouses without broken references — Git commits and pipelines execute in the correct order based on cross-warehouse dependencies.
We are enhancing Migration Assistant with live connectivity to source systems like Azure Synapse. No DACPAC extraction required, just connect, and the assistant fetches object metadata, translates it to Fabric DW-compatible schemas, and applies it. Start faster, reduce complexity, and reduce risk.
A new Monitoring UX provides a one-stop shop for live and completed queries with performance comparison across executions. Query Insights now exposes full query text, SQL pool names per query, and live running queries. SQL Pool Insights adds a dedicated view for understanding whether your SQL Pool is under pressure.

Figure 7: A new Monitoring UX provides a one-stop shop for live and completed queries with performance comparison across executions.
Finally, several critical enterprise features are now Generally Available – SQL Audit Logs for Fabric Data Warehouse and SQL Endpoint, Outbound Access Protection, COPY INTO and OPENROWSET, and SSMS 22.5.0 integration.
Power BI
The semantic layer that makes AI trustworthy
You probably know this already: AI is only as good as the semantic context it operates on. You can put the most advanced language model in front of raw tables, and it will sound confident, right up until it is wrong. What turns AI from a guessing machine into a reasoning system is a well‑curated semantic model that encodes business meaning: measures, relationships, hierarchies, time intelligence, and definitions that reflect how your organization works. None of this works if semantic models are an afterthought.
This is why Power BI is foundational. Because (once you strip away aspirational demos) at scale, no other BI platform combines semantic expressiveness, performance, and installed base in the way Power BI does. Customers chose Power BI because it can represent real business complexity without compromise, and because it is the fastest BI engine in the world. Our responsibility is clear: to carry your investments forward into the era of AI, without forcing rewrites, ports, or semantic resets, so the meaning you have already built continues to power both human analysis and AI reasoning.
Translytical Task Flows (Generally Available)
Translytical Task Flows, now Generally Available, enable users to take action directly from Power BI reports: add, update, or delete data, or trigger workflows in other systems, without leaving the report. This transforms Power BI from a read-only analytics surface into an operational tool. See an anomaly in a report? Fix the underlying data right there.
Report Copilot for mobile
Ask questions about your data using voice or text in the Power BI mobile app and get instant answers or visuals. A data assistant in your pocket, literally. This is Copilot meeting users where they already are, not requiring them to context-switch to a separate tool.

Figure 8: Ask questions about your data using voice or text in the Power BI mobile app and get instant answers or visuals.
TMDL View on the web
View and edit your data model’s code directly in the Power BI web interface using Tabular Model Definition Language (TMDL). This gives developers and data modelers more control, more transparency, and the ability to make precise changes without roundtripping through external tools.

Figure 9: TMDL View allows you to view and edit your data model’s code directly in the Power BI web interface.
Direct Lake over OneLake (Generally Available)
Direct Lake over native Delta tables is now Generally Available, bringing a fully streamlined path for Power BI to query OneLake data in its original Delta format without duplication or import steps. This GA release delivers near real‑time analytics by keeping semantic models directly connected to lake data, removing refresh delays and enabling faster, more efficient access to large‑scale datasets.
Table Visual: Custom Totals and Modern Defaults
Customize totals in table visuals and enjoy cleaner, more consistent default styles. These are the kinds of polish improvements that add up across an organization with thousands of reports.
AI & Data Agents
Every Office user, chatting with their data
This is where the work across the stack comes together. All the work we do in data engineering, warehousing, and semantic modeling has a single ultimate purpose: making data accessible to everyone in the organization, not just analysts and engineers.

Figure 10: Fabric Data Agents can reason over data in OneLake, support deeper analysis, and deliver insights.
Fabric Data Agents (Generally Available)
Data Agents are the last mile in the analytics pipeline. They sit on top of your semantic models and OneLake data, understand the context encoded in your Power BI measures and relationships, and expose that intelligence to all users through natural language conversations in M365 Copilot. When agents are grounded in governed semantic models, AI stops guessing, and starts reasoning with the same definitions the business already trusts.
With Data Agents in Microsoft Fabric now generally available, users can seamlessly build and interact with agents across a wide variety of data sources, including Lakehouse, Warehouse, Semantic Models, Eventhouse, and SQL Databases. Configuration is highly flexible, allowing you to tailor each agent’s behavior with both agent-level and data source–specific instructions, as well as custom example queries.
Sharing and publishing Data Agents within Microsoft Fabric is straightforward, enabling easy operationalization and collaboration across teams. This release also brings robust lifecycle management features to the platform, including diagnostics, Git integration, and deployment pipelines as part of Microsoft Fabric’s Application Lifecycle Management (ALM) suite. These tools empower you to troubleshoot, manage, and evolve your Data Agents with confidence, supporting a broad range of scenarios and use cases.
Building on these advancements, we are also excited to introduce several new capabilities and experiences in public preview:
Security and governance in Data Agents
Recent enhancements to Fabric Data Agents focus on strengthening security and governance. The integration with Purview enables comprehensive auditing, eDiscovery, data lifecycle management, communications compliance, and classification by capturing prompt and response telemetry and user context, ensuring enterprise-grade protection and compliance. Additionally, outbound access protection is being expanded for Data Agents, helping organizations prevent sensitive data exfiltration and meet stringent security requirements. Together, these updates offer better tools for monitoring, controlling, and safeguarding data interactions when using data agents in Fabric.
Source enhancements in Data Agents
We are expanding Fabric Data Agent’s data source capabilities with significant improvements. By introducing Graph as a data source, we allow Fabric users to model complex relationships in their data and leverage these Graphs in data agents for AI-powered insights. Additionally, support for KQL User Defined Functions (UDFs) enables richer, more optimized querying for Eventhouse and other KQL-backed sources, translating natural-language questions into efficient, secure queries. These enhancements make data agents more versatile and powerful, delivering faster analytics and broader scenario coverage for end users.

Figure 11: Fabric data agents now support Graph as a data source.
The end-to-end vision
From Lakehouse to boardroom
Step back and look at what we have built. A single data pipeline that flows like this:
Spark ingests and transforms raw data at speed: vectorized execution, instant-start pools, auto-tuned configurations, all optimized for price × performance over open Delta and Parquet formats on OneLake.
Fabric Data Warehouse then makes data enterprise-ready: workload-isolated SQL pools, 30-second freshness SLOs, AI functions built into T-SQL, and Git-based CI/CD for production-grade deployments.
Power BI adds the semantic layer: the measures, relationships, hierarchies, and business context that turn raw numbers into organizational knowledge. This is the layer that makes AI trustworthy.
Data Agents take that semantic knowledge and put it in the hands of every user through M365 Copilot: natural language, no training required, governed by Purview, secured by outbound access protection.
Every layer runs on OneLake, over open data formats. No data movement between layers. No proprietary storage. One estate, governed consistently, accessible from any compute engine. One security model. This is not merely an architectural diagram, nor a single engine posing as many tools: it is a running production system, built on proven open-source and Microsoft-built engines, serving thousands of organizations today.
The design principle that cuts across all of it is price × performance. Not a tradeoff between price OR performance, but the product of both. Every feature we ship is evaluated against the question: does this make customers faster AND more economical? Native Execution Engine, Resource Profiles, Custom Live Pools, Proactive Statistics, Custom SQL Pools: these are expressions of a single obsession our team has with providing value. Each layer can be adopted independently, but the economics improve materially when they are used together.
New developer tools: Agent Skills for Fabric
Last, but certainly not least: Agent Skills for Fabric in GitHub Copilot CLI
I want to close with something a bit different, something for the developers who live in the terminal, but also for power analysts who want to get to insights as quickly as possible.
We are announcing Agent Skills for Fabric in GitHub Copilot CLI, an open-source set of purpose-built plugins that allow you to use natural language to harness Microsoft Fabric, end-to-end. GitHub Copilot CLI is GitHub Copilot for your terminal: a command line tool that lets you talk to your shell in natural language and has Copilot generate, explain, and run commands or code directly from the CLI. With Agent Skills for Fabric, your natural language commands now wield the power of the Fabric engines.
You can start with something as simple as “Document my workspace” (don’t forget to mention the name!), or something more complex such as “Demo NYC Taxi Trip data is available here https://www.nyc.gov/site/tlc/about/tlc-trip-record-data.page. Create a Fabric medallion architecture project for all trips in 2019.”
These are specialized skills: for Spark authoring and consumption, SQL warehouse authoring and consumption, Eventhouse authoring and consumption, Power BI semantic model interaction, and end-to-end medallion architecture orchestration, each with deep domain knowledge about Fabric patterns, best practices, and operational workflows.
This points to the future of the developer experience for data platforms. To learn more, check out the Agent Skills for Fabric GitHub repo.
Figure 12: Windows PowerShell terminal displaying a prompt ready for user input.
Get started at FabCon Atlanta
This is a pivotal moment for Microsoft Fabric and for every organization building its data and AI strategy. The announcements we are making this week represent a shift in capability across every layer of the analytics stack.
I encourage you to:
- Attend the sessions — from deep-dive workshops on Data Agents to keynote sessions on Data Engineering, Warehousing, Power BI, and the future of AI in Fabric — there’s something for everyone!
- Try the features — Custom Live Pools, Custom SQL Pools, Data Agents (Generally Available) and Agent Skills for Fabric and many more are available now.
- Connect with our experts — our engineers are here and eager to hear your feedback. Come find us at the Ask the Experts booths in the expo hall.
The foundation for AI is not a model — it is your data, curated, governed, and made accessible across the organization. With Fabric, we are building a single, production-ready analytics system that turns trusted data into action, whether in a notebook, a SQL query, a Power BI report, or a conversation in M365 Copilot. That’s what it takes to move from lakehouse to boardroom — and that is exactly what we are delivering!