Microsoft Fabric Updates Blog

Creator Improvements in the Data Agent

We’re introducing a set of new enhancements for Data Agent creators — designed to make it easier to debug, improve, and express your agent’s logic. Whether you’re tuning example queries, refining instructions, or validating performance, these updates make it faster to iterate and deliver high-quality experiences to your users.

New Debugging Tools

View referenced example queries

You can now inspect which example queries were retrieved and applied when a user asks a question. The Run Steps view shows you exactly which examples influenced the final query — making it easy to confirm that the right examples were used or diagnose when unexpected results occur.

Screenshot of the referenced example queries in the run steps.

If you notice the wrong examples being referenced, try refining your few-shot questions or adding more targeted examples for improved accuracy.
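As a rough illustration of what "more targeted examples" can look like (a sketch only — the exact shape the SDK expects for example queries may differ), few-shot examples are natural-language/SQL pairs, so you can often fix a misfire by adding a pair that covers the pattern the agent got wrong:

```python
# Hypothetical sketch: few-shot examples as question -> SQL pairs.
# Table and column names here are illustrative, not from a real schema.
examples = {
    "What were total sales last month?":
        "SELECT SUM(SalesAmount) FROM FactSales "
        "WHERE OrderDate >= DATEADD(month, -1, GETDATE());",
    # A more targeted pair, added after the Run Steps view showed the
    # generic example above being retrieved for region-level questions:
    "What were total sales in Europe last month?":
        "SELECT SUM(s.SalesAmount) FROM FactSales s "
        "JOIN DimRegion r ON s.RegionKey = r.RegionKey "
        "WHERE r.RegionName = 'Europe' "
        "AND s.OrderDate >= DATEADD(month, -1, GETDATE());",
}
```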

View diagnostic details

A new Diagnostic Summary lets you download a detailed trace of the agent’s reasoning steps — beyond what’s visible in the run step view. Use it to analyze internal logic, clean up data for review, or share with the Fabric support team when reporting issues.



SDK Enhancements for Example Query Validation

The Fabric Data Agent SDK now includes tools to evaluate and improve the quality of your few-shot examples. Using the new evaluate_few_shot_examples() function, you can validate each natural language/SQL pair and receive a detailed summary of which examples passed or failed.


result = evaluate_few_shot_examples(
    examples,
    llm_client=llm_client,
    model_name=model_name,
    batch_size=20,
    use_fabric_llm=True,
)

After running validation:

  • Success Cases – strong examples where the SQL matched the expected result.
  • Failure Cases – examples that didn’t align with the schema or question intent.

You can easily convert these into DataFrames for review:

success_df = cases_to_dataframe(result.success_cases)
failure_df = cases_to_dataframe(result.failure_cases)

display(success_df)
display(failure_df)

Iteratively improving weaker examples helps your Data Agent generate more accurate SQL and deliver better results for new questions.
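One lightweight way to structure that iteration (sketched below with plain dicts — the SDK's actual case objects may expose different fields than `question` and `query`) is to keep a map of hand-corrected SQL and fold it back into your example set before re-running validation:

```python
# Hypothetical helper: swap hand-corrected SQL into failed examples
# before re-running evaluate_few_shot_examples(). Field names are
# illustrative, not the SDK's guaranteed shape.
def revise_failures(failure_cases, revisions):
    """Return failed examples with corrected SQL applied where available."""
    updated = []
    for case in failure_cases:
        question = case["question"]
        if question in revisions:
            updated.append({"question": question, "query": revisions[question]})
    return updated
```

You would then merge the revised pairs with your passing examples and validate again, repeating until the failure list is empty.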

Screenshot of example query validator results.

To explore a full working example, you can check out the Data Agent Example Queries.


Markdown Editor for Instructions

Agent and data source instructions can now be authored using Markdown, making them easier to read, structure, and maintain. This helps both creators and users clearly understand the context and intent behind your Data Agent’s behavior.

Screenshot of adding agent-level instructions to the Data Agent.

Use Markdown to:

  • Define priorities for data sources.
  • Add structured lists, headings, or tables.
  • Clarify query logic or special handling rules.
  • Provide context on tables, columns, and relationships.
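For instance, a data-source instruction block covering the points above might look something like this (an illustrative sketch, not an official template — table names are made up):

```markdown
## Data source priorities
1. Use the `Sales` lakehouse for revenue questions.
2. Fall back to the `Finance` warehouse only for budget questions.

## Query rules
- Always exclude rows where `IsTest = 1`.
- "Last quarter" means the most recent *complete* fiscal quarter.

## Key tables
| Table     | Purpose                |
|-----------|------------------------|
| FactSales | Order-line revenue     |
| DimRegion | Region names and keys  |
```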

This improvement encourages better documentation practices and clearer communication between creators, users, and the Data Agent itself.

For templates and best practices, refer to the Data Agent Configurations documentation.


Multi-Tasking Flow for Easier Creation

We’ve introduced a new multi-tasking flow that allows creators to seamlessly switch between chatting with the Data Agent and configuring its settings — without losing context or progress.

This workflow was built to address key creator pain points:

  • Context switching made simple — Move fluidly between testing questions in chat and editing instructions or examples.
  • Faster iteration — Instantly test configuration changes in chat without reopening or reloading the workspace or losing context about the data source schema.

The result is a smoother, more efficient authoring experience that keeps you focused on improving your agent’s intelligence — not managing tabs or reloading configurations.


Why this matters

Together, these updates make it easier than ever to create, debug, and iterate on Data Agents. You’ll spend less time switching contexts or troubleshooting, and more time building intelligent, reliable agents that deliver accurate insights from your data.

To learn more about configuring your Data Agent, visit the documentation: Configure a Data Agent.
