Microsoft Fabric Updates Blog

Evaluate your Fabric Data Agents programmatically with the Python SDK (Preview)

We’re excited to announce that native support for evaluating Data Agents through the Fabric SDK is now available in Preview. You can now run structured evaluations of your agent’s responses using Python — directly from notebooks or your own automation pipelines.

Whether you’re validating accuracy before deploying to production, tuning prompts for better performance, or benchmarking improvements over time, the new APIs in the Fabric Data Agents SDK will help you test and iterate with confidence.

This blog post will walk you through how to:

  • Create a Fabric Data Agent using the SDK.
  • Connect your Lakehouse data source and select tables.
  • Define a ground truth dataset.
  • Run an evaluation and capture results.
  • Customize how answers are judged using your own prompt.
  • View detailed metrics and logs for debugging.

Prerequisites

Before running evaluations with the SDK, ensure you have a Lakehouse set up and populated with sample data. We recommend using the AdventureWorks dataset for testing purposes. You can follow the official Microsoft Fabric guide to create a Lakehouse and load the required tables. This step is essential to ensure your Data Agent has access to a structured schema for answering questions.

For setup instructions, refer to: Create a Lakehouse with AdventureWorksLH

Step 1: Install the Fabric SDK

To evaluate agents programmatically, first install the fabric-data-agent-sdk in your notebook:

%pip install -U fabric-data-agent-sdk

Step 2: Create or connect to a data agent

Use the SDK to create a new agent or connect to an existing one:

from fabric.dataagent.client import create_data_agent

data_agent_name = "ProductSalesDataAgent"
data_agent = create_data_agent(data_agent_name)

Step 3: Add your Lakehouse and select tables

Once your agent is created, add the Lakehouse as a data source and select the relevant tables:

# Add Lakehouse (optional, if not already added)
data_agent.add_datasource("AdventureWorksLH", type="lakehouse")

datasource = data_agent.get_datasources()[0]

# Select tables from dbo schema
tables = [
    "dimcustomer", "dimdate", "dimgeography", "dimproduct",
    "dimproductcategory", "dimpromotion", "dimreseller",
    "dimsalesterritory", "factinternetsales", "factresellersales"
]
for table in tables:
    datasource.select("dbo", table)

# Publish the data agent
data_agent.publish()

Step 4: Define evaluation questions and expected answers

Create a set of questions with the correct (expected) answers to evaluate how well your agent performs:

import pandas as pd

df = pd.DataFrame(columns=["question", "expected_answer"], data=[
    ["What were our total sales in 2014?", "45,694.7"],
    ["What is the most sold product?", "Mountain-200 Black, 42"],
    ["What are the most expensive items that have never been sold?", "Road-450 Red, 60"]
])
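If you maintain your test set outside the notebook, the same DataFrame can be loaded from a CSV file instead of being defined inline. A minimal sketch (the inline CSV below stands in for a real file you would pass to pd.read_csv):

```python
import io

import pandas as pd

# Illustrative CSV content; in practice, point pd.read_csv at your file,
# e.g. pd.read_csv("ground_truth.csv"). Quoting protects commas in answers.
csv_text = """question,expected_answer
What were our total sales in 2014?,"45,694.7"
What is the most sold product?,"Mountain-200 Black, 42"
What are the most expensive items that have never been sold?,"Road-450 Red, 60"
"""

df = pd.read_csv(io.StringIO(csv_text))
print(df.shape)  # (3, 2)
```

Keeping the ground truth in version control alongside your code makes it easy to grow the test set as your agent evolves.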

Step 5: Run the evaluation

Call evaluate_data_agent() to run the test set against the agent. You can also specify where to store the output tables and what stage of the agent to test (e.g., sandbox):

from fabric.dataagent.evaluation import evaluate_data_agent

table_name = "demo_evaluation_output"
evaluation_id = evaluate_data_agent(
    df,
    data_agent_name,
    table_name=table_name,
    data_agent_stage="sandbox"
)

print(f"Evaluation ID: {evaluation_id}")
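Conceptually, an evaluation run asks the agent each question and judges the response against the expected answer. A minimal, self-contained sketch of that loop — using a stand-in agent and a simplistic exact-match judge, not the SDK's actual implementation:

```python
import pandas as pd

def fake_agent(question: str) -> str:
    # Stand-in for the deployed Data Agent; returns canned answers.
    canned = {
        "What were our total sales in 2014?": "45,694.7",
        "What is the most sold product?": "Touring-1000 Blue, 46",
    }
    return canned.get(question, "unknown")

def judge(expected: str, actual: str) -> str:
    # Simplistic exact-match critic; the real SDK uses an LLM-based critic.
    return "true" if expected.strip().lower() == actual.strip().lower() else "false"

df = pd.DataFrame({
    "question": ["What were our total sales in 2014?",
                 "What is the most sold product?"],
    "expected_answer": ["45,694.7", "Mountain-200 Black, 42"],
})

results = [
    judge(row.expected_answer, fake_agent(row.question))
    for row in df.itertuples()
]
print(results)  # ['true', 'false']
```

The SDK replaces the exact-match judge with an LLM critic, which is why semantically equivalent but differently formatted answers can still pass.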

Step 6 (Optional): Use a custom critic prompt

Want more control over how correctness is judged? Pass in your own critic_prompt with {query}, {expected_answer}, and {actual_answer} placeholders:

critic_prompt = """
Given the following query, expected answer, and actual answer, please determine if the actual answer is equivalent to the expected answer. If they are equivalent, respond with 'yes'.

Query: {query}

Expected Answer:
{expected_answer}

Actual Answer:
{actual_answer}

Is the actual answer equivalent to the expected answer?
"""

evaluation_id = evaluate_data_agent(
    df,
    data_agent_name,
    critic_prompt=critic_prompt,
    table_name=table_name,
    data_agent_stage="sandbox"
)
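The placeholders are standard Python format fields, so you can preview exactly what the critic will see for any row by substituting them yourself (the shortened prompt and sample values below are illustrative):

```python
# Shortened version of a critic prompt, using the same placeholder names
critic_prompt = (
    "Query: {query}\n"
    "Expected Answer: {expected_answer}\n"
    "Actual Answer: {actual_answer}\n"
    "Is the actual answer equivalent to the expected answer?"
)

# Fill the placeholders for one evaluation row to preview the critic's input
rendered = critic_prompt.format(
    query="What were our total sales in 2014?",
    expected_answer="45,694.7",
    actual_answer="45694.70",
)
print(rendered)
```

Previewing a few rendered prompts is a quick way to sanity-check your wording before spending an evaluation run on it.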

Step 7: View summary and detailed results

Use the built-in SDK functions to retrieve the results of your evaluation:

View summary

from fabric.dataagent.evaluation import get_evaluation_summary

eval_summary_df = get_evaluation_summary(table_name)

eval_summary_df
DataFrame of the evaluation summary, showing the number of true, false, and unclear responses.
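To turn those counts into a single accuracy number, you can aggregate the summary yourself. The column names below are assumptions for illustration — inspect eval_summary_df.columns for the actual schema returned by get_evaluation_summary:

```python
import pandas as pd

# Illustrative summary shape; the real table comes from get_evaluation_summary.
eval_summary_df = pd.DataFrame([{"true": 7, "false": 2, "unclear": 1}])

row = eval_summary_df.iloc[0]
total = int(row["true"] + row["false"] + row["unclear"])
accuracy = row["true"] / total
print(f"{accuracy:.0%} of answers judged correct")  # 70% of answers judged correct
```

Tracking this number across evaluation runs gives you a simple regression signal as you tune prompts or change data sources.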

View detailed logs

from fabric.dataagent.evaluation import get_evaluation_details

eval_details_df = get_evaluation_details(
    evaluation_id,
    table_name,
    get_all_rows=True,
    verbose=True
)
DataFrame output showing the details for a given evaluation ID, including the question, expected answer, judgement, and actual answer.
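For debugging, the rows you care about are the ones the critic did not mark correct. A sketch of filtering those out — the column names here are assumptions, so check eval_details_df.columns against what get_evaluation_details actually returns:

```python
import pandas as pd

# Illustrative detail rows; the real frame comes from get_evaluation_details.
eval_details_df = pd.DataFrame([
    {"question": "What were our total sales in 2014?",
     "expected_answer": "45,694.7",
     "actual_answer": "45,694.7",
     "evaluation_judgement": "true"},
    {"question": "What is the most sold product?",
     "expected_answer": "Mountain-200 Black, 42",
     "actual_answer": "Touring-1000 Blue, 46",
     "evaluation_judgement": "false"},
])

# Keep only rows the critic did not judge correct — these need debugging
failures = eval_details_df[eval_details_df["evaluation_judgement"] != "true"]
print(failures["question"].tolist())
```

Reviewing the failing questions alongside the agent's actual answers usually points directly at missing tables, ambiguous prompts, or gaps in the ground truth.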

Try it out today

You can start evaluating your Data Agents in just a few steps: install the fabric-data-agent-sdk, connect your Lakehouse, define a ground truth dataset, and run evaluate_data_agent. Together, these steps give you everything you need to build, test, and iterate on your Data Agents using the Fabric SDK.
