Microsoft Fabric Updates Blog

Understanding Operations Agent Capacity Consumption, Usage Reporting and Billing (Preview)

At Ignite, we announced operations agents, which help you create autonomous agents that monitor data, infer goals, and recommend actions. Soon, we will enable billing for these agents as the Preview period continues.

Operations agents use Fabric Capacity Units (CUs) like other Fabric features. In the Capacity Metrics App, you’ll find the following four operations for operations agents:

  • Copilot in Fabric usage is accrued when you use the LLM to configure or interact with the agent directly.
  • Operations agent compute is the cost incurred by the agent querying and monitoring the data in the background. This charge is based on the compute required by the agent to evaluate its rules and conditions—there may be additional costs incurred by any data source to serve the required data.
  • Operations agent autonomous reasoning is the LLM usage when a condition is met, and the agent summarizes and reasons over the data to produce its recommendations and messages back to the user for approval.
  • Storage is the cost of retaining Fabric items and events. Data the agent monitors is retained for 30 days and stored within Fabric, incurring corresponding Fabric storage costs.

At each stage of the flow for creating and running an operations agent, the following usage is incurred:

[Diagram: the stages of the operations agent and the meters used at each stage. The text following the image describes it.]
Operations agent flow and meters

When you configure the operations agent, Copilot in Fabric usage is incurred while the agent generates its playbook. You will also see usage on the Eventhouse as queries run to identify the appropriate fields to monitor and the queries the agent should run. Storage costs are incurred to save the agent’s configuration.

When you start the agent, it runs queries and tracks rules in the background. This uses the Operations agent compute meter and periodically runs queries against your Eventhouse, incurring charges there. Storage costs are again incurred for the agent configuration and the cached query results.

Finally, when the monitored conditions are met in the data, the agent uses its LLM to summarize the data and make recommendations. This uses the Operations agent autonomous reasoning meter. If you approve the recommended action, the agent invokes the Power Automate flow. Power Automate is typically licensed per month rather than per run, but consult Power Automate pricing for your situation.
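The stage-to-meter relationship described above can be summarized as a simple lookup. This is a purely illustrative sketch, not a Fabric API; the stage names are invented labels for the configure/run/trigger phases discussed in this post:

```python
# Which meters accrue at each stage of an operations agent's lifecycle,
# per the flow described above. Illustrative only -- not a Fabric API.
STAGE_METERS = {
    "configure": ["Copilot in Fabric", "Eventhouse queries", "OneLake Storage"],
    "run": ["Operations agent compute", "Eventhouse queries", "OneLake Storage"],
    "condition_met": ["Operations agent autonomous reasoning"],
}

def meters_for(stage: str) -> list[str]:
    """Return the meters that accrue usage during the given stage."""
    return STAGE_METERS[stage]

print(meters_for("run"))
# → ['Operations agent compute', 'Eventhouse queries', 'OneLake Storage']
```

Note that OneLake Storage appears in both the configure and run stages: the agent’s configuration and its cached query results are both retained in Fabric.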

Usage Categories

All of the operations agent’s Fabric usage is considered Background usage, even if you are directly interacting with the agent. This is because all Fabric Copilot and AI operations are considered background usage—you can read more about it in the Copilot Fabric Consumption article.

Use the Capacity Metrics App to observe types of operations, their duration, and the percentage of the capacity consumed.

[Screenshot: the Capacity Metrics App listing the capacity operations for the operations agent.]
Capacity metrics app operations

Operation Rates

| Azure metric name | Fabric operation name | CU rate |
| --- | --- | --- |
| Operations Agents Compute Capacity Usage CU | Operations agent compute | 0.46 CUs per vCore hour |
| Copilot and AI Capacity Usage CU | Copilot in Fabric | 100 CUs per 1,000 input tokens; 400 CUs per 1,000 output tokens |
| Operations Agents Autonomous Reasoning Capacity Usage CU | Operations agent autonomous reasoning | 400 CUs per 1,000 input tokens; 1,600 CUs per 1,000 output tokens |
| n/a | OneLake Storage | Per GB per hour (as per OneLake consumption) |

Operations agent metrics and rates
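To make the rates concrete, here is a minimal sketch that converts usage into CUs using the preview rates from the table above. The rates are taken from the table; the usage figures in the example (vCore hours and token counts) are invented for illustration:

```python
# Convert operations agent usage into Capacity Units (CUs) using the
# preview rates from the table above. Usage figures are hypothetical.
RATES = {
    "compute_cu_per_vcore_hour": 0.46,
    "copilot_cu_per_1k_input": 100,
    "copilot_cu_per_1k_output": 400,
    "reasoning_cu_per_1k_input": 400,
    "reasoning_cu_per_1k_output": 1600,
}

def compute_cus(vcore_hours: float) -> float:
    """Operations agent compute: background query and rule evaluation."""
    return vcore_hours * RATES["compute_cu_per_vcore_hour"]

def token_cus(input_tokens: int, output_tokens: int, kind: str) -> float:
    """Token-based meters: kind is 'copilot' or 'reasoning'."""
    return (input_tokens / 1000 * RATES[f"{kind}_cu_per_1k_input"]
            + output_tokens / 1000 * RATES[f"{kind}_cu_per_1k_output"])

# Hypothetical day: 24 vCore-hours of background monitoring, one Copilot
# configuration session (2,000 input / 500 output tokens), and one triggered
# reasoning episode (3,000 input / 1,000 output tokens).
total = (compute_cus(24)
         + token_cus(2_000, 500, "copilot")
         + token_cus(3_000, 1_000, "reasoning"))
print(f"Total: {total:.2f} CUs")
# → Total: 3211.04 CUs
```

Note how the token-based meters dominate in this example: background compute costs only 0.46 CUs per vCore hour, while a single reasoning episode can consume thousands of CUs, depending on token counts.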

You’ll find usage reported in the Capacity Metrics App in December. Billing for Copilot in Fabric will begin in December, and billing for the remaining operations will start no earlier than January 8, 2026.

To learn more, refer to the documentation on Operations agent billing.
