
Fabric Copilot Pricing: An End-to-End example 

Last month, we announced that Copilot in Fabric would begin billing on March 1, 2024, as part of your existing Power BI Premium or Fabric capacity. Since then, we have received many questions about how to estimate the cost of using Fabric Copilot.

In this post, we will show how Fabric Copilot usage is calculated. We’ll walk through an example of importing data, analyzing it, and creating a Power BI report with Fabric Copilot, and we’ll show the usage consumed by each step along with the total cost.

How Fabric Copilot Consumption Works 

Copilot in the Power BI, Data Factory, and Data Science/Data Engineering experiences lets you ask questions and receive answers that are contextualized on your data. Using Fabric Copilot consumes capacity units from your existing Fabric capacity; there is no need to buy separate capacity.

The amount of capacity used by Fabric Copilot depends on the number of inputs and outputs you send and receive. Fabric Copilot processes your inputs and outputs as tokens. A token is a unit of text that consists of one or more characters, such as a word, a punctuation mark, or a whitespace. For example, the sentence “Hello, world!” has four tokens: “Hello”, “,”, “ world”, and “!”.

Input tokens include the text you type into the Copilot chat pane, as well as metaprompts like file paths in your Lakehouse and metadata of your tables. Fabric Copilot doesn’t process content in your tables unless it’s in your prompts. Output tokens are produced by Copilot, such as textual responses, code, or a Power BI report. 

Fabric Copilot consumes 400 CU seconds for every 1,000 input tokens and 1,200 CU seconds for every 1,000 output tokens. The total consumption of a Copilot interaction is calculated using this formula:

Fabric Copilot consumption (CU seconds) = (input token count * 400 + output token count * 1,200) / 1,000
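If you want to estimate consumption yourself, the formula translates directly into a small helper. Here is a minimal sketch in Python (the function name and the example token counts are ours; the rates are the published 400 and 1,200 CU seconds per 1,000 tokens):

```python
def copilot_cu_seconds(input_tokens: int, output_tokens: int) -> float:
    """Estimate Fabric Copilot consumption in CU seconds.

    Rates: 400 CU seconds per 1,000 input tokens,
           1,200 CU seconds per 1,000 output tokens.
    """
    return (input_tokens * 400 + output_tokens * 1_200) / 1_000

# Example: a prompt that used 1,000 input tokens and 500 output tokens
print(copilot_cu_seconds(1_000, 500))  # 1000.0 CU seconds
```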

Fabric Copilot Consumption Example 

To help you understand Fabric Copilot pricing, let’s go through an example of using Copilot for several tasks. First, we’ll ingest customer data with Copilot for Data Factory. Then, we’ll explore the data with Copilot for Data Science/Data Engineering. Finally, we’ll create a customer segmentation report with Copilot for Power BI. We’ll share the number of input and output tokens Copilot processes at each step and calculate the total cost. Because we continuously improve the prompt engineering, these token counts may change as the product is updated.

Data Ingestion  

Let’s begin by using Dataflow Gen2 to ingest customer data. We will do this in the three steps below. You can refer to this documentation for more detailed instructions.

  1. Create a new Dataflow Gen2 and get data from the OData source https://services.odata.org/V4/Northwind/Northwind.svc/. 
  2. Bring the Customers table into the Power Query Editor. 
  3. Enter the prompt “Only keep European customers” in the Copilot chat pane. Your input and the response card will appear in the Copilot pane. 

Once Fabric Copilot receives your prompt, it retrieves relevant data, such as the dataset, to improve the specificity of the prompt. This ensures that you receive actionable information that is relevant to your specific task. Data retrieval is limited to data that is accessible to you based on your permissions. 

A single user input prompt can result in multiple requests being sent from Fabric Copilot to Azure OpenAI. In this example, my prompt generated 3 requests with 3,160 input tokens and 2,149 output tokens in total.

Request   Input Tokens   Output Tokens
1         1,132          572
2         1,397          1,527
3         631            50
Total     3,160          2,149
Table 1: Input and output token counts for data ingestion.

When I hover over the name of my dataflow in the capacity metrics app, I can see the consumption under the operation named “Copilot in Fabric”. Once the results appear, I can see my Copilot usage for data ingestion, which is calculated as (3,160 * 400 + 2,149 * 1,200) / 1,000 = 3,842.8 CU seconds. If you haven’t installed the capacity metrics app, you can follow these instructions to install it.

Figure 1: Copilot usage for data ingestion.

Data Exploration 

Now, let’s work with the customer data in a Fabric notebook. First, we’ll add the Customers data to our Lakehouse as a new table by following these instructions. Then, we can create a new notebook and run Copilot with the prompt “Load Customers from my Lakehouse into DataFrames”. This generates 7 requests to Azure OpenAI, with 4,967 input tokens and 227 output tokens.

Request   Input Tokens   Output Tokens
1         134            –
2         273            –
3         143            –
4         15             –
5         1,409          70
6         937            88
7         2,056          69
Total     4,967          227
Table 2: Input and output tokens for loading data in the notebook.
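For context, the answer Copilot returns for a prompt like this is typically a short snippet of notebook code. A hypothetical sketch of what it might look like (assuming the Lakehouse containing the Customers table is attached to the notebook, where the `spark` session is already defined):

```python
# Hypothetical example of the kind of code Copilot could generate for this prompt.
# In a Fabric notebook the `spark` session is available by default.
df_customers = spark.read.table("Customers")  # load the Lakehouse table into a Spark DataFrame
display(df_customers)                         # preview the data in the notebook
```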

Next, we can run Copilot with another prompt, “Analyze Customers and Suggest ways to visualize the data”. This generates 4 Azure OpenAI requests with 6,608 input tokens and 486 output tokens.

Request   Input Tokens   Output Tokens
1         545            26
2         2,073          73
3         954            50
4         3,036          337
Total     6,608          486
Table 3: Input and output tokens for data visualization suggestions.

In the capacity metrics app, we can now see that the total consumption for this notebook is 5,485.6 CU seconds.
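That total matches the formula applied to the two notebook prompts, as a quick check using the token counts from Tables 2 and 3:

```python
# Consumption for the two notebook prompts, per the formula above
load_prompt    = (4_967 * 400 + 227 * 1_200) / 1_000  # 2259.2 CU seconds
analyze_prompt = (6_608 * 400 + 486 * 1_200) / 1_000  # 3226.4 CU seconds
print(load_prompt + analyze_prompt)                    # 5485.6 CU seconds
```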

Our statistics show that, on average, a notebook Copilot request involves around 600 input and output tokens, and a single DS/DE Copilot request consumes around 1,000 CU seconds.

Power BI Report Generation 

Lastly, we will use Copilot to create a Power BI report. We can create a semantic model from the Customers data in the Lakehouse by following these instructions, and then create a new, blank Power BI report.

We can run the Power BI Copilot with the prompt “Create a page that shows the City Distribution of Customers”. This generates 4 LLM requests, with 3,759 input tokens and 839 output tokens. Before the Power BI report is saved, the usage is reported under “Power BI Session Service” in the capacity metrics app.

We’ll save the report as “Customer Segmentation” and run Power BI Copilot with the prompt “Summarize visuals on this page”. This generates 1,410 input tokens and 220 output tokens. I can then see 828 CU seconds under my report name in the capacity metrics app.
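Both Power BI prompts line up with the same formula, as a quick check with the token counts above:

```python
# Consumption for the two Power BI prompts, per the formula above
create_page = (3_759 * 400 + 839 * 1_200) / 1_000  # 2510.4 CU seconds
summarize   = (1_410 * 400 + 220 * 1_200) / 1_000  # 828.0 CU seconds
print(create_page, summarize)
```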

Our statistics show that the average consumption for each Power BI Copilot request is around 1,500 CU seconds. Power BI caches the input tokens for 2 days, so when the same prompt is re-run on the same data, no additional usage is recorded, the response is much faster, and Copilot only charges me once!

Total Consumption 

Let’s look at the cost of each step and the total cost for this example. 

Artifact          Prompt                                                           Input Tokens   Output Tokens   Consumption (CU seconds)
Dataflow Gen2     “Only keep European customers”                                   3,160          2,149           3,842.8
Notebook          “Load Customers from my Lakehouse into DataFrames”               4,967          227             2,259.2
Notebook          “Analyze Customers and Suggest ways to visualize the data”       6,608          486             3,226.4
Power BI Report   “Create a page that shows the City Distribution of Customers”    3,759          839             2,510.4
Power BI Report   “Summarize visuals on this page”                                 1,410          220             828
Total                                                                              19,904         3,921           12,666.8
Table 4: Consumption for each step and the total.

As you can see, the total consumption in this example is 12,666.8 CU seconds, calculated from 19,904 input tokens and 3,921 output tokens using the formula above.

According to the Fabric pricing table, if your capacity is in Central US, 1 CU costs $0.18 per hour for Pay-as-you-go and $0.107 per hour for reservation. This example therefore costs about $0.63 with Pay-as-you-go and about $0.38 with a reservation. You can use this example as a reference to estimate your own costs for using Fabric Copilot based on your own scenarios and usage patterns.
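To see where those dollar figures come from, here is the arithmetic spelled out (a sketch assuming 1 CU hour equals 3,600 CU seconds and the Central US rates above):

```python
# Convert the example's total consumption into an approximate dollar cost
total_cu_seconds = 12_666.8
cu_hours = total_cu_seconds / 3_600   # 1 CU hour = 3,600 CU seconds

print(round(cu_hours * 0.18, 2))      # Pay-as-you-go: ~$0.63
print(round(cu_hours * 0.107, 2))     # Reservation:   ~$0.38
```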

Conclusion 

We hope that this blog post has helped you better understand how the consumption rate for Fabric Copilot works and how to estimate your cost for using the Copilot feature. We are committed to providing you with the best possible experience and value with Fabric Copilot. If you have any questions or feedback, please don’t hesitate to let us know. We look forward to hearing from you about Copilot in Microsoft Fabric!
