Evaluate Power Query Programmatically in Microsoft Fabric (Preview)
Power Query has long been at the center of data preparation across Microsoft products—from Excel and Power BI to Dataflows and Fabric. We’re introducing a major evolution: the ability to execute Power Query programmatically through a public API.
This capability turns Power Query into a programmable data transformation engine that can be invoked on demand through a REST API from notebooks, pipelines, and applications. Whether you’re orchestrating data pipelines, building custom data apps, or integrating Power Query into larger workflows, this API unlocks new flexibility and automation.
Why this matters
Power Query provides both ease of use and expressive transformation capabilities through the M language. Historically, query evaluation has been tied to dataflow refreshes or interactive tools. Programmatic query evaluation changes this model completely.
It enables:
- Automation – Trigger M transformations from pipelines or applications.
- Integration – Combine Power Query logic with Spark, SQL, and pipelines.
- Reuse – Standardize M scripts across systems and execution surfaces.
- Scale – Execute transformations on Fabric’s distributed compute engine.
- Connectivity – Access 100+ data sources through the connectors supported by Power Query Online.
- Hybrid access – Reach on-premises and private network data sources via gateway, enabling programmatic query evaluation against data that doesn’t live in the cloud.
Power Query is now a first-class compute engine within Fabric.
Execution surfaces
While Spark notebooks are a canonical example, programmatic execution is available across multiple Fabric surfaces:
Spark notebooks
Invoke Power Query and receive results as Spark or Pandas DataFrames.
REST API (Execute Query)
Trigger transformations from any HTTP client using a public, documented API.
Fabric pipelines & Notebook Jobs
Integrate Power Query steps into orchestrated workflows.
Gateway & Live Query
Evaluate Power Query scripts against on-premises or private network sources.
Quick Start Guide: Execute Query API (REST)
The Execute Query API evaluates a Power Query M script and returns results as an Apache Arrow stream.
Prerequisites
- A Dataflow Gen2 (CI/CD) artifact in your Fabric workspace. The Execute Query API operates against a dataflow, which provides the execution context.
- Connections configured in the dataflow. Query evaluations run under the scope of the connections defined in the dataflow—this determines which data sources are accessible and what credentials are used.
1. Acquire an access token
Azure CLI
az account get-access-token \
  --resource https://analysis.windows.net/powerbi/api/ \
  --query accessToken \
  -o tsv
Fabric notebook
token = notebookutils.credentials.getToken(
    "https://analysis.windows.net/powerbi/api/"
)
2. Construct the endpoint
POST https://api.fabric.microsoft.com/v1/workspaces/{workspaceId}/dataflows/{dataflowId}/executeQuery
Headers:
Authorization: Bearer <token>
Content-Type: application/json
3. Provide an M script or reference a query
{
  "queryName": "MyQuery",
  "customMashupDocument": "<M script here>"
}
Notes:
workspaceId and dataflowId are specified in the URL, not the request body. customMashupDocument is optional. If omitted, the API executes the query named by queryName from the dataflow's existing queries, so the query must already exist in the dataflow.
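The two request shapes can be captured in a small helper (a hypothetical convenience function of ours, not part of the API):

```python
from typing import Optional

def build_request_body(query_name: str, mashup: Optional[str] = None) -> dict:
    """Build the Execute Query request body.

    Omit `mashup` to run an existing dataflow query by name; pass an M
    document to evaluate a custom script instead.
    """
    body = {"queryName": query_name}
    if mashup is not None:
        body["customMashupDocument"] = mashup
    return body
```

For example, build_request_body("MyQuery") yields only the queryName field, which tells the service to run the query already defined in the dataflow.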
4. Parse the Arrow result stream
import pyarrow as pa

with pa.ipc.open_stream(response.raw) as reader:
    df = reader.read_pandas()
Full Example: Calling Execute Query from a Fabric notebook
import requests
import pyarrow as pa
workspace_id = "00000000-0000-0000-0000-000000000000"
artifact_id = "11111111-1111-1111-1111-111111111111"
fabric_token = notebookutils.credentials.getToken(
    "https://analysis.windows.net/powerbi/api/"
)
headers = {
    "Authorization": f"Bearer {fabric_token}",
    "Content-Type": "application/json",
}
url = (
    f"https://api.fabric.microsoft.com/v1/"
    f"workspaces/{workspace_id}/dataflows/{artifact_id}/executeQuery"
)
request_body = {
    "queryName": "Monthly2020Trends",
    "customMashupDocument": """
section Section1;
shared Monthly2020Trends =
    let
        Source = Lakehouse.Contents(null),
        Navigation = Source{[workspaceId = "00000000-0000-0000-0000-000000000000"]}[Data],
        LakehouseData = Navigation{[lakehouseId = "33333333-3333-3333-3333-333333333333"]}[Data],
        TaxiData = LakehouseData{[Id = "green_tripdata_2020", ItemKind = "Table"]}[Data],
        AddMonth =
            Table.AddColumn(
                TaxiData,
                "Month",
                each Date.Month(DateTime.Date([lpep_pickup_datetime])),
                Int64.Type
            ),
        MonthlyStats =
            Table.Group(
                AddMonth,
                {"Month"},
                {
                    {"TripCount", each Table.RowCount(_), Int64.Type},
                    {"AvgFare", each List.Average([fare_amount]), type number}
                }
            ),
        Sorted = Table.Sort(MonthlyStats, {{"Month", Order.Ascending}})
    in
        Sorted;
""",
}
response = requests.post(url, headers=headers, json=request_body, stream=True)
print(response.status_code)
print(response.headers)
if response.status_code != 200:
    print(response.content)
else:
    with pa.ipc.open_stream(response.raw) as reader:
        data_frame = reader.read_pandas()
    display(data_frame)
Key capabilities
- Reuse existing Power Query transformations across systems.
- Dynamically evaluate M scripts by passing custom mashup documents at runtime.
- Integrate Power Query with Spark, Python, SQL, and orchestration flows.
- Execute securely through Fabric roles, permissions, and gateway policies.
- Retrieve results as fast, columnar Arrow streams.
Limitations and considerations
- 90-second timeout – Evaluations must complete within 90 seconds.
- No actions supported – The API executes read-only queries; actions (e.g., writing data) are not supported.
- No native queries in custom mashup – Native database queries are not permitted when using customMashupDocument. However, if a query defined in the dataflow itself uses native queries, it can be executed successfully.
- Some connectors may not support headless execution.
- Pagination support may evolve.
- Performance depends on folding and M script complexity.
- Gateway-based execution requires proper configuration.
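Given these constraints, it helps to triage failed calls on the client. A rough sketch (the thresholds and messages below are heuristics of ours, not the service's documented error contract):

```python
def classify_failure(status_code: int, elapsed_s: float) -> str:
    """Heuristic triage for a failed Execute Query call."""
    if elapsed_s >= 90:
        return "timeout: evaluation exceeded the 90-second limit"
    if status_code in (401, 403):
        return "auth: check the token scope and your workspace permissions"
    if status_code == 404:
        return "not-found: verify workspaceId and dataflowId"
    return f"error: HTTP {status_code}"
```

For long-running transformations, the most effective fix is usually improving query folding or splitting the work into smaller queries rather than retrying.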
Pricing
Execute Query API usage appears in the capacity metrics app as the operation "Dataflows Gen2 Run Query API", billed on the same meter as Dataflow Gen2 refreshes. Consumption is based on the duration of the query.
For detailed pricing information, see Dataflow Gen2 pricing and billing.
Best practices
- Store M scripts in Git alongside pipeline or notebook code.
- Test transformations independently before integrating them.
- Monitor API responses and performance.
- Document your input parameters and expected output schema.
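For the last point, a lightweight check of the returned DataFrame against an expected schema can catch drift early (the column names and dtypes below are illustrative, borrowed from the earlier example):

```python
import pandas as pd

def schema_mismatches(df: pd.DataFrame, expected: dict) -> list:
    """Return human-readable mismatches between df and an expected
    {column: dtype-string} schema; an empty list means it matches."""
    problems = []
    for col, dtype in expected.items():
        if col not in df.columns:
            problems.append(f"missing column: {col}")
        elif str(df[col].dtype) != dtype:
            problems.append(f"{col}: expected {dtype}, got {df[col].dtype}")
    return problems

expected = {"Month": "int64", "TripCount": "int64", "AvgFare": "float64"}
result = pd.DataFrame({"Month": [1], "TripCount": [100], "AvgFare": [12.5]})
issues = schema_mismatches(result, expected)
```

Running this right after parsing the Arrow stream gives a fast failure signal when an upstream M script changes shape.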
Learn more
- Execute Query API Reference — Official REST API documentation
- Microsoft Fabric documentation
- Dataflows overview
- Power Query M language reference
- Fabric Spark compute
- Data Factory overview