A wave of new Dataflow Gen2 capabilities at FabCon Atlanta 2026
If you haven’t already, check out Arun Ulag’s hero blog “FabCon and SQLCon 2026: Unifying databases and Fabric on a single, complete platform” for a complete look at all of our FabCon and SQLCon announcements across both Fabric and our database offerings.
At FabCon Atlanta 2026, we’re sharing a wave of Dataflow Gen2 announcements designed to help every data team move faster—from self-service analysts to centralized data engineering teams. Dataflow Gen2 brings low-code data transformation to Microsoft Fabric with the power of Power Query, enabling you to ingest, shape, and land data into Fabric destinations with built-in scheduling, reuse, and governance through the Fabric workspace experience.
Over the last year, we’ve seen strong growth in Dataflow Gen2 usage and an incredible amount of feedback from customers, building everything from departmental reporting pipelines to enterprise-scale lakehouse ingestion. That momentum is translating into rapid innovation: better performance and authoring ergonomics, more powerful parameterization and portability, richer destinations, and new ways to troubleshoot and automate at scale.
The following is a walkthrough of everything we’re announcing for Dataflow Gen2 at FabCon Atlanta—split into Generally Available features you can rely on in production, and Preview features you can start trying now and help us shape through your feedback.
Generally Available: Production-ready enhancements for Dataflow Gen2
These capabilities are built for production workloads—improving transformation performance, portability across environments, destination coverage, and day-to-day authoring efficiency.
Modern Query Evaluator: Faster, more reliable refresh performance
The Modern Query Evaluator delivers improved performance and reliability for Power Query transformations in Dataflow Gen2, with better optimization across common shaping patterns. This helps data teams reduce end-to-end refresh times and scale transformation workloads more confidently as data volumes grow.
- Faster refreshes for multi-step shaping pipelines (joins, group-bys, type conversions, and complex expressions).
- More predictable execution when scaling a single dataflow to larger datasets or higher-frequency schedules.
Learn more: Modern Query Evaluator documentation
Preview Only Steps: Iterate efficiently without impacting refresh
Preview Only Steps let you add transformation steps that run for data preview and authoring validation but are excluded from final execution during refresh. This helps you iterate faster while keeping production refresh logic clean and efficient.
- Speed up authoring by sampling, filtering, or limiting rows at design time without changing the production output.
- Safer experimentation: test new steps while keeping exploratory logic out of scheduled refreshes, as in the sketch below.
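As a simple illustration, a row-capping step is a natural candidate for this option. A minimal sketch, assuming a preceding step named Source:

```powerquery-m
// A typical design-time-only step: cap the preview at 1,000 rows while
// authoring. With "Enable only in previews" turned on, the step runs
// during data preview but is skipped when the dataflow refreshes.
SampledRows = Table.FirstN(Source, 1000)
```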

Figure: The “Enable only in previews” option within the applied steps section.
Learn more: Preview Only Steps documentation
Fabric Variable Libraries: Parameterize once, promote everywhere
With Fabric Variable Libraries support in Dataflow Gen2, you can parameterize key values (such as environment-specific endpoints, folder paths, or destination names) and resolve them at runtime based on the workspace context. This makes it easier to promote solutions across dev/test/prod with fewer manual edits.
- Portability: use the same dataflow definition across multiple workspaces with different settings.
- Reduced configuration drift: centralize values that would otherwise be hard-coded in queries or destinations (see the sketch below).
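To make this concrete, here’s a minimal M sketch of a query whose endpoint comes from a workspace variable. The Variable.ValueOrDefault call and the variable path are assumptions for illustration; in practice, the Select a workspace variable dialog shown below generates the exact reference for you.

```powerquery-m
// Illustrative sketch: resolve an environment-specific endpoint from a
// variable library at refresh time. Function name and variable path are
// assumptions; the authoring dialog generates the real reference.
let
    Endpoint = Variable.ValueOrDefault(
        "/EnvironmentSettings/DataLakeEndpoint",       // hypothetical variable path
        "https://contosodev.dfs.core.windows.net/raw"  // fallback for local authoring
    ),
    Source = AzureStorage.DataLake(Endpoint)
in
    Source
```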

Figure: The Filter rows dialog within Dataflow Gen2 showing the input widget and the option to Select a workspace variable.

Figure: The select variable dialog invoked from within a Dataflow Gen2.
Learn more: Fabric Workspace Variables documentation (Generally Available) and how to use variables with Dataflow Gen2
Relative references: move/copy dataflows with fewer broken connections
Relative references make it easier for Dataflow Gen2 to refer to Fabric items (such as Lakehouses, Warehouses, or SQL databases) in a way that stays valid when you move or copy solutions between workspaces. Instead of hard-coding fully qualified identifiers, you can reference items relative to the current workspace context.
- Promote solutions across environments with fewer broken connections after copy/deploy.
- Build reusable templates for standard ingestion and transformation patterns.
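For contrast, here’s the hard-coded pattern the connector generates today (illustrative GUIDs). A relative reference replaces the pinned workspace ID with the (Current Workspace) node, resolved from context at runtime, so the same query stays valid after a move or copy; the exact M emitted is generated by the connector UI.

```powerquery-m
// Absolute reference: breaks when the dataflow is copied or deployed to
// another workspace, because the workspace GUID is pinned in the query.
let
    Source    = Lakehouse.Contents(null),
    Workspace = Source{[workspaceId = "00000000-0000-0000-0000-000000000000"]}[Data],
    Target    = Workspace{[lakehouseId = "11111111-1111-1111-1111-111111111111"]}[Data]
in
    Target
```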

Figure: (Current Workspace) node in the Lakehouse connector.
Learn more: Relative references to Fabric items documentation (Generally Available)
Stay informed with email alerts for failed scheduled refreshes
Dataflow Gen2 now supports email notifications when a scheduled refresh fails, so the right people can take action quickly—without needing to constantly check the workspace or monitoring views. This reduces time-to-detect and helps keep downstream reports and pipelines running on fresh data.
- Faster response: get alerted as soon as a scheduled refresh fails, so you can fix issues before they impact business users.
- Less manual monitoring: reduce the need for teams to repeatedly check refresh history to confirm data is up to date.
- More reliable operations: keep dependent data sets, reports, and downstream pipelines healthier by catching failures earlier.
Schedule runs with parameters: Use one dataflow for multiple scenarios
You can now pass parameter values when triggering scheduled runs for Dataflow Gen2. This helps you keep a single, governed dataflow definition while varying inputs at runtime—for example, to refresh different regions, business units, or time windows on different schedules.
- Reduce duplication: avoid cloning dataflows just to hard-code different filter values or source paths.
- Standardize operations: apply consistent transformation logic while tailoring each scheduled run to a specific workload.
- Improve portability: pair with variables and relative references to promote the same solution across environments with fewer edits.
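As a sketch of the authoring side (source and parameter names are illustrative), the dataflow below declares a standard Power Query parameter and filters on it; each scheduled run can then bind a different Region value at trigger time.

```powerquery-m
// Illustrative section-document fragment: one governed dataflow whose
// output is filtered by a "Region" parameter that runs can override.
shared Region = "West" meta [IsParameterQuery = true, Type = "Text", IsParameterQueryRequired = true];

shared RegionalSales = let
    Source    = Sql.Database("contoso.database.windows.net", "SalesDb"),  // hypothetical source
    Sales     = Source{[Schema = "dbo", Item = "Sales"]}[Data],
    ForRegion = Table.SelectRows(Sales, each [Region] = Region)
in
    ForRegion;
```

One dataflow definition can then serve a West schedule, an East schedule, and so on, each passing its own Region value.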
ADLS Gen2 destination (Generally Available): Land curated outputs straight into your lake
Dataflow Gen2 now supports Azure Data Lake Storage Gen2 (ADLS Gen2) as an available data destination, so you can land curated outputs directly into your data lake in open formats and folder structures aligned to your organization’s standards.
- Lake-first ingestion: for organizations standardizing on ADLS as the source of truth.
- Downstream reuse: across Fabric (Spark/SQL) and external systems that read from ADLS.
Learn more: Data destinations for Dataflow Gen2 – ADLS Gen2 (Generally Available)
Write file outputs directly to your Lakehouse with Lakehouse Files destination (Generally Available)
With Lakehouse Files as an available destination, you can write Dataflow Gen2 outputs directly into the Files area of a Fabric lakehouse. This is useful when you want file-based outputs (for example, to interoperate with existing folder conventions, or to feed downstream processes that expect files rather than tables).
- Land transformed extracts into lakehouse storage for Spark notebooks, pipelines, or external consumers.
- Enable hybrid patterns where some outputs are tables and others are files, within the same Fabric workspace.
Learn more: Data destinations for Dataflow Gen2 – Lakehouse Files (Generally Available)
Schema-aware destinations: Publish into the right schema, every time
Data destinations now support writing into specific schemas (where applicable), including Fabric SQL databases, Lakehouses, and Warehouses. This gives you more control over how tables are organized—aligning Dataflow Gen2 outputs to enterprise naming conventions, multi-team sharing models, and security/ownership boundaries.
- Organize by domain (for example, finance, sales, HR): use schemas rather than separate destinations.
- Smoother collaboration: when multiple teams publish tables into a shared warehouse or SQL database.

Figure: Connect to data destination dialog showing a Warehouse connection with full-hierarchy navigation.
Learn more: Data destinations schema support documentation
AI-Powered Transforms (Prompt): Turn intent into shaping steps
AI-Powered Transforms let you use natural language prompts to generate transformation logic, accelerating common shaping tasks and helping you discover the right Power Query patterns faster. Whether you’re cleaning columns, extracting values, or creating new derived fields, prompting can get you to a working starting point in seconds.
- Faster onboarding for new users learning Power Query expressions and best practices.
- Quicker iteration for experienced users who want to prototype transformations and then fine-tune the generated M.
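For example, a prompt like “Split FullName into first and last name, then proper-case both” might produce M along these lines. Illustrative output only: actual generated steps will vary, and Source stands for the preceding step.

```powerquery-m
// Illustrative result of a natural-language prompt; treat the generated
// steps as a starting point to review and fine-tune.
let
    SplitNames  = Table.SplitColumn(Source, "FullName",
                      Splitter.SplitTextByDelimiter(" "), {"FirstName", "LastName"}),
    ProperCased = Table.TransformColumns(SplitNames,
                      {{"FirstName", Text.Proper, type text},
                       {"LastName",  Text.Proper, type text}})
in
    ProperCased
```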

Figure: AI Prompt dialog in Dataflow Gen2.
Learn more: Fabric AI Prompt in Dataflow Gen2 (Preview) – Microsoft Fabric | Microsoft Learn
Export Query Results: Validate and share shaped data from Power Query
You can now export query results directly from the Power Query authoring experience in Power BI Desktop. This makes it easier to validate transformations, share samples with teammates, and debug data issues without leaving your authoring flow.
- Accelerate troubleshooting: export a shaped dataset and compare results across steps.
- Improve collaboration: share a snapshot of outputs with business users or support teams.
Learn more: Export query results in Power Query documentation
Faster publishing: Refreshed UX and parallel validations
We’ve improved the publish experience for Dataflow Gen2 with a refreshed UX and performance enhancements, including parallelized query validations. This reduces wait time during publication, especially for dataflows with many queries and destinations.
- Less time waiting: publish multi-query dataflows faster.
- Clearer guidance: see validation results sooner and address issues with less back-and-forth.
Learn more: Publishing Dataflow Gen2 documentation
Save As upgrades: Carry refresh policies and automate with new API
Save As continues to get better for teams upgrading from Dataflow Gen1 to Gen2. With this release, Save As for Dataflow Gen1 now supports Scheduled Refresh Policies, and we’re introducing a new API to enable automation and bulk migration scenarios—making it easier to migrate at scale with minimal to no edits to your dataflows.
- Streamline refresh configuration: copied dataflows inherit refresh policies from the originating dataflow.
- Automate at scale: use the new Save As API to run bulk migrations of Gen1 dataflows programmatically, ideal for multi-tenant, multi-workspace rollouts.

Figure: Dialogs for the refresh and scheduling mechanism when using the Save as experience for Dataflow Gen2.
Learn more: Save As Dataflow Gen2 documentation and Save As API reference
Power Automate Dataflows connector: Orchestrate Dataflow Gen2 CI/CD items
The Dataflows connector in Power Automate now supports Dataflow Gen2 CI/CD items, enabling you to automate and orchestrate common deployment and release workflows around Dataflow Gen2 as part of broader integration and data operations processes.
- End-to-end orchestration: trigger Dataflow Gen2 refreshes from Power Automate flows, then chain downstream actions like validation and notifications.
- Repeatable releases across environments: standardize dev/test/prod promotions with consistent, flow-driven execution to reduce manual steps and missed checks.
- Operate at scale: coordinate rollouts for many dataflows (including batched or scheduled refreshes) and add retries, branching, and exception handling in one place.
- Audit-friendly automation: capture run history, outcomes, and deployment signals in your flow so teams can troubleshoot and monitor refreshes more easily.
Available in Preview: Experience the latest updates and share your feedback
These features are ready for you to try, and we’re ready for your feedback; we’ll continue to iterate as we move toward the next milestone.
SharePoint Site Picker (Modern Get Data): Find the right site faster
The new SharePoint Site Picker experience in Modern Get Data helps you discover and select SharePoint sites more easily, reducing the friction of connecting to the right content, especially in large tenants with many sites.
- Faster onboarding for business users who don’t know the exact site URL.
- Fewer connection errors, thanks to a consistent, guided selection flow.

Figure: SharePoint Site Picker dropdown.
Learn more about the SharePoint folder connector, SharePoint list connector, and SharePoint Online list connector.
SharePoint Site Picker (Data Destinations): Smoother landing to SharePoint
We’re introducing the same SharePoint Site Picker improvements to the Data Destinations experience, making it simpler to select SharePoint-backed filesystem destinations and land your transformed outputs exactly where you need them.
- Streamlined publishing: Publish to SharePoint destinations with less manual URL handling.
- Improved consistency: the same picker experience for both source and destination selection.
Recent Tables (Modern Get Data): Quickly access your most recently used tables
Recent Tables in Modern Get Data help you quickly reconnect to tables you’ve used before, cutting down the time spent browsing through sources when you’re iterating on a pipeline or building multiple similar dataflows.
- Faster iteration when refining transformations across multiple sessions.
- Improved discoverability for commonly used tables in shared or complex sources.
Advanced Edit for destinations: Unlock M-level control and parameters
The new Advanced Edit experience for Data Destinations enables editing of the underlying M logic that configures destination settings. This unlocks deeper customization, including the ability to leverage parameters to drive destination behavior—an important step for teams standardizing deployments across environments.
- Parameter-driven destinations: switch target schema/table, file paths, or naming conventions without rewriting queries.
- Unblock advanced scenarios: leverage destination settings not yet available in the simplified UI.
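As a purely hypothetical sketch of the pattern (the destination M that Advanced Edit exposes is generated for you, and its actual shape will differ), the idea is to feed a parameter into destination settings instead of a literal:

```powerquery-m
// Hypothetical shape only: a parameter such as TargetSchema drives the
// destination instead of a hard-coded value, so one definition can write
// to "finance" in prod and "finance_dev" in dev.
let
    DestinationSettings = [
        Schema = TargetSchema,     // assumed dataflow parameter
        Table  = "curated_sales"
    ]
in
    DestinationSettings
```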

Figure: The new Advanced Edit experience for data destinations.
Learn more: Data Destinations Advanced Edit documentation
New destination: Publish directly to Snowflake (Preview)
Dataflow Gen2 now supports Snowflake as a data destination (Preview), enabling you to publish transformed outputs directly to Snowflake databases as part of your Fabric-based low-code transformation workflows.
- Hybrid data estate: keep transformations consistent while landing curated data into Snowflake.
- Departmental enablement: empower analysts to publish standardized outputs to governed Snowflake targets.
Learn more: Snowflake data destination for Dataflow Gen2 documentation
New destination: Write outputs as Excel files to filesystem destinations (Preview)
We’re introducing the ability to write outputs as Excel files (Preview) for supported filesystem destinations such as SharePoint and ADLS Gen2. This makes it easier to serve business processes that still rely on Excel while keeping transformation logic centralized and governed in Fabric.
- Operational reporting: publish refreshed Excel extracts for stakeholders or legacy workflows.
- Consistent formatting: standardize the production of Excel deliverables from a single dataflow definition.
Learn more: Excel file data destination documentation
Diagnostics download: Grab logs for cloud and VNET gateway dataflows
Dataflow Diagnostics Download (Preview) gives you a straightforward way to collect diagnostics for both cloud-based and VNET gateway-based dataflows. This helps you and support teams pinpoint issues faster when investigating refresh failures or performance bottlenecks.
- Faster root-cause analysis: download logs and artifacts in a single step.
- Better supportability: enables analysis of complex networking scenarios via VNET data gateways.

Figure: Recent runs dialog showing the new button at the bottom left of the dialog to Download detailed logs.
Learn more: Dataflow Gen2 diagnostics documentation
Publish-time destination checks: Catch issues before the first refresh
Publishing a dataflow now includes data destination validations (Preview), helping you catch common issues earlier—such as missing permissions, invalid destination settings, or naming conflicts—before the first scheduled refresh runs.
- Fail faster: get clear errors at publish time rather than at refresh time.
- Reduce operational load: higher productivity for administrators supporting many published dataflows.
Execute Query Streaming API (Preview): Run queries on demand for streaming scenarios
The Execute Query API (Preview) enables on-demand execution of Power Query logic in Dataflow Gen2 scenarios—without requiring a full scheduled refresh cycle. It’s designed for cases where you need to trigger transformations programmatically (or in response to events) and retrieve results quickly for downstream processing.
- Event-driven pipelines: run a transformation when new data arrives and push outputs to a destination or consumer immediately.
- Streaming and near-real-time scenarios: execute queries more frequently than a typical scheduled refresh to support operational dashboards and alerting workflows.
- Automation at scale: integrate with orchestration tools and scripts to run specific queries as part of broader ETL/ELT jobs.
- Faster debugging: re-run targeted queries to validate fixes without republishing the entire dataflow.
Learn more: Execute Query API (Streaming) documentation
Data Factory MCP Server (Preview)
The Data Factory MCP Server exposes Dataflow Gen2 and pipeline capabilities—dataflow creation, M (Power Query) authoring, connection management, query execution, and refresh orchestration—as tools that AI assistants can call directly from VS Code, Claude, ChatGPT, Gemini, and more, or from the command line.
Why it matters
- AI assistants create, test, and deploy dataflows through natural language—no browser tabs or manual configuration required.
- Iterative M development via execute_query lets the AI test transforms against live data before committing to a full refresh.
- MCP Apps provide guided UI forms (connection setup, gateway selection) inside the chat panel.
- Open source (GitHub), ships as a NuGet package, runs locally—credentials never leave your machine.
Learn more: Data Factory GitHub repo
Continue exploring and share your feedback
We look forward to seeing what you build with these Dataflow Gen2 improvements—whether you’re standardizing enterprise ingestion patterns, enabling self-service transformation, or scaling governed data products in Fabric. With capabilities like parameter support for scheduled runs, teams can reuse a single dataflow across many recurring scenarios—reducing maintenance overhead while keeping pipelines consistent and governed.
If you want to dive deeper, explore the documentation linked throughout this post.
Have an idea or want to vote on what we build next? Please share feedback in the Fabric Ideas Forum (Data Factory / Dataflows area) and include details about your scenario, expected behavior, and any constraints.
Thank you to all our customers and community members who have been trying previews, filing issues, and sending feedback—your input directly shapes the roadmap and helps us make Dataflow Gen2 better for everyone.