Microsoft Fabric Updates Blog

Simplify data transformation and management with Copilot for Data Factory

The process of extracting, transforming, and loading (ETL) data is important for turning raw data into actionable insights. ETL allows data to be collected from various sources, cleansed and formatted into a standard structure, and then loaded into a data warehouse for analysis. This process ensures that data is accurate, consistent, and ready for business intelligence applications. However, managing each step of this process individually can be time-consuming and prone to errors.
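To make the three stages concrete, here is a minimal sketch in Python. The data, field names, and "warehouse" are all hypothetical placeholders, not Contoso's actual schema:

```python
# Minimal ETL sketch: extract raw records, cleanse them into a
# standard structure, then "load" them into a destination store
# (here, simply a list standing in for a data warehouse).

raw_rows = [  # extract: pretend these came from a source system
    {"product": "  Widget ", "price": "19.99", "qty": "3"},
    {"product": "Gadget",    "price": "5.50",  "qty": "10"},
]

def transform(row):
    # cleanse and cast each field into a consistent, typed shape
    price = float(row["price"])
    qty = int(row["qty"])
    return {
        "product": row["product"].strip(),
        "price": price,
        "qty": qty,
        "revenue": round(price * qty, 2),
    }

warehouse = [transform(r) for r in raw_rows]  # load into the "warehouse"
```

Each record arrives inconsistent (stray whitespace, numbers stored as strings) and leaves accurate, consistent, and ready for analysis, which is exactly the guarantee the ETL process provides at scale.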

Contoso Retailers, a fictitious company, wants to help their team become more productive with this process by using Copilot in one of their Data pipelines, so they can get from data ingestion to insights quickly and efficiently.

Using Copilot for Data Factory: from ingestion to insights

Contoso Retailers plans to implement the ETL process by copying data from Azure SQL Database and transforming it to gain insights. They can use Copilot for Data Factory in a Data pipeline to achieve this. By leveraging Copilot, they expect greater efficiency and accuracy in their data workflows, fewer manual errors, and valuable time savings.

1 – Trigger a Dataflow to ingest and transform Azure SQL data with Copilot

Before you can manage your data from Azure SQL Database to Microsoft Fabric, you need to create a Lakehouse to store the data and a Data pipeline to ingest the data. You can learn more about Ingesting data using data pipelines. Alternatively, you can mirror the Azure SQL Database (and other sources), which allows you to continuously replicate your existing data estate directly into Fabric’s OneLake.

Learn more about Mirroring an Azure SQL Database.

  • Inside the Data pipeline, you can use the Copilot panel to ingest and transform data, and to gain a better understanding of the Data pipeline by using the summarize feature.
  • For this scenario, the requirement is to transform the sales data from Contoso Retailers’ Azure SQL database to gain better insights. You can use Copilot to copy the data from Azure SQL Database to a Lakehouse so the team can begin to transform it.

Example Prompt: I want to move my data from Azure SQL to a Lakehouse

  • Here, Copilot does not just generate activities in the pipeline; it also asks you follow-up questions to make sure the activity configuration is set up so the pipeline works. Make sure to provide the source and destination connections by typing a forward slash (/) and searching for the respective connections in the Copilot panel, so the data can be copied from Azure SQL to the Lakehouse.

Setting up source and destination connections using Copilot pane for copy pipeline activity

  • You need to provide the table names of the source and destination tables. For this requirement, use the Sales table as the source, and provide a name for a table in the Lakehouse as the destination, which will prompt Copilot to create a new table if it does not already exist.

Specifying table details for Azure SQL and Lakehouse in the Copilot pane

  • Copilot enables you to use a full prompt if you know all the details of your source and destination connection with their respective tables.

Example Prompt: I want to move my data from Azure SQL with connection [your Azure SQL connection name] and a table [your Azure SQL database table] to a Lakehouse with connection [your Lakehouse connection] with a table [your new table name].

Adding the connection for a source and destination & their respective tables using a prompt in Copilot pane
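Conceptually, what Copilot assembles from that prompt is a Copy activity with a source and a sink. The sketch below is a simplified illustration of that shape as a Python dictionary; the connection and table names are placeholders, and the definition Fabric actually generates contains many more properties:

```python
# Simplified sketch of the Copy activity Copilot assembles
# (placeholder names; the real pipeline definition has more fields).
copy_activity = {
    "name": "Copy Sales to Lakehouse",
    "type": "Copy",
    "source": {
        "type": "AzureSqlSource",
        "connection": "my-azure-sql-connection",  # assumption: your connection name
        "table": "Sales",                         # the source table from the prompt
    },
    "sink": {
        "type": "LakehouseTableSink",
        "connection": "my-lakehouse",             # assumption: your Lakehouse connection
        "table": "sales_raw",                     # created if it does not already exist
    },
}
```

Filling in the connection and table placeholders is precisely what the follow-up questions in the Copilot panel accomplish.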

  • For this requirement, you can use Copilot in the Data pipeline to let the team transform the data by running a Dataflow Gen2 after the raw data has been copied. The dataflow adds revenue columns to the Sales table and stores the result in the Lakehouse for later analysis. You will need to configure the settings of the Dataflow step so it triggers the right dataflow.

    Example Prompt: When the data is successfully copied, transform the data.

Note: Copilot will add a Dataflow activity and connect it to the On success output of the Copy data activity.

  • Once the Dataflow activity has run successfully and all the ingested data is transformed, you need to send an email notification to Contoso Retailers. For this step, you need to manually add the email details (recipient, subject, and body) in the Office 365 Outlook activity.

Example Prompt: Send an email notification when the data has been transformed.

  • You can use the ‘Run this pipeline’ prompt to run the pipeline, letting Contoso Retailers ingest/extract, transform, and load their sales data.

2 – Use an existing query to clean and transform data with Copilot

Before you can clean and transform data, you need to create a Dataflow Gen 2 that will assist you in ingesting your data from a Lakehouse. You can learn more about Dataflows and how to ingest data by referring to this training.

  • Contoso Retailers want to track and monitor their sales performance and are using Copilot to calculate the revenue generated from their sales so they can visualize it better.

Example Prompt: Add a GrossRevenue column and round off to 2 decimals
Adding a gross revenue column using Copilot chat pane
Note: Notice how Copilot was able to infer that the GrossRevenue column is the product of UnitPrice and OrderQty, then automatically filled in the GrossRevenue values row by row.
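The transformation Copilot generated is equivalent to the following row-by-row logic, shown here in plain Python over a couple of hypothetical sales rows (column names taken from the note above):

```python
# GrossRevenue = UnitPrice * OrderQty, rounded to 2 decimals,
# computed row by row -- the same logic Copilot applied in the dataflow.
sales = [
    {"UnitPrice": 19.99, "OrderQty": 3},
    {"UnitPrice": 5.25,  "OrderQty": 4},
]
for row in sales:
    row["GrossRevenue"] = round(row["UnitPrice"] * row["OrderQty"], 2)
```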

  • For their sales analysis, you can add a column to get the monetary value of each sale’s discount.

Example Prompt: Add a DiscountValue column and round off to 2 decimals

  • Contoso Retailers want to monitor the net revenue generated, which allows them to make data-driven decisions about which pricing strategies to implement to maximize profitability.

Example Prompt: Add a NetRevenue column and round off to 2 decimals
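The post does not spell out the exact formulas behind DiscountValue and NetRevenue, so the sketch below assumes a discount percentage column and derives both columns from it; treat the column names and formulas as illustrative assumptions:

```python
# Assumed formulas (not stated explicitly in the walkthrough):
#   DiscountValue = GrossRevenue * UnitPriceDiscountPct
#   NetRevenue    = GrossRevenue - DiscountValue
sales = [
    {"UnitPrice": 19.99, "OrderQty": 3, "UnitPriceDiscountPct": 0.10},
    {"UnitPrice": 5.25,  "OrderQty": 4, "UnitPriceDiscountPct": 0.00},
]
for row in sales:
    gross = row["UnitPrice"] * row["OrderQty"]
    discount = gross * row["UnitPriceDiscountPct"]
    row["GrossRevenue"]  = round(gross, 2)
    row["DiscountValue"] = round(discount, 2)
    row["NetRevenue"]    = round(gross - discount, 2)
```

Keeping the rounding at the end of each derivation (rather than rounding gross revenue before computing the discount) avoids compounding rounding error across the derived columns.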

You can achieve more with Copilot for Data Factory in a Data pipeline: it can help you troubleshoot pipeline errors by providing a clear error-message summary and recommendations on how to fix them. You can also use Copilot to give you a summary of the entire Data pipeline using the Summarize this Pipeline prompt.

Resources

Overview of Copilot in Fabric

Copilot for Data Factory Overview  

Enhance data quality with Copilot for Data Factory

Learn to Use Copilot in Microsoft  

Efficiently build and maintain your Data pipelines with Copilot for Data Factory: new capabilities and experiences

December 9, 2025, by Kunal Parekh