Microsoft Fabric Updates Blog

Fast copy in Dataflows Gen2

Dataflows help with ingesting and transforming data. With the introduction of dataflow scale-out with the SQL DW compute, we are able to transform your data at scale. However, to do this at scale, your data needs to be ingested first.

With the introduction of Fast copy, you can ingest terabytes of data with the easy experience of dataflows, but with the scalable backend of Pipeline’s Copy activity.

After enabling this capability, dataflows automatically switch the backend when the data size exceeds 100 MB, without you needing to change anything during authoring of the dataflow.

After the dataflow refresh, you can easily check whether Fast copy was used during your run by looking at the entity status in the Refresh history experience.

With “Require fast copy”, you can start a ‘debugging’ session: when Fast copy cannot be used for the specific query, the dataflow refresh is cancelled, so you do not have to wait until the refresh times out.

Using the Fast copy indicators in the Query Settings’ Steps pane, you can easily check if your query can run with Fast copy.

Data sources currently supported

Fast copy is currently only supported for the following data source connectors:

  • Azure Data Lake Storage Gen2
  • Azure Blob Storage
  • Azure SQL Database
  • Lakehouse
  • PostgreSQL

Note: The on-premises data gateway and VNet data gateway are not yet supported. For Azure Blob Storage and Azure Data Lake Storage Gen2, only Parquet and CSV files are supported.

The Copy activity supports only a limited set of transformations when connecting to a file source.

Additional transformations can be applied by splitting the ingestion and transformation steps into separate queries so that DW compute can be leveraged after your data has been ingested into OneLake.
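As a hedged sketch of this staging pattern (the query and column names RawSales and Amount are placeholders, not from the original post), a dependent query simply references the staged ingestion query and applies the extra transformations there:

```powerquery-m
// Assumes a staged query named "RawSales" that only ingests the source data,
// keeping it Fast copy-eligible. This dependent query does the rest of the work,
// which runs on the DW compute after the data has landed in OneLake.
let
    Source = RawSales,
    Promoted = Table.PromoteHeaders(Source),
    // Example of a transformation the Copy activity does not support for file sources
    NonNull = Table.SelectRows(Promoted, each [Amount] <> null)
in
    NonNull
```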

For Azure SQL Database and PostgreSQL as a source, any transformation that can fold into a native query is supported.
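For instance, assuming a hypothetical Orders table (server, database, and column names below are illustrative placeholders), a row filter and column selection like the following fold into a single native SQL statement and therefore remain Fast copy-eligible:

```powerquery-m
let
    // Server and database names are placeholders for illustration only
    Source = Sql.Database("contoso.database.windows.net", "SalesDb"),
    Orders = Source{[Schema = "dbo", Item = "Orders"]}[Data],
    // Both steps fold into one native query (WHERE clause + SELECT column list)
    Recent = Table.SelectRows(Orders, each [OrderDate] >= #date(2024, 1, 1)),
    Slim = Table.SelectColumns(Recent, {"OrderId", "OrderDate", "Amount"})
in
    Slim
```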

When directly loading the query to an output destination, the following is supported:

  • Lakehouse

If you want to use another output destination, you can stage the query first and then reference it from a second query.

Supported data types per storage location:

  Data type                           DataflowStagingLakehouse   Fabric Lakehouse (LH) Output
  Action                              No                         No
  Any                                 No                         No
  Binary                              No                         No
  DateTimeZone                        Yes                        No
  Duration                            No                         No
  Function                            No                         No
  None                                No                         No
  Null                                No                         No
  Time                                Yes                        Yes
  Type                                No                         No
  Structured (List, Record, Table)    No                         No

Prerequisites

  • Fabric capacity
  • Only .csv and .parquet files are supported.
  • At least 1 million rows when using Azure SQL Database as a source

How to use fast copy

Navigate to a premium workspace and create a Dataflow Gen2 using the appropriate Fabric endpoint.

Inside the Power Query editor, select the Options button and turn on Fast copy in the Scale tab.


Go to Get Data and select Azure Data Lake Storage Gen2 as a source and fill in the details for your container. Then use the Combine files functionality.
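Behind the scenes, the Combine files experience generates M along the lines of this simplified sketch (the storage account URL and container are placeholders; the real generated code also builds a sample-file transform function and promotes headers):

```powerquery-m
let
    // Storage account URL and container path are placeholders
    Source = AzureStorage.DataLake("https://contoso.dfs.core.windows.net/raw"),
    // Keep only the CSV files in the container
    CsvFiles = Table.SelectRows(Source, each Text.EndsWith([Name], ".csv")),
    // Parse each file and append all results into one table
    Parsed = Table.AddColumn(CsvFiles, "Data", each Csv.Document([Content])),
    Combined = Table.Combine(Parsed[Data])
in
    Combined
```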


To ensure Fast copy can be leveraged, only apply supported transformations listed at the beginning of this article. If you want to apply other transformations, stage the data first and reference the query. Make any additional transformations on the dependent query.

Optionally, you can enforce Fast copy on the query you want to test. To do so, right-click the query and select “Require fast copy”.


Optionally, set Lakehouse as output destination. For any other destination, stage and reference your query first.

In the Query Settings’ Steps pane, check the query folding and Fast copy indicators to confirm that your query can run with Fast copy.


When you are ready, publish the dataflow. After the refresh completes, you can check in the Refresh history experience whether Fast copy was used.

