Copy Data from Lakehouse in Another Workspace Using Data pipeline

In this post, we will walk through how to copy data between Lakehouses in different workspaces using a Data pipeline.

In this example, we will copy data from a Lakehouse in another workspace to a Lakehouse in the current workspace by leveraging parameters to specify the workspace and Lakehouse.

Follow the steps below:

  1. In a Data pipeline, add a Copy activity to the canvas. Under the Source tab, specify a Lakehouse in another workspace as the connection.
  • In the Connection section, select Use dynamic content at the bottom of the drop-down list.
  • In the pop-up Add dynamic content pane, under the Parameters tab, select +.
Screenshot showing the Add dynamic content pane.
  • Specify a name and a default value for your parameter.

Note that the parameter value should be the Lakehouse object ID. To get it, open your Lakehouse; the ID is the GUID that follows /lakehouses/ in the URL.

Screenshot showing the Lakehouse object ID.
  • Select Save to return to the Add dynamic content pane. Select your parameter so it appears in the expression box, and then select OK.
  2. Back on the pipeline page, you can see the parameter expression (for example, @pipeline().parameters.LakehouseObjectID) in the Connection section. Next, specify the ID of the workspace where your Lakehouse is located.

To get your Workspace ID, open your workspace; the ID is the GUID that follows /groups/ in the URL. If you would rather pull both IDs with a script, see the sketch below.
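Here is a minimal Python sketch for extracting the two IDs. The URL shapes (a GUID after /groups/ and after /lakehouses/) come from the notes above; the helper name and the sample URL are hypothetical.

```python
import re

def extract_id(url: str, segment: str) -> str | None:
    """Return the GUID that follows /<segment>/ in a Fabric URL, if any."""
    match = re.search(rf"/{segment}/([0-9a-fA-F-]{{36}})", url)
    return match.group(1) if match else None

# Hypothetical example URL following the patterns described above.
url = ("https://app.fabric.microsoft.com/groups/"
       "11111111-2222-3333-4444-555555555555/lakehouses/"
       "aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee")

print(extract_id(url, "groups"))      # Workspace ID
print(extract_id(url, "lakehouses"))  # Lakehouse object ID
```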

  3. Select Refresh next to Table name, and select OK in the pop-up pane since you already specified a default value for the parameter. Then select the table to copy from the drop-down list.
  4. On the Destination tab, select the Lakehouse you want to copy data to from the Connection drop-down list. Then specify the destination table.
  5. Select Run. In the pop-up Pipeline run pane, select OK. After the Data pipeline runs, you can see that the table has been copied to the destination Lakehouse.
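As an aside, the Lakehouse object ID used for the parameter in steps 1 and 2 can also be looked up programmatically instead of being copied from the URL. The sketch below assumes the Fabric REST API's List Lakehouses endpoint (GET /v1/workspaces/{workspaceId}/lakehouses) and a valid Microsoft Entra bearer token with Fabric API permissions; verify both against the current Fabric REST API reference before relying on them.

```python
import requests

def list_lakehouses(workspace_id: str, token: str) -> list[dict]:
    """List the Lakehouses in a workspace; each item's 'id' is the object ID."""
    resp = requests.get(
        f"https://api.fabric.microsoft.com/v1/workspaces/{workspace_id}/lakehouses",
        headers={"Authorization": f"Bearer {token}"},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json().get("value", [])

# Hypothetical usage:
# for lh in list_lakehouses("11111111-2222-3333-4444-555555555555", token):
#     print(lh["displayName"], lh["id"])
```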
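Similarly, the Run button in step 5 has a REST equivalent. This sketch assumes the Run On Demand Item Job endpoint (POST /v1/workspaces/{workspaceId}/items/{itemId}/jobs/instances?jobType=Pipeline); check the exact route and jobType value against the Fabric REST API docs before using it.

```python
import requests

def run_pipeline(workspace_id: str, pipeline_id: str, token: str) -> str:
    """Trigger a pipeline run; returns the job-instance URL (Location header)."""
    resp = requests.post(
        f"https://api.fabric.microsoft.com/v1/workspaces/{workspace_id}"
        f"/items/{pipeline_id}/jobs/instances",
        params={"jobType": "Pipeline"},
        headers={"Authorization": f"Bearer {token}"},
        timeout=30,
    )
    resp.raise_for_status()  # a successful request returns 202 Accepted
    return resp.headers.get("Location", "")
```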

We plan to introduce a native cross-workspace experience that will let you select Lakehouses and other items in another workspace directly. Please stay tuned!

Have any questions or feedback? Leave a comment below!
