Microsoft Fabric Updates Blog

Integrating On-Premises Data into Microsoft Fabric Using Data Pipelines in Data Factory

We are thrilled to announce the public preview of on-premises connectivity for Data pipelines in Microsoft Fabric.

Using the on-premises data gateway, customers can connect to on-premises data sources from dataflows and data pipelines with Data Factory in Microsoft Fabric. This enhancement significantly broadens the scope of data integration capabilities: organizations can keep databases and other data sources on their on-premises networks while securely integrating them with Microsoft Fabric in the cloud.

Thank you for all the product feedback from customers and the Microsoft Fabric community who have worked closely with us to deliver this new capability!

Let’s help you get started!

Create an on-premises data gateway

  1. An on-premises data gateway is software that you install within your local network, on a machine that can reach your data sources. For detailed instructions on how to download and install it, refer to Install an on-premises data gateway.
  2. Sign in with your user account to register the gateway; it is then ready for use. (A sketch for confirming the registration programmatically follows these steps.)
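Once the gateway is installed and registered, you can optionally confirm that it is visible to your tenant. The following is a minimal sketch, not an official sample: it assumes the azure-identity and requests Python packages are installed and that the signed-in account is allowed to call the Power BI REST API's gateways endpoint.

```python
# Minimal sketch: confirm the on-premises data gateway is registered.
# Assumes the azure-identity and requests packages are installed and the
# signed-in user can call the Power BI REST API.
import requests
from azure.identity import InteractiveBrowserCredential

# Acquire a user token for the Power BI REST API scope.
credential = InteractiveBrowserCredential()
token = credential.get_token(
    "https://analysis.windows.net/powerbi/api/.default"
).token

# List the gateways visible to the signed-in user.
response = requests.get(
    "https://api.powerbi.com/v1.0/myorg/gateways",
    headers={"Authorization": f"Bearer {token}"},
)
response.raise_for_status()

for gateway in response.json().get("value", []):
    print(gateway["id"], gateway["name"])
```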

Create a connection for your on-premises data source

  1. Select the settings button (the gear icon) at the top right of the page, then choose Manage connections and gateways from the dropdown menu that appears.

  2. Select New to add a connection. In the New connection dialog that appears, select On-premises, and then provide your gateway cluster along with the associated resource type and the relevant connection information. (A sketch for verifying the connection programmatically follows.)
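If you want to double-check the new connection outside the portal, one option is to list the data sources registered on your gateway through the same Power BI REST API. This sketch reuses the token from the previous example; GATEWAY_ID is a hypothetical placeholder that you would replace with an ID returned by the gateways listing above.

```python
# Sketch: list the data sources (connections) registered on a gateway.
# GATEWAY_ID is a placeholder; use an ID returned by the gateways listing.
GATEWAY_ID = "00000000-0000-0000-0000-000000000000"

response = requests.get(
    f"https://api.powerbi.com/v1.0/myorg/gateways/{GATEWAY_ID}/datasources",
    headers={"Authorization": f"Bearer {token}"},
)
response.raise_for_status()

for ds in response.json().get("value", []):
    # Each entry describes one connection: its type and connection details.
    print(ds["id"], ds.get("datasourceType"), ds.get("connectionDetails"))
```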

Using on-premises data in a pipeline

  1. Go to your workspace and create a Data Pipeline.

  2. Add a new source to the pipeline copy activity and select the connection established in the previous step.

  3. Select a destination for your data from the on-premises source.

  4. Run the pipeline.

You have now created a pipeline to load data from an on-premises data source into a cloud destination.
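Running the pipeline from the workspace is the simplest option, but for automation you can also trigger an on-demand run through the Fabric REST API's job scheduler. The sketch below is only an illustration under a few assumptions: the workspace and pipeline GUIDs shown are placeholders for your own item IDs, and the token is acquired for the Fabric API scope with azure-identity.

```python
# Sketch: trigger an on-demand run of a Data pipeline via the Fabric REST API.
# WORKSPACE_ID and PIPELINE_ID are placeholders for your own item GUIDs.
import requests
from azure.identity import InteractiveBrowserCredential

credential = InteractiveBrowserCredential()
fabric_token = credential.get_token("https://api.fabric.microsoft.com/.default").token

WORKSPACE_ID = "00000000-0000-0000-0000-000000000000"
PIPELINE_ID = "11111111-1111-1111-1111-111111111111"

response = requests.post(
    f"https://api.fabric.microsoft.com/v1/workspaces/{WORKSPACE_ID}"
    f"/items/{PIPELINE_ID}/jobInstances?jobType=Pipeline",
    headers={"Authorization": f"Bearer {fabric_token}"},
)
response.raise_for_status()

# A successful request returns 202 Accepted; the Location header points at the
# job instance, which you can poll for run status.
print(response.status_code, response.headers.get("Location"))
```

After a successful call, the run shows up in the workspace monitoring view just as a manually started run would.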

Resources to help you get started

Have any questions or feedback? Leave a comment below!
