
Announcing New Storage Integration Support in Snowflake Connector for Fabric Data Factory

We are thrilled to announce the release of a highly anticipated feature in Fabric Data Factory: storage integration support for the Snowflake connector in data pipelines. This new capability enhances the security of your data pipelines, enabling seamless and secure integration between Snowflake and external cloud storage providers.

What is Storage Integration?

Storage integration in Snowflake allows data engineers to connect Snowflake with external storage solutions (such as Azure Blob Storage) using a secure, centralized approach. It simplifies reading and writing data between Snowflake and these storage systems while maintaining strict security protocols. Previously, users had to manually configure access roles and manage keys, which could be both cumbersome and error-prone. Learn more here.
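To make this concrete, here is a minimal sketch of how a storage integration for Azure Blob Storage is typically created on the Snowflake side. The integration name, tenant ID, and container URL below are placeholders for illustration; substitute the values for your own environment.

```sql
-- Create a storage integration that delegates authentication for an
-- Azure Blob Storage container to a Snowflake-managed identity,
-- so no account keys or SAS tokens are stored in pipeline definitions.
CREATE STORAGE INTEGRATION azure_int
  TYPE = EXTERNAL_STAGE
  STORAGE_PROVIDER = 'AZURE'
  ENABLED = TRUE
  AZURE_TENANT_ID = '<your-azure-tenant-id>'   -- placeholder
  STORAGE_ALLOWED_LOCATIONS = ('azure://myaccount.blob.core.windows.net/mycontainer/');

-- Retrieve the consent URL and the multi-tenant app name that must be
-- granted access to the storage account in Azure.
DESC STORAGE INTEGRATION azure_int;
```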

What are the Key Benefits?

  • Enhanced Security: The feature leverages Snowflake’s storage integration framework, ensuring access to external storage is managed through a Snowflake-assigned role, eliminating the need to expose sensitive credentials. This centralized control enhances security by governing data access and permissions through a single source.
  • Secure Authentication: Users can now implement more secure authentication methods when connecting to Azure Blob Storage, whether it's used as a source, destination, or staging storage.
  • Fine-Grained Control: The feature also respects the PREVENT_UNLOAD_TO_INLINE_URL parameter setting, giving users the option to prevent ad-hoc data unloads to external cloud storage locations (see the example after this list). This provides an extra layer of control for managing data flows. Learn more here.
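For reference, this guardrail is enforced on the Snowflake side with a standard parameter setting. A minimal sketch, assuming you want to apply it account-wide (which typically requires an administrator role):

```sql
-- Disallow ad-hoc unloads to inline external URLs; COPY INTO <location>
-- must then target a named stage, such as one backed by a storage integration.
ALTER ACCOUNT SET PREVENT_UNLOAD_TO_INLINE_URL = TRUE;
```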

How to use the new feature

To use storage integration in your existing or new data pipelines, configure the storage integration settings directly within the data pipeline in Fabric Data Factory.

  1. Create a connection to your Snowflake instance in Fabric.
  2. Set up your copy activity in the data pipeline by choosing the Snowflake connection as the source or destination.
  3. Specify the storage integration configured in your Snowflake instance under the source or destination settings tab.
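One prerequisite worth noting: the Snowflake role used by your Fabric connection needs usage privileges on the integration before the copy activity can reference it. A minimal sketch, reusing the hypothetical integration name from above and a placeholder role name:

```sql
-- Allow the role behind the Fabric connection to reference the integration.
GRANT USAGE ON INTEGRATION azure_int TO ROLE fabric_pipeline_role;
```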

For detailed steps on how to utilize this feature and its prerequisites, you can refer to our documentation.

Looking Ahead

At Fabric Data Factory, we are committed to continually improving the security, user experience, and capabilities of our data integration services. We look forward to seeing how this new capability helps you accelerate your data integration projects and achieve even greater outcomes. Stay tuned for more exciting features and updates!
