Microsoft Fabric Updates Blog

COPY INTO support for secure storage with granular permissions

Introduction

The COPY INTO statement for Fabric Data Warehouse enables high-throughput data ingestion from external sources such as ADLS Gen2 and Azure Blob Storage accounts, including accounts protected by a firewall. It also supports flexible configuration options, such as specifying source file formats, handling rejected rows, and skipping header rows, which simplifies data management and ensures smooth, secure operations even with firewall-protected storage.

One challenge has been that COPY INTO required users to have “write” permissions in the control plane (a minimum of the Contributor role at the workspace level), forcing administrators to grant broader permissions than necessary.

The Solution: COPY INTO support for users with “read” permissions

To align with the principle of least privilege, we are excited to announce that a user with minimum “read” permissions on the control plane and INSERT permissions on the target table on the data plane (the SQL side) can now execute the COPY INTO statement.

How it works

Step 1: Provide the user with “read” permissions in the control plane. This can be achieved by sharing a warehouse with the user (with no additional permissions). Alternatively, add the user as a Viewer in the workspace (a Viewer gets “readData” permissions by default, which grants SELECT permissions on all tables and views within the warehouse).

Step 2: Provide granular write permissions in SQL in the data plane.
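For example, granting INSERT on just the target table is enough for ingestion; the table and user names below are illustrative placeholders, not values from this post:

```sql
-- Illustrative sketch: grant INSERT on a single target table to one user.
-- Replace dbo.Sales and [user@contoso.com] with your own table and user.
GRANT INSERT ON dbo.Sales TO [user@contoso.com];
```

Because the permission is scoped to one table, the user can ingest into that table without gaining write access to anything else in the warehouse.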

Step 3: Execute COPY INTO. With the necessary permissions in place, you’re ready to begin importing your data. The assigned user can start ingesting data from an ADLS Gen2 account or an Azure Blob Storage account (public or behind a firewall).
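A minimal sketch of the statement the user would run, assuming the same illustrative dbo.Sales table as above; the storage account, container, and path are placeholders you would replace with your own:

```sql
-- Illustrative sketch: ingest Parquet files from an ADLS Gen2 path
-- into dbo.Sales. Account, container, and path are placeholders.
COPY INTO dbo.Sales
FROM 'https://mystorageaccount.dfs.core.windows.net/mycontainer/sales/*.parquet'
WITH (
    FILE_TYPE = 'PARQUET'
);
```

For CSV sources, options such as FIRSTROW (to skip a header row) and FIELDTERMINATOR can be added to the WITH clause; see the documentation for the full option list.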

For detailed information on the prerequisites and considerations for using COPY INTO, please refer to our documentation.
