Microsoft Fabric Updates Blog

Unlocking Seamless Data Integration with the Latest Fabric Data Factory Connector Innovations

In the world of modern analytics, connectivity is everything. Your data ecosystem is only as powerful as your ability to connect to diverse sources, move data securely, and keep it fresh for business insights. That’s why Microsoft Fabric has been doubling down on Data Factory connectors—expanding coverage, improving performance, and enabling new enterprise-grade integration scenarios.

Here’s a look at the latest connector innovations and how they help you tackle real-world integration challenges.

1. Expanded Connector Coverage for Any Data Landscape

Fabric Data Factory has rapidly added support for popular enterprise data sources across Dataflow Gen2, Copy job, and pipelines (the Copy, Lookup, Script, and Get Metadata activities).

New Connectors for Copy job and Pipeline

  • AWS RDS for Oracle (Bring your own driver)
  • Azure Database for PostgreSQL 2.0
  • Azure Databricks Delta Lake
  • Cassandra
  • Greenplum
  • HDFS
  • Informix
  • Microsoft Access
  • Presto
  • SAP BW Open Hub
  • SAP Table
  • Teradata

New Connectors for Dataflow Gen2

  • Snowflake 2.0
  • Databricks 2.0
  • Google BigQuery 2.0
  • Impala 2.0
  • Netezza (Bring your own driver)
  • Vertica (Bring your own driver)
  • Oracle (built-in driver) – OPDG only

With these connectors, you can now bring together structured, semi-structured, and SaaS application data from more sources into Fabric without custom code or external ETL tools. This breadth helps eliminate silos and accelerates enterprise data modernization.

Learn more about the connector availability across Data Factory in the Connector overview documentation.

2. Performance Improvement for High-Speed Data Movement Connectors

Salesforce and Salesforce Service Cloud connectors now support reading data with partitions. This lets you pull data from Salesforce tables using multiple threads for significantly improved performance. Best of all, there’s no need to manually configure partition details: the connector intelligently detects and applies the optimal partitioning strategy. This enhancement greatly simplifies the integration experience while delivering faster throughput. The partition option is available as an advanced setting in the Salesforce connector, and we highly recommend it for long-running copy tasks that benefit from multi-threaded reads.
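To make the idea concrete, here is a minimal Python sketch of what partitioned, multi-threaded reads look like conceptually. It is not the connector’s implementation; the fetch_partition helper and the date-range boundaries are hypothetical stand-ins for the partitioning the connector derives automatically.

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical partition boundaries; the Salesforce connector derives these
# automatically, so you never configure them yourself.
partitions = [
    ("2024-01-01", "2024-04-01"),
    ("2024-04-01", "2024-07-01"),
    ("2024-07-01", "2024-10-01"),
    ("2024-10-01", "2025-01-01"),
]

def fetch_partition(bounds):
    """Placeholder for reading one slice of a Salesforce table.

    A real implementation would issue a query filtered to the given range,
    e.g. WHERE LastModifiedDate >= start AND LastModifiedDate < end.
    """
    start, end = bounds
    return f"rows where LastModifiedDate in [{start}, {end})"

# Reading the slices concurrently is what delivers the throughput gain.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(fetch_partition, partitions))

for r in results:
    print(r)
```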

Learn more details about this feature in the Partition option settings for Salesforce Connector documentation.

Partition setting in the Salesforce connector as source

3. Enterprise Readiness with Fabric Data Factory Connectors

To support mission-critical integration scenarios, we continue to enhance Fabric Data Factory connectors with features that strengthen reliability, governance, and enterprise-grade flexibility. Recent updates under the Enterprise Readiness category include:

Simplify Delta Lakehouse operations:

  • Upsert to delta table: The Lakehouse connector now supports upsert operations on Delta tables in Copy job and pipelines. Managing change in large datasets can be complex, especially when you need to keep tables in sync without overwriting entire datasets. With the new upsert to delta table feature, Fabric Data Factory makes it easy to merge new or updated records directly into your Delta tables, so you can handle incremental updates, late-arriving data, or correction scenarios without complicated scripts or manual workarounds (see the sketch after this list).
  • Learn more details about this feature in the Table action settings section of the Lakehouse connector documentation.
Upsert settings in Lakehouse connector as sink
  • Delta column mapping & deletion vector: Data structures are rarely static: new fields get added, column names change, and data models evolve as business needs grow. The Lakehouse connector now helps you handle this complexity more gracefully through delta column mapping and deletion vector support. With column mapping, you can align source and destination fields even when schema changes occur, reducing the risk of pipeline failures and minimizing manual adjustments. Deletion vector support ensures that records marked as deleted are handled cleanly during reads, so your downstream analytics always reflect the most accurate view of the data. Together, these capabilities let teams adapt to evolving data models without disruption, keeping data pipelines resilient and analytics trustworthy.
  • Learn more details about this feature in the Lakehouse connector documentation.
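For context, here is a rough sketch of what an upsert means in Delta Lake terms, written against the open-source delta-spark API rather than the connector itself. The lakehouse table and column names are hypothetical; in a Copy job or pipeline you simply select the upsert table action and key columns instead of writing this code. The final statement shows the standard Delta table properties behind column mapping and deletion vectors.

```python
from delta.tables import DeltaTable  # delta-spark package
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Hypothetical incoming batch with new and changed customer rows.
updates = spark.createDataFrame(
    [(1, "Alice", "alice@contoso.com"), (42, "New Customer", "new@contoso.com")],
    ["customer_id", "name", "email"],
)

# Upsert: update rows whose key already exists, insert the rest.
target = DeltaTable.forName(spark, "lakehouse.dbo.customers")  # hypothetical table
(
    target.alias("t")
    .merge(updates.alias("s"), "t.customer_id = s.customer_id")
    .whenMatchedUpdateAll()
    .whenNotMatchedInsertAll()
    .execute()
)

# Standard Delta table properties behind column mapping and deletion vectors.
spark.sql("""
    ALTER TABLE lakehouse.dbo.customers SET TBLPROPERTIES (
        'delta.columnMapping.mode' = 'name',
        'delta.enableDeletionVectors' = 'true'
    )
""")
```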

Handle large text data in Fabric Data Warehouse Connector

  • Modern enterprises often deal with unstructured or semi-structured data (think customer feedback, product descriptions, or logs) that doesn’t fit neatly into small, fixed-length fields. With support for the varchar(max) data type in the Fabric Data Warehouse connector, you can now ingest and store very large text values without hitting size limitations (a brief sketch follows below). This unlocks new integration scenarios such as bringing in detailed customer notes from CRM systems, storing machine-generated text, or capturing long-form content from applications. By supporting varchar(max), Fabric Data Factory ensures your data movement tasks can accommodate today’s data diversity while keeping performance and compatibility optimized for enterprise-scale workloads.
  • Learn more details about this feature in the Data Warehouse connector documentation.
Data type settings in the Data Warehouse connector
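As a rough illustration only, the snippet below creates a varchar(max) column and writes a long text value over ODBC. The connection string, table, and column names are hypothetical, and in practice the Copy activity moves the data for you.

```python
import pyodbc

# Hypothetical connection string for a warehouse SQL endpoint.
conn = pyodbc.connect(
    "Driver={ODBC Driver 18 for SQL Server};"
    "Server=<your-warehouse-sql-endpoint>;Database=<your-warehouse>;"
    "Authentication=ActiveDirectoryInteractive;"
)
cursor = conn.cursor()

# varchar(max) removes the old fixed-length ceiling on text columns.
cursor.execute(
    "CREATE TABLE dbo.customer_notes (note_id int, note_body varchar(max))"
)

long_note = "Very long customer feedback... " * 10_000  # far beyond 8,000 characters
cursor.execute(
    "INSERT INTO dbo.customer_notes (note_id, note_body) VALUES (?, ?)",
    1, long_note,
)
conn.commit()
```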

Stronger data integrity with temporal data in MongoDB & MongoDB Atlas Connector

  • Time is at the heart of many business processes, whether it’s tracking user activity, monitoring IoT signals, or recording transaction histories. With new support for Timestamp and Date data types in the MongoDB and MongoDB Atlas connectors, Fabric Data Factory ensures that temporal data is preserved with full fidelity during ingestion. Instead of being flattened into strings or approximated, these values keep their precision and semantics across your pipelines (see the sketch after this list). This not only improves data integrity but also enables richer downstream analytics, such as time-series analysis, trend reporting, or real-time monitoring, while staying consistent with the way your applications natively store and interpret time-based data.
  • Learn more details about this feature in the Data type mapping sections of the MongoDB connector documentation and the MongoDB Atlas connector documentation.
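To show what full-fidelity temporal data means in practice, here is a small pymongo sketch, separate from the connector and using hypothetical database, collection, and field names: the datetime value is stored as a native BSON Date rather than a string, which is the representation the connector’s new type mapping now preserves.

```python
from datetime import datetime, timezone
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")  # hypothetical connection
events = client["telemetry"]["device_events"]

# Stored as a native BSON Date, not a string.
events.insert_one({
    "device_id": "sensor-42",
    "recorded_at": datetime(2025, 10, 30, 14, 5, 0, tzinfo=timezone.utc),
})

doc = events.find_one({"device_id": "sensor-42"})
print(type(doc["recorded_at"]))  # <class 'datetime.datetime'>, precision preserved
```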

Strengthen Governance in Azure Databricks Connector

  • Enterprises often organize their data across multiple catalogs in Azure Databricks Unity Catalog, separating workloads by department, project, or compliance requirements. The Azure Databricks connector in Fabric Data Factory now lets users configure which Unity Catalog to read data from, giving teams precise control over their integration pipelines. Instead of being limited to a default catalog, you can now target the exact catalog that holds the data you need—whether it’s a governed production dataset, a sandbox environment for experimentation, or a catalog reserved for sensitive, compliance-regulated data. This keeps data movement projects aligned with organizational data policies, reduces cross-catalog confusion, and accelerates integration by reducing manual adjustments. By supporting catalog-level configuration, Fabric Data Factory makes it easier for enterprises to keep their data movement both flexible and secure while scaling across diverse data domains (see the sketch after this list).
  • Learn more details about this feature in the settings section of the Azure Databricks connector documentation.
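As a point of reference, when querying Azure Databricks directly you pick the Unity Catalog either by pinning it for the session or by using three-level names. The hypothetical sketch below shows both with the open-source databricks-sql-connector; in Fabric Data Factory the same choice is now simply a connector setting.

```python
from databricks import sql  # databricks-sql-connector package

# Hypothetical workspace and SQL warehouse details.
with sql.connect(
    server_hostname="adb-1234567890123456.7.azuredatabricks.net",
    http_path="/sql/1.0/warehouses/abc123",
    access_token="<personal-access-token>",
) as conn:
    with conn.cursor() as cursor:
        # Option 1: pin the catalog for the session.
        cursor.execute("USE CATALOG finance_prod")
        cursor.execute("SELECT * FROM sales.orders LIMIT 10")

        # Option 2: fully qualify with catalog.schema.table.
        cursor.execute("SELECT * FROM finance_prod.sales.orders LIMIT 10")
        print(cursor.fetchall())
```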

Greater alignment with DB2 environments

  • The DB2 connector now supports specifying a package collection directly within the pipeline, giving enterprises finer control over how queries are compiled and executed. In DB2, package collections are often used to organize and manage access to precompiled SQL packages across different applications or environments. By allowing this configuration as an advanced setting in copy activity, Fabric Data Factory makes it easier for data engineers to align pipelines with existing DB2 database setups—whether that means adhering to departmental standards, optimizing performance, or meeting governance requirements. This enhancement not only streamlines integration with DB2 systems but also ensures smoother collaboration between database administrators and integration teams, reducing friction and increasing reliability in enterprise-scale workloads.
  • Learn more details about this feature from the DB2 connector documentation.
Connection property settings in the DB2 connector

Run with the right access via Snowflake Connector

  • The Snowflake connector now allows users to specify a role directly within the pipeline, providing precise control over the privileges used during data movement. In Snowflake, roles define which objects a user can access and what actions they can perform, which is critical for enforcing security and compliance policies. Because this is exposed as an advanced setting in the Copy activity, pipelines automatically run with the correct permissions (see the sketch after this list). For instance, an enterprise may have separate roles for data analysts, finance teams, and operations teams, each with different access levels. With role-based configuration, pipelines can safely ingest data according to the appropriate access rules, reducing the risk of unauthorized access while maintaining full alignment with organizational governance. This feature empowers teams to operate securely at scale, without slowing down development or requiring manual intervention.
  • Learn more details about this feature from the Snowflake connector documentation.
Connection property settings in Snowflake connector
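For comparison, this is roughly what specifying a role looks like when connecting to Snowflake directly with snowflake-connector-python. The account, role, and warehouse names are hypothetical; in Fabric Data Factory you set the role in the Copy activity’s advanced settings instead of writing code.

```python
import snowflake.connector

# Hypothetical account details; the role limits what this session can see and do.
conn = snowflake.connector.connect(
    account="myorg-myaccount",
    user="pipeline_svc",
    password="<secret>",
    role="FINANCE_ANALYST",   # privileges used during data movement
    warehouse="LOAD_WH",
    database="FINANCE",
    schema="PUBLIC",
)

cur = conn.cursor()
cur.execute("SELECT CURRENT_ROLE(), CURRENT_WAREHOUSE()")
print(cur.fetchone())
cur.close()
conn.close()
```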

Accelerate Azure PostgreSQL Scenarios

  • Upsert on write for incremental data loads: The Azure Database for PostgreSQL version 2.0 connector now supports upsert operations, allowing you to efficiently merge new and updated records into your target tables. This is especially valuable for enterprise scenarios where data changes frequently, such as transactional systems, CRM updates, or IoT data streams. Instead of performing full table overwrites or building complex merge logic, the connector now intelligently inserts or updates records as needed (see the sketch after this list). This reduces pipeline complexity, improves performance, and ensures that your PostgreSQL tables always reflect the most accurate and up-to-date data.
  • Learn more details about this feature from the Azure Database for PostgreSQL connector documentation.
Table operation for upsert in Azure PostgreSQL connector
  • Script activity support for flexible database logic: In addition to standard copy operations, the connector now supports script activity, enabling users to execute custom SQL scripts directly within the pipeline. This provides flexibility for pre-processing, data transformations, or executing database-side logic without leaving the Fabric Data Factory environment. For example, you can run maintenance tasks, create temporary staging tables, or implement conditional logic—all as part of your automated workflow. By integrating script activity, enterprises can streamline complex data processes while keeping governance, auditing, and pipeline orchestration centralized.
  • Learn more details about this feature from the Azure Database for PostgreSQL connector documentation.
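Conceptually, the upsert the connector performs maps to PostgreSQL’s native INSERT ... ON CONFLICT statement. The hypothetical sketch below shows that statement issued through psycopg2; in a Copy job or pipeline you only choose the upsert write behavior and the key columns, and the connector handles the merge for you.

```python
import psycopg2

# Hypothetical connection details.
conn = psycopg2.connect(
    host="myserver.postgres.database.azure.com",
    dbname="sales",
    user="pipeline_svc",
    password="<secret>",
    sslmode="require",
)

upsert_sql = """
    INSERT INTO customer_orders (order_id, status, amount)
    VALUES (%s, %s, %s)
    ON CONFLICT (order_id)               -- key column the upsert matches on
    DO UPDATE SET status = EXCLUDED.status,
                  amount = EXCLUDED.amount;
"""

# The connection context manager commits the transaction on success.
with conn, conn.cursor() as cur:
    cur.execute(upsert_sql, (1001, "shipped", 259.90))
```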

4. Strengthening Security Across Fabric Data Factory Connectors

In today’s enterprise landscape, security is not optional; it’s foundational. Protecting sensitive data, enforcing governance, and maintaining compliance are top priorities for every organization. That’s why Fabric Data Factory has made significant investments to enhance security across our connectors, ensuring that data integration is not only powerful and efficient but also safe, compliant, and trusted. These updates focus on modern authentication methods, encrypted communication, and seamless identity management, helping enterprises confidently scale their analytics and data operations without compromising security.

PostgreSQL supports Microsoft Entra ID authentication

For PostgreSQL, the connector now supports Microsoft Entra ID (formerly Azure AD) authentication in Dataflow Gen2, enabling seamless integration with an organization’s centralized identity system. Users can authenticate using their corporate credentials rather than database-specific usernames and passwords, improving security and streamlining access management. This ensures that access to sensitive PostgreSQL data is governed by enterprise-wide identity policies, enhancing compliance and reducing the risk of credential misuse.
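For readers curious what Entra ID authentication involves under the covers, here is a hedged sketch of the common pattern for Azure Database for PostgreSQL: acquire a token with azure-identity and pass it in place of a password. The server, database, and user names are hypothetical, and Dataflow Gen2 performs this sign-in for you when you authenticate with your corporate credentials.

```python
import psycopg2
from azure.identity import DefaultAzureCredential

# Acquire an access token for Azure Database for PostgreSQL from Entra ID.
credential = DefaultAzureCredential()
token = credential.get_token("https://ossrdbms-aad.database.windows.net/.default")

# The token is used in place of a database password.
conn = psycopg2.connect(
    host="myserver.postgres.database.azure.com",   # hypothetical server
    dbname="analytics",
    user="data.engineer@contoso.com",              # Entra ID principal
    password=token.token,
    sslmode="require",
)
print(conn.get_dsn_parameters()["host"])
```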

Azure Database for PostgreSQL Support for TLS 1.3

Security during data transfer is critical for enterprise workloads. The connector version 2.0 now supports TLS 1.3, the latest standard for encrypted communication, providing stronger encryption, improved performance, and better protection against modern security threats. By leveraging TLS 1.3, enterprises can ensure that data in transit between Fabric Data Factory and PostgreSQL databases remains confidential, tamper-proof, and compliant with regulatory standards.

Learn more details about this feature from the Azure Database for PostgreSQL connector documentation.

Workspace identity authentication

Fabric Data Factory now supports workspace-level identity authentication, allowing connectors to authenticate securely using the managed identity of the workspace. This eliminates the need to store credentials in pipelines or configuration files, reducing security risks while simplifying credential management. Enterprises benefit from centralized identity control, making it easier to enforce access policies, audit usage, and maintain compliance across multiple teams and data sources.

Looking Ahead with Fabric Data Factory Connectors

As enterprises continue to scale their data operations, the demand for secure, flexible, and high-performance integration will only grow. The Fabric Data Factory team is committed to continuously enhancing our connectors to meet these evolving needs, whether through improved data governance, support for enterprise-grade features, or stronger security and compliance measures. Looking forward, we aim to deliver even deeper integration with modern data platforms, smarter automation for incremental and other complex scenarios, and advanced security capabilities that empower organizations to confidently manage their most critical data. By staying at the forefront of enterprise readiness, Fabric Data Factory ensures that your data movement and transformation remain not only reliable and performant but also fully aligned with the demands of today’s enterprise data landscape.
