Unleashing the power of data for analytics applications with the new Microsoft Fabric API for GraphQL

Microsoft Fabric is an all-in-one SaaS analytics platform for enterprises that covers everything from data movement to data science, Real-Time Analytics, and business intelligence. It offers a comprehensive suite of services, including data lake, data engineering, data science, data integration, and more, all in one place. Fabric exposes different REST APIs that let you automate administrative procedures and processes, such as listing event streams, deleting OneLake shortcuts, or creating a Lakehouse. While these APIs are very useful for automating development tasks programmatically, until today there was no easy way to interact directly with your data using Fabric APIs.

We are excited to announce the public preview of the new Microsoft Fabric API for GraphQL, which allows you to create and use APIs to interact with Fabric data in a simple, flexible, and powerful way. With Fabric API for GraphQL, data engineers and scientists can create a data API to connect to different data sources in seconds, use the APIs in their workflows, or share the API endpoints with app development teams to speed up business data analytics application development. In this blog post, we will show you how to use the new GraphQL API, how it can benefit enterprise application development, and how it can help enhance your users’ experience. Whether you are new to GraphQL or already familiar with it, you will find this feature useful and easy to use. Let’s get started!

What is GraphQL?

GraphQL is a query language for APIs and a runtime system that lets you specify what data you want to fetch from your backend, without worrying about how it is structured, what technology is used, or where it is stored. With GraphQL, you can access data from multiple sources stored in different types of databases with a single API call, and get only the data you need and nothing more, optimizing bandwidth and improving performance.

Unlike endpoint-based APIs, a GraphQL API is defined by a schema organized in terms of types, which contain fields. GraphQL uses types to ensure applications only ask for what is possible and to provide clear and helpful errors, which helps avoid writing manual parsing code. Types describe the data and its relationships with other types, and two special types in the schema are defined for interacting with data:

  • Queries are used to read data.
  • Mutations are used to write/update/delete data.

The schema is based on a simple schema definition language (SDL) that describes and documents the data exposed in the API. For example, here we have a simple schema with Book and Author types, where a book has an author and an author can have many books. Brackets are used to define “many” relationships, and exclamation marks specify mandatory fields. The defined queries can list books or retrieve a single book or author by ID, and the mutations allow API clients to create, update, or delete books, providing full CRUDL (create, read, update, delete, list) capabilities:

type Book {
  id: ID!
  title: String!
  author: Author!
}

type Author {
  id: ID!
  name: String!
  books: [Book!]!
}

type Query {
  books: [Book!]!
  book(id: ID!): Book
  author(id: ID!): Author
}

type Mutation {
  createBook(title: String!, authorId: ID!): Book!
  updateBook(id: ID!, title: String): Book
  deleteBook(id: ID!): Book
} 

A client can send a single request to the GraphQL endpoint that fulfills a specific query, for example to retrieve authors. All subsequent queries and mutations use the same endpoint, simplifying the client connection setup. After receiving a valid, authorized request, the GraphQL engine connects to all the data sources where the data is stored to retrieve the specific fields defined in the query. The engine then constructs a single JSON response payload from the fields received from each data source and sends it back to the client, abstracting all the backend complexity, including how many databases are involved.
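
For example, using the Book and Author schema above (the ID and title values here are hypothetical), a client could retrieve an author together with all of their book titles, and create a new book through the same endpoint:

# Fetch a single author and the titles of their books in one request
query {
  author(id: "1") {
    name
    books {
      title
    }
  }
}

# Create a new book for an existing author using the same endpoint
mutation {
  createBook(title: "A New Book", authorId: "1") {
    id
    title
  }
}

The response mirrors the shape of the request: the client receives a JSON payload such as { "data": { "author": { "name": "...", "books": [ { "title": "..." } ] } } }, containing only the fields that were asked for.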

You can find more information about GraphQL schemas and types on the graphql.org website.

Microsoft Fabric API for GraphQL: a built-in data interface for Fabric

While GraphQL is a powerful technology for abstracting backend complexities and accessing data from different clients and applications, manually configuring and setting up a GraphQL server can be a challenging and complex undertaking. From integrating with existing systems, to planning the schema and type definitions, to defining the business logic for queries and mutations, to optimizing for performance and security, there are many steps that take time to implement.

The new Fabric API for GraphQL allows you to quickly expose data from one or multiple Fabric data sources in seconds, automatically building a fully operational and secure GraphQL API with baked-in enterprise best practices while abstracting all the API infrastructure setup and management complexity. The GraphQL schema, types, queries, mutations, and the business logic to access the data are all automatically generated based on the selected data sources, with built-in support for stored procedures, which can also be exposed via the API. GraphQL APIs in Fabric support the following data sources in the public preview:

  • Microsoft Fabric Data Warehouse
  • Microsoft Fabric Lakehouse via SQL Analytics Endpoint
  • Microsoft Fabric Mirrored Databases via SQL Analytics Endpoint
  • And more to come!

Getting Started

Creating a GraphQL API in Fabric to expose your data to clients is quick and easy. First, create a new workspace or choose an existing one, then navigate to the Data Engineering experience in the Fabric Portal. Click on API for GraphQL (Preview) to get started.

Give your API a name and click Create to start the API creation process.

Now we need to feed our new API with data. Click on Select data source.

In this example, we select an existing Warehouse (created with sample data as described in the documentation), then click Connect.

Next, we select the tables or stored procedures from the Warehouse that we want to expose via GraphQL, then click Load.

The GraphQL schema is automatically generated based on the selected tables, and we can now start prototyping queries or mutations to read and write data in the API Editor, with built-in IntelliSense/code completion support to help us with the syntax.
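
As a rough illustration, a first query in the API Editor might look like the sketch below. The query name, the items wrapper, and the field names are assumptions based on a table named Weather; the actual names in your schema are generated from the tables you selected.

# Illustrative only: query, wrapper, and field names depend on your selected tables
query {
  weathers(first: 10) {
    items {
      WeatherID
      Condition
    }
  }
}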

It’s also possible to create relationships between types by selecting the ellipsis next to the data source, then clicking Manage relationships followed by New relationship.

Define the cardinality, types, and fields, then click Create relationship. In this example, we create a relationship between Weather and Geography, where a type of weather can occur in multiple geographies.

We’re ready to test the relationship we created with a new query. A single query to the GraphQL endpoint fans out to two different tables (Weather and Geography) and then joins the results, returning them in a familiar JSON format.
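
As a sketch, such a query could look like the example below, assuming the relationship surfaces the related Geography rows through a geographies field on the Weather type; the actual query and field names depend on your tables and columns.

# Illustrative only: field names depend on the columns in your Warehouse tables
query {
  weathers {
    items {
      Condition
      geographies {
        items {
          CountryRegion
        }
      }
    }
  }
}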

Once the GraphQL API is ready, we can share the endpoint with analytics development teams so they can integrate it with their applications: click Copy endpoint at the top of the Schema Explorer to copy it. In addition to the endpoint, applications require a Tenant ID and a Client ID to authorize API calls. You can find more information on how to retrieve these IDs from Microsoft Entra here.

Conclusion

Now it’s easier than ever to build analytics applications on top of your data using the new Microsoft Fabric API for GraphQL. GraphQL is a powerful technology widely adopted today and expected to be used in more than 60% of enterprises in production by 2027. It provides a complete and understandable description of the data in your API, gives clients the power to ask for exactly what they need and nothing more in a single request, makes it easier to evolve APIs over time, and enables powerful developer tools.

Fabric generates a fully fledged, scalable GraphQL API in seconds, based on the specific data you want to expose to enterprise applications, with secure access based on managed identities. Microsoft Fabric API for GraphQL is currently in Public Preview and free to use until July 1st, 2024, after which API calls (queries/mutations) will count against your Capacity Units usage. You can find more information about GraphQL in Fabric in our documentation.

How will you use the new GraphQL integration in your next Fabric project? What else would you like to see in the Microsoft Fabric API for GraphQL? Share with us in the comments section below.
