Microsoft Fabric Updates Blog

Using Microsoft Fabric for Generative AI: A Guide to Building and Improving RAG Systems

In the previous blog, we explained how to use Microsoft Fabric to build custom AI applications, focusing on transforming your data into valuable knowledge for Generative AI applications. Specifically, we demonstrated how to implement a Retrieval-Augmented Generation (RAG) system using Microsoft Fabric, integrating Azure OpenAI and Azure AI Search.

We are excited to share a new series of tutorials that will build on this foundation, helping you turn your data into actionable insights for Generative AI applications using Microsoft Fabric and leveraging the capabilities of Azure OpenAI and Azure AI Search.

By the end of the series, you will have a deeper understanding of how to adapt your AI solutions to your specific datasets and improve the effectiveness and relevance of your applications. You can also access and run the tutorials here.

Series Overview

This tutorial series consists of three main notebooks, each covering a crucial aspect of building and optimizing RAG systems:

1. Building a RAG System in Microsoft Fabric

The first notebook offers a detailed, step-by-step guide to constructing a RAG system within Microsoft Fabric. It will walk you through the core components, including setting up data sources and configuring AI Skill integrations, ensuring you can get your RAG system operational quickly and efficiently.
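To make the core loop concrete, here is a minimal sketch of a retrieve-then-generate flow you could run from a Fabric notebook, assuming a bring-your-own-key setup with the openai and azure-search-documents Python packages. The endpoints, keys, deployment name, index name, and the "content" field are placeholders for illustration, not values from the notebook.

```python
# A minimal retrieve-then-generate sketch; all endpoints, keys, and names below
# are placeholders to be replaced with your own resources.
from openai import AzureOpenAI
from azure.core.credentials import AzureKeyCredential
from azure.search.documents import SearchClient

# Azure OpenAI client used for chat completions (and embeddings, if needed).
aoai = AzureOpenAI(
    azure_endpoint="https://<your-openai-resource>.openai.azure.com",
    api_key="<your-openai-key>",
    api_version="2024-02-01",
)

# Azure AI Search client pointing at an index that already contains your documents.
search = SearchClient(
    endpoint="https://<your-search-resource>.search.windows.net",
    index_name="rag-demo-index",
    credential=AzureKeyCredential("<your-search-key>"),
)

def answer(question: str) -> str:
    # 1. Retrieve the most relevant passages for the question.
    results = search.search(search_text=question, top=3)
    context = "\n\n".join(doc["content"] for doc in results)  # "content" is a placeholder field

    # 2. Ask the chat model to answer using only the retrieved context.
    response = aoai.chat.completions.create(
        model="gpt-4o",  # name of your chat deployment
        messages=[
            {"role": "system",
             "content": "Answer using only the provided context.\n\n" + context},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

print(answer("Who painted the Mona Lisa?"))
```

The first notebook walks through each of these pieces in more detail, including how the documents are chunked, embedded, and indexed before retrieval.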

2. Evaluating RAG Performance

This notebook focuses on evaluating the performance of your RAG application by introducing key metrics such as Groundedness, Relevance, and Similarity. These metrics help you assess how well your system is retrieving and generating accurate, contextually appropriate responses. You’ll learn how to fine-tune your system based on these metrics to ensure it meets user expectations and delivers high-quality results.
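As a rough illustration of how one of these metrics can be scored, the sketch below uses an LLM-as-judge prompt to rate Groundedness on a 1-5 scale. This is a simplified stand-in for the notebook's evaluation code, reusing the aoai client and placeholder deployment name from the earlier sketch.

```python
# A simplified LLM-as-judge sketch for one metric (Groundedness); the notebook
# may implement its metrics differently. `aoai` is the AzureOpenAI client from
# the previous sketch, and "gpt-4o" is a placeholder deployment name.
GROUNDEDNESS_PROMPT = """Rate from 1 (ungrounded) to 5 (fully grounded) how well
the ANSWER is supported by the CONTEXT. Reply with a single integer.

CONTEXT:
{context}

ANSWER:
{answer}"""

def groundedness(context: str, answer: str) -> int:
    response = aoai.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user",
                   "content": GROUNDEDNESS_PROMPT.format(context=context, answer=answer)}],
    )
    return int(response.choices[0].message.content.strip())

# Example: score each (context, answer) pair produced by the RAG system and average.
pairs = [("Paris is the capital of France.", "The capital of France is Paris.")]
scores = [groundedness(c, a) for c, a in pairs]
print(sum(scores) / len(scores))
```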

3. Exploring Azure Search Types for RAG Systems

In the final notebook, we explore the various Azure AI Search types available and examine their impact on the performance of your RAG system. Understanding how to select the most appropriate search type for your use case is essential for optimizing the accuracy and efficiency of your application. This notebook will guide you through different search methods and how they can be applied to improve your system’s response generation.
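For orientation, the sketch below contrasts three common query styles (keyword, vector, and hybrid) against the same Azure AI Search index, reusing the search and aoai clients from the first sketch. The embedding deployment name and the contentVector field name are assumptions, so check the notebook for the exact index schema and query options it uses.

```python
# Three query styles against the same index: keyword, vector, and hybrid.
# Assumes a recent azure-search-documents package, the `search` and `aoai`
# clients from the earlier sketch, and a vector field named "contentVector".
from azure.search.documents.models import VectorizedQuery

question = "Who painted the Mona Lisa?"

# Embed the question with your embedding deployment (placeholder name).
embedding = aoai.embeddings.create(
    model="text-embedding-ada-002", input=question
).data[0].embedding

vector_query = VectorizedQuery(
    vector=embedding, k_nearest_neighbors=3, fields="contentVector"
)

# 1. Keyword (full-text) search only.
keyword_results = search.search(search_text=question, top=3)

# 2. Pure vector search.
vector_results = search.search(search_text=None, vector_queries=[vector_query], top=3)

# 3. Hybrid search: keyword and vector results fused into one ranked list.
hybrid_results = search.search(search_text=question, vector_queries=[vector_query], top=3)
```

Each retrieval mode can feed the same generation step, so you can compare the evaluation metrics from the previous notebook across search types.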

In this tutorial series, we will be using a modified version of the CMU Question/Answer Dataset as a demonstration dataset. This dataset has been adjusted to comply with the different licenses associated with its subsets: only rows from S08/S09 are included in the modified dataset, with a reference to the ExtractedPath. For simplicity, the data has been cleaned and structured into a single table.
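As an example of how such a table might be landed in a Fabric lakehouse, the snippet below reads a cleaned CSV and saves it as a Delta table; the file path and table name are illustrative placeholders rather than the tutorial's actual layout.

```python
# A sketch of landing the cleaned dataset as a Delta table in a Fabric lakehouse.
# The file path and table name are placeholders; `spark` and `display` are
# provided by the Fabric notebook environment.
df = spark.read.option("header", True).csv("Files/cmu_qa_modified.csv")

# Persist the single flattened table the tutorials work from.
df.write.mode("overwrite").format("delta").saveAsTable("cmu_qa")

display(spark.sql("SELECT * FROM cmu_qa LIMIT 5"))
```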

However, it’s important to note that this choice of dataset is purely for demonstration and learning purposes. You can apply the same concepts and techniques covered in these tutorials to your own datasets, adapting them to your specific needs and use cases.

This approach will help you understand how to implement and fine-tune your RAG system for a wide variety of data sources, allowing you to tailor it to real-world applications within your organization.

To build your own RAG application, Microsoft Fabric offers two options for integrating Azure OpenAI capabilities into your RAG system:

  1. Native Integration: Fabric integrates seamlessly with Azure AI services, allowing you to use prebuilt AI models without prerequisites. This option is recommended, as you can use your Fabric authentication to access the AI services, and all usage is billed against your Fabric capacity. There’s no need for subscription keys or resource IDs, which simplifies setup and lets you focus on application development rather than configuration management (see the sketch after this list).
  2. Bring-Your-Own-Key: You can provision your own AI services on Azure and bring your API keys to Fabric. This option is useful when a service you need isn’t available among the prebuilt models, and it gives you more control over your usage and allows for custom integration based on your needs.
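As an illustration of the native integration path, the sketch below calls a prebuilt Azure OpenAI model through SynapseML from a Fabric notebook, with no keys or endpoints to configure. The deployment name and message layout are assumptions based on the SynapseML OpenAIChatCompletion transformer, so refer to the Fabric documentation for the exact usage.

```python
# A sketch of Fabric's native (prebuilt) Azure OpenAI integration via SynapseML.
# No subscription key or endpoint is set: authentication and billing go through
# your Fabric capacity. The deployment name and message schema are assumptions.
from pyspark.sql import Row
from synapse.ml.services.openai import OpenAIChatCompletion

# One row per conversation; each message is a struct of (role, content, name).
chat_df = spark.createDataFrame([
    ([Row(role="user", content="Summarize what a RAG system does in one sentence.",
          name="user")],)
]).toDF("messages")

chat = (
    OpenAIChatCompletion()
    .setDeploymentName("gpt-4o")      # placeholder prebuilt model name
    .setMessagesCol("messages")
    .setOutputCol("response")
)

display(chat.transform(chat_df))
```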

What You’ll Gain

By working through these notebooks, you will gain hands-on experience with building, evaluating, and optimizing RAG systems in Microsoft Fabric. Each tutorial is designed to provide you with the technical insights needed to adapt these systems to your specific data and use cases, enhancing the overall performance and reliability of your AI applications.

Stay tuned for more tutorials in this series, where we will continue to dive deeper into advanced topics and configurations to help you get the most out of your Generative AI experience with Microsoft Fabric!

Post Author(s):
Amir Jafari – Senior Product Manager in Azure Data.
Alexandra Savelieva – Principal AI Engineer in Azure Data.

