Microsoft Fabric Updates Blog

Bringing together Fabric Real-time Intelligence, Notebook and Spark Structured Streaming (Preview)

Coauthored by QiXiao Wang

Building event-driven, real-time applications with Fabric Eventstreams and Spark notebooks just got a whole lot easier. With the Preview of the Spark Notebook and Real-Time Intelligence integration, a new capability that brings together the open-source richness of Spark Structured Streaming and the real-time stream processing power of Fabric Eventstreams, developers can now build low-latency, end-to-end real-time analytics and AI pipelines entirely within Microsoft Fabric.

You can now seamlessly access streaming data from Eventstreams directly inside Spark notebooks, enabling real-time insights and decision-making without the complexity and tedium of manual coding and configuration.

Why should you care?

Real-time data is at the heart of modern analytics and AI. If you have ever struggled with stitching together streaming sources, managing secrets, or writing and debugging streaming logic, this release changes the game. We have simplified the experience so you can focus on building solutions, not managing boilerplate code and infrastructure.

Here’s what you can do with these new capabilities:

Discover real-time sources instantly

Explore Eventstreams and other real-time sources through the Real-Time hub, right from within your Fabric notebooks. No more searching for connection details; everything you need is at your fingertips. You can also create new Eventstreams and start ingesting data from nearly 30 (and growing) streaming sources including CDC-enabled databases, message brokers, streaming services and public feeds.

Example scenario:

Building a fraud detection pipeline? Quickly locate the Eventstream carrying the latest transaction data and start processing it using Spark Structured Streaming without ever leaving the Fabric Notebook experience.
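As a sketch of what such a pipeline might look like, here is one possible shape for the fraud-detection step. The item and datasource IDs are placeholders, and the JSON schema, column names, and 10,000 threshold are illustrative assumptions, not part of the product:

```python
# Illustrative sketch: flag high-value transactions arriving on an Eventstream.

FRAUD_THRESHOLD = 10_000.0  # hypothetical cutoff for "suspicious" amounts


def is_suspicious(amount: float, threshold: float = FRAUD_THRESHOLD) -> bool:
    """Pure-Python rule, testable without a Spark session."""
    return amount > threshold


def build_fraud_stream(spark):
    # Imports kept local so the rule above stays importable without pyspark.
    from pyspark.sql.functions import col, from_json
    from pyspark.sql.types import StructType, StringType, DoubleType

    # Assumed payload schema for the transaction events.
    schema = (StructType()
              .add("transaction_id", StringType())
              .add("amount", DoubleType()))

    options = {
        "eventstream.itemid": "<ENTER ITEMID FOR YOUR EVENTSTREAM>",
        "eventstream.datasourceid": "<ENTER DATASOURCEID>",
    }

    # Read the stream, decode the JSON value, and keep only suspicious rows.
    raw = spark.readStream.format("kafka").options(**options).load()
    txns = (raw
            .select(from_json(col("value").cast("string"), schema).alias("t"))
            .select("t.*"))
    return txns.filter(col("amount") > FRAUD_THRESHOLD)
```

The threshold rule is factored into a plain function so it can be unit-tested and later swapped for a real scoring model without touching the stream wiring.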

Screenshot of a Fabric Notebook showing the Explorer used to add an Eventstream from the Real-Time hub. The left "Explorer" panel is initially empty, with a "No data sources added" placeholder and a green "Add data items" button; the main area shows a code editor with a welcome comment.
Real-Time Hub view inside Fabric Notebook — discover Eventstreams in seconds

Connect and start processing in minutes

Kickstart your streaming workflows with auto-generated PySpark code snippets. Whether you’re ingesting data or applying transformations, these snippets help you go from zero to streaming in record time. Just click a stream in the Explorer and choose “Read with Spark”. This generates a PySpark snippet with all the boilerplate needed to read from the source Eventstream and write the results to the console sink. From there, you can add complex business logic and debug using familiar Python/SQL.

Example scenario:

Need to enrich IoT sensor data with historical context for predictive maintenance? Connect to Eventstream and data in your Lakehouse, and start processing within minutes using secure, auto-generated PySpark code.
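One way this enrichment could be sketched is a stream-static join against a Lakehouse Delta table. Everything here is a hypothetical assumption for illustration: the `maintenance_history` table, the `device_id` and `hours_since_service` columns, and the 500-hour service interval:

```python
# Illustrative sketch: enrich a live sensor stream with Lakehouse history.

SERVICE_INTERVAL_HOURS = 500.0  # hypothetical maintenance interval


def maintenance_due(hours_since_service: float,
                    interval: float = SERVICE_INTERVAL_HOURS) -> bool:
    """Pure-Python rule, testable without a Spark session."""
    return hours_since_service >= interval


def build_enriched_stream(spark, stream_df):
    # Imports kept local so the rule above stays importable without pyspark.
    from pyspark.sql.functions import col

    # Static side of a stream-static join: a Lakehouse table (assumed name)
    # holding per-device service history.
    history = spark.read.table("maintenance_history")

    # Left-join each micro-batch of readings with the history, then derive
    # a needs_service flag from the assumed hours_since_service column.
    return (stream_df
            .join(history, on="device_id", how="left")
            .withColumn("needs_service",
                        col("hours_since_service") >= SERVICE_INTERVAL_HOURS))
```

A stream-static join like this keeps the streaming side simple: the Lakehouse table is treated as the slowly changing reference data, and the flagging rule stays a plain function that is easy to test.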

Animated screenshot of a Fabric Spark Notebook: the code editor shows a welcome message, the sidebar shows the Eventstream selected in the previous step, and the ribbon includes Home, Edit, Tools, and Run tabs with options for managing environments and data connections. The animation walks through the steps that make the Notebook automatically generate code to connect to the selected Eventstream.
Auto-generated PySpark snippet in Fabric Notebook — your streaming pipeline starts here

Reuse existing Notebooks

If your team already has Notebooks built for prototyping or testing, you can now bring them directly into your Eventstream as operational streaming processors. This lets you extend the life of existing assets, reduce duplication, and accelerate development by reusing logic that already works. With seamless notebook loading, you can evolve existing workflows into full production-grade streaming pipelines with minimal refactoring.

Example scenario:

Your data science team has already built Notebooks for real-time anomaly detection. Use them directly from an Eventstream, adding advanced ML models for deeper insights.
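As a hypothetical sketch of the kind of logic such a notebook might contain, here is a simple z-score rule applied per micro-batch via `foreachBatch`. The threshold, the `value` column name, and the scoring rule itself are all illustrative assumptions, not the product's API:

```python
# Illustrative sketch: per-batch anomaly detection with a z-score rule.

Z_THRESHOLD = 3.0  # hypothetical cutoff in standard deviations


def zscore(value: float, mean: float, stddev: float) -> float:
    """Standard score; returns 0.0 when the stddev is degenerate."""
    return 0.0 if stddev == 0 else (value - mean) / stddev


def is_anomaly(value: float, mean: float, stddev: float,
               threshold: float = Z_THRESHOLD) -> bool:
    """Pure-Python rule, testable without a Spark session."""
    return abs(zscore(value, mean, stddev)) > threshold


def detect_anomalies(batch_df, batch_id):
    # Imports kept local so the rules above stay importable without pyspark.
    from pyspark.sql.functions import col, avg, stddev as sd, abs as sabs

    # Compute batch statistics, then surface rows far from the batch mean.
    stats = batch_df.agg(avg("value").alias("mu"),
                         sd("value").alias("sigma")).first()
    if stats["sigma"] is None:
        return  # too few rows in this micro-batch to score
    (batch_df
     .filter(sabs(col("value") - stats["mu"]) > Z_THRESHOLD * stats["sigma"])
     .show())
```

Because the handler matches the `foreachBatch` signature, a notebook built around it for prototyping can be pointed at live Eventstream data without restructuring.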

Animated screenshot of a Fabric Eventstream showing how to select and load an existing Notebook to process a stream of synthetic stock-market data. The user adds a Spark Notebook as a destination, which opens a right-side panel with drop-downs to select a Fabric workspace and a Notebook within it. After reviewing and validating the parameters, the user saves the configuration and publishes the changes.
Load a Spark Notebook as an Eventstream destination — reuse and collaborate

Secure, seamless connectivity

Forget connection strings and secrets in PySpark code. The enhanced Fabric-optimized, Apache Kafka-based Spark adapter for Eventstreams ensures secure, frictionless connectivity between Fabric Spark jobs and any Eventstream, so your data stays protected while your pipelines run fast. Just specify the Eventstream ID and the default or derived stream datasource ID, and the enhanced Kafka adapter takes care of the rest: it authenticates with Microsoft Entra ID, uses the logged-in notebook user's token to authorize access to the Eventstream, retrieves the connection details, and establishes a secure connection. This removes a major operational burden while ensuring your pipelines stay fast, reliable, and secure by default.

Example scenario:

Working with sensitive financial data? Built-in security, with no secrets in code, means compliance without extra effort.

from pyspark.sql import SparkSession
from pyspark.sql.functions import col
from pyspark.sql.types import StringType
from pyspark.sql.dataframe import DataFrame

eventstream_options = {
   "eventstream.itemid": '<ENTER ITEMID FOR YOUR EVENTSTREAM>',
   "eventstream.datasourceid": '<ENTER DATASOURCEID FOR THE NOTEBOOK DESTINATION>'}

# Read from the Eventstream via the Kafka source using the config map
df_raw = spark.readStream.format("kafka").options(**eventstream_options).load()

# Decode the Kafka key/value byte payloads into strings
decoded_df = df_raw.select(
   col("key").cast(StringType()).alias("key"),
   col("value").cast(StringType()).alias("value"),
   col("partition"),
   col("offset")
)

# foreachBatch handler: print each micro-batch to the console
def show_df(batch_df: DataFrame, batch_id: int):
   batch_df.show()

query = decoded_df.writeStream.foreachBatch(show_df).outputMode("append").start()
query.awaitTermination()

Automatically generated PySpark snippet using the enhanced Kafka adapter for Eventstreams
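A natural next step, once the console output looks right, is to swap the console sink for a Lakehouse table. The sketch below is not generated by the product: the `eventstream_raw` table name and the `Files/checkpoints` root are hypothetical examples, and `decoded_df` is assumed to be the streaming DataFrame from the snippet above:

```python
# Illustrative sketch: replace the console sink with a Lakehouse Delta table.

def checkpoint_path(table_name: str, root: str = "Files/checkpoints") -> str:
    """Build a per-table checkpoint location so each query recovers cleanly."""
    return f"{root}/{table_name}"


def write_to_lakehouse(decoded_df, table_name: str = "eventstream_raw"):
    # Append each micro-batch to a Delta table; the checkpoint lets Spark
    # resume from where it left off after a restart.
    return (decoded_df.writeStream
            .format("delta")
            .outputMode("append")
            .option("checkpointLocation", checkpoint_path(table_name))
            .toTable(table_name))
```

Keeping one checkpoint location per target table is what makes the stream restartable without reprocessing or duplicating events.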

Get started today

The Spark Notebook integration with Fabric Eventstreams is now available in Preview. Try it out and experience how easy real-time data processing can be in Microsoft Fabric. Here are some resources to help you get started:

Microsoft Fabric Eventstreams Overview – Microsoft Fabric | Microsoft Learn

How to use notebooks – Microsoft Fabric | Microsoft Learn

Real-Time Intelligence in Microsoft Fabric documentation – Microsoft Fabric | Microsoft Learn

We’d love your feedback

If you find this blog helpful, please give it a thumbs-up!
Have ideas for what you’d like to see next? Drop us a comment or reach out with suggestions—we’d love to hear what real-time scenarios you’re exploring and what topics you’d like us to cover in future posts.
