Bringing together Fabric Real-Time Intelligence, Notebooks and Spark Structured Streaming (Preview)
Coauthored by QiXiao Wang
Building event-driven, real-time applications with Fabric Eventstreams and Spark Notebooks just got a whole lot easier. With the Preview of Spark Notebook and Real-Time Intelligence integration, a new capability that combines the richness of open-source, community-supported Spark Structured Streaming with the real-time stream processing power of Fabric Eventstreams, developers can now build low-latency, end-to-end real-time analytics and AI pipelines entirely within Microsoft Fabric.
You can now seamlessly access streaming data from Eventstreams directly inside Spark notebooks, enabling real-time insights and decision-making without the complexity and tedium of manual coding and configuration.
Why should you care?
Real-time data is at the heart of modern analytics and AI. If you have ever struggled with stitching together streaming sources, managing secrets, or writing and debugging streaming logic, this release changes the game. We have simplified the experience so you can focus on building solutions, not managing boilerplate code and infrastructure.
Here’s what you can do with these new capabilities:
Discover real-time sources instantly
Explore Eventstreams and other real-time sources through the Real-Time hub, right from within your Fabric notebooks. No more searching for connection details; everything you need is at your fingertips. You can also create new Eventstreams and start ingesting data from nearly 30 (and growing) streaming sources including CDC-enabled databases, message brokers, streaming services and public feeds.
Example scenario:
Building a fraud detection pipeline? Quickly locate the Eventstream carrying the latest transaction data and start processing it using Spark Structured Streaming without ever leaving the Fabric Notebook experience.
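As a minimal sketch of the kind of logic you might plug into such a pipeline, a flagging rule can start life as a plain Python function and later be applied to the decoded stream, for example via a PySpark UDF or inside foreachBatch. The thresholds and field names below are illustrative assumptions, not part of Fabric or Eventstreams:

```python
# Hypothetical fraud-flagging rule; thresholds and field names are
# illustrative assumptions, not part of Fabric or Eventstreams.
def is_suspicious(txn: dict, max_amount: float = 10_000.0) -> bool:
    """Flag a transaction that is unusually large or crosses countries."""
    if txn.get("amount", 0.0) > max_amount:
        return True
    # A card used outside its billing country is a common heuristic
    # signal worth routing to downstream review.
    return txn.get("card_country") != txn.get("billing_country")
```

In a notebook, a function like this could be wrapped as a PySpark UDF and applied column-wise to the DataFrame produced by the auto-generated Eventstream reader.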

Connect and start processing in minutes
Kickstart your streaming workflows with auto-generated PySpark code snippets. Whether you’re ingesting data or applying transformations, these snippets help you go from zero to streaming in record time. Just click a stream in the Explorer and choose “Read with Spark”. This automatically generates a PySpark snippet containing all the boilerplate needed to read from the source Eventstream and write the results to the console. From there, you can add complex business logic and debug using familiar Python/SQL.
Example scenario:
Need to enrich IoT sensor data with historical context for predictive maintenance? Connect to Eventstream and data in your Lakehouse, and start processing within minutes using secure, auto-generated PySpark code.
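In PySpark this enrichment would typically be a stream-static join between the Eventstream DataFrame and a Lakehouse table. The core logic can be sketched in plain Python; the device IDs and field names below are made up for illustration:

```python
# Illustrative enrichment: join live sensor readings against a static
# baseline lookup. In Fabric this would be a stream-static join with a
# Lakehouse table; names and fields here are hypothetical.
def enrich_reading(reading: dict, baselines: dict) -> dict:
    baseline = baselines.get(reading["device_id"], {})
    expected = baseline.get("avg_temp")
    enriched = dict(reading)
    enriched["expected_temp"] = expected
    # Deviation from the historical average is what drives
    # predictive-maintenance alerts downstream.
    enriched["deviation"] = (
        reading["temp"] - expected if expected is not None else None
    )
    return enriched

baselines = {"pump-01": {"avg_temp": 60.0}}
row = enrich_reading({"device_id": "pump-01", "temp": 75.5}, baselines)
```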

Reuse existing Notebooks
If your team already has Notebooks built for prototyping or testing, you can now bring them directly into your Eventstream as operational streaming processors. This lets you extend the life of existing assets, reduce duplication, and accelerate development by reusing logic that already works. With seamless notebook loading, you can evolve existing workflows into full production‑grade streaming pipelines with minimal refactoring.
Example scenario:
Your data science team has already built Notebooks for real-time anomaly detection. Use them directly from an Eventstream, adding advanced ML models for deeper insights.
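For instance, a prototyping notebook might already contain a scoring function like the one below, a generic z-score check shown only to illustrate the kind of logic that can be reused unchanged in the streaming path (the function and threshold are illustrative, not from the product):

```python
import statistics

# Generic z-score anomaly check, the sort of helper a prototyping
# notebook might already define and that can be reused as-is in a
# streaming processor (function name and threshold are illustrative).
def is_anomaly(value: float, history: list[float], threshold: float = 3.0) -> bool:
    if len(history) < 2:
        return False  # not enough data to estimate spread
    mean = statistics.fmean(history)
    stdev = statistics.stdev(history)
    if stdev == 0:
        return value != mean
    return abs(value - mean) / stdev > threshold
```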

Secure, seamless connectivity
Forget connection strings and secrets in PySpark code. The enhanced Fabric-optimized, Apache Kafka-based Spark adapter for Eventstreams ensures secure, frictionless connectivity between Fabric Spark jobs and any Eventstream, so your data stays protected while your pipelines run fast. Just specify the Eventstream ID and the default/derived stream datasource ID, and the enhanced Kafka adapter takes care of the rest: it authenticates the logged-in notebook user with Microsoft Entra ID, uses that user’s token to authorize access to the Eventstream, retrieves the connection details, and establishes a secure connection. This removes a major operational burden while keeping your pipelines fast, reliable, and secure by default.
Example scenario:
Working with sensitive financial data? Built-in security, with no secrets in code, means compliance without extra effort.
```python
from pyspark.sql import SparkSession, DataFrame
from pyspark.sql.functions import col

spark = SparkSession.builder.getOrCreate()

# Connection options for the Eventstream; the IDs (and the exact option
# keys) are placeholders — the auto-generated snippet supplies the real ones.
eventstream_options = {
    "eventstream.itemId": "<your-eventstream-id>",
    "eventstream.sourceId": "<your-stream-datasource-id>",
}

# Read from Kafka using the config map
df_raw = (
    spark.readStream
    .format("kafka")
    .options(**eventstream_options)
    .load()
)

# Decode the Kafka value payload into a readable string column
decoded_df = df_raw.select(col("value").cast("string").alias("message"))

def showDf(x: DataFrame, y: int):
    # Print messages to the console for each micro-batch
    x.show(truncate=False)

query = decoded_df.writeStream.foreachBatch(showDf).start()
```
Get started today
The Spark Notebook integration with Fabric Eventstreams is now available in Preview. Try it out and experience how easy real-time data processing can be in Microsoft Fabric. Here are some resources to help you get started:
Microsoft Fabric Eventstreams Overview – Microsoft Fabric | Microsoft Learn
How to use notebooks – Microsoft Fabric | Microsoft Learn
Real-Time Intelligence in Microsoft Fabric documentation – Microsoft Fabric | Microsoft Learn
We’d love your feedback
If you find this blog helpful, please give it a thumbs-up!
Have ideas for what you’d like to see next? Drop us a comment or reach out with suggestions—we’d love to hear what real-time scenarios you’re exploring and what topics you’d like us to cover in future posts.