Microsoft Fabric Updates Blog

Announcing the preview of the REST API for Livy for Data Engineering.

The Fabric Livy endpoint lets users submit and execute their Spark code on the Spark compute within a designated Fabric workspace, eliminating the need to create Notebook or Spark Job Definition artifacts. The integration with a specific Lakehouse artifact ensures straightforward access to data stored in OneLake. Additionally, the Livy API offers the ability to customize the execution environment through its integration with the Environment artifact.

When a request is sent to the Fabric Livy endpoint, the user-submitted code can be executed in two different modes:

Session Job:

  • A Livy session job entails establishing a Spark session that remains active throughout the interaction with the Livy API. This is particularly useful for interactive and iterative workloads.
  • A Spark session starts when a job is submitted and lasts until the user ends it or the system terminates it after 20 minutes of inactivity. Throughout the session, multiple jobs can run, sharing state and cached data between runs.
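To make the flow concrete, here is a minimal Python sketch of a session job, using the sessions endpoint described later in this post. Treat it as an illustration only: the workspace ID, lakehouse ID, and bearer token are placeholders, and the request and response shapes follow the standard Livy protocol (create a session, wait for it to become idle, submit statements, delete the session).

```python
import time
import requests

# Placeholders -- substitute your own IDs and a valid Microsoft Entra token.
WS_ID = "<workspace_id>"
LAKEHOUSE_ID = "<lakehouse_id>"
TOKEN = "<entra_bearer_token>"  # assumed: acquired separately (see below)

BASE = (f"https://api.fabric.microsoft.com/v1/workspaces/{WS_ID}"
        f"/lakehouses/{LAKEHOUSE_ID}/livyapi/versions/2023-12-01")
HEADERS = {"Authorization": f"Bearer {TOKEN}", "Content-Type": "application/json"}

# 1. Create a session; it stays alive until you delete it or it idles out.
session = requests.post(f"{BASE}/sessions", headers=HEADERS, json={}).json()
session_url = f"{BASE}/sessions/{session['id']}"

# 2. Wait for the session to become idle before submitting code.
while requests.get(session_url, headers=HEADERS).json()["state"] != "idle":
    time.sleep(5)

# 3. Run a statement; subsequent statements share the same Spark state.
stmt = requests.post(f"{session_url}/statements", headers=HEADERS,
                     json={"code": "spark.range(10).count()"}).json()
stmt_url = f"{session_url}/statements/{stmt['id']}"
while requests.get(stmt_url, headers=HEADERS).json()["state"] != "available":
    time.sleep(2)
print(requests.get(stmt_url, headers=HEADERS).json()["output"])

# 4. End the session explicitly when you're done.
requests.delete(session_url, headers=HEADERS)
```

Because every statement runs inside the same session, a table or DataFrame computed by one statement is immediately available to the next, which is what makes this mode a good fit for interactive, iterative work.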

Batch Job:

  • A Livy batch job entails submitting a Spark application for a single execution. In contrast to a Livy session job, a batch job does not sustain an ongoing Spark session.
  • With Livy batch jobs, each job initiates a new Spark session, which ends when the job finishes. This approach works well for tasks that don’t rely on previous computations or need to maintain state between jobs.
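A batch job, by contrast, is a single submission that points Livy at a self-contained Spark application. The sketch below is hypothetical: it assumes a PySpark file already uploaded to the lakehouse Files area, and the payload shape and terminal states follow the standard Livy batch protocol.

```python
import time
import requests

WS_ID = "<workspace_id>"
LAKEHOUSE_ID = "<lakehouse_id>"
TOKEN = "<entra_bearer_token>"  # assumed: acquired separately (see below)

BASE = (f"https://api.fabric.microsoft.com/v1/workspaces/{WS_ID}"
        f"/lakehouses/{LAKEHOUSE_ID}/livyapi/versions/2023-12-01")
HEADERS = {"Authorization": f"Bearer {TOKEN}", "Content-Type": "application/json"}

# Submit a PySpark file (assumed already uploaded to the lakehouse Files area)
# as a one-shot batch; a fresh Spark session starts and ends with this job.
payload = {
    "name": "demo-batch",
    "file": "abfss://<path-to-your-lakehouse>/Files/job.py",  # hypothetical path
}
batch = requests.post(f"{BASE}/batches", headers=HEADERS, json=payload).json()
batch_url = f"{BASE}/batches/{batch['id']}"

# Poll until the batch reaches a terminal state.
state = requests.get(batch_url, headers=HEADERS).json()["state"]
while state not in ("success", "dead", "killed"):
    time.sleep(10)
    state = requests.get(batch_url, headers=HEADERS).json()["state"]
print(f"Batch finished with state: {state}")
```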

The endpoint of the batch job API looks like: https://api.fabric.microsoft.com/v1/workspaces/ws_id/lakehouses/lakehouse_id/livyapi/versions/2023-12-01/batches

The endpoint of the session job API looks like: https://api.fabric.microsoft.com/v1/workspaces/ws_id/lakehouses/lakehouse_id/livyapi/versions/2023-12-01/sessions

  • ws_id: the workspace to which the hosting Lakehouse artifact belongs.
  • lakehouse_id: the artifact ID of the hosting Lakehouse. All the capacity consumption from these API calls will be associated with this artifact.
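Calls to both endpoints must carry a Microsoft Entra bearer token in the Authorization header. One way to obtain one is sketched below with the azure-identity package; the right credential type and scopes depend on your tenant and app registration, so treat this as an assumption rather than the only option.

```python
from azure.identity import InteractiveBrowserCredential

# Hypothetical auth flow: an interactive browser sign-in requesting the
# delegated permissions already consented for the Fabric API. Your tenant
# may require different scopes or a non-interactive credential type.
credential = InteractiveBrowserCredential()
token = credential.get_token("https://api.fabric.microsoft.com/.default").token

headers = {"Authorization": f"Bearer {token}", "Content-Type": "application/json"}
```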

To access the Livy API endpoint, you need to create a Lakehouse artifact. Once it’s set up, you’ll find the Livy API endpoint in the settings panel.

Lakehouse settings showing the Livy endpoint

Our Livy API documentation provides more details and shows you how to create your first Livy batch or session job.
