Troubleshooting Fabric Spark applications without production workspace access
Problem Statement
You have Fabric Spark Notebooks deployed in a production workspace, but you don’t have direct access to it. The production support team reports that a Fabric Spark job has failed in the production workspace, and you need to analyze the logs to troubleshoot the issue.
Solution
To troubleshoot Spark applications, Spark engineers typically use the Spark UI, which provides details of Jobs, Stages, Storage, Environment, Executors, and SQL.
In most organizations, there are designated Fabric Spark developers who have contributor access to developer workspaces for the development and profiling of Spark applications. Testing workspaces are used by testing specialists to validate these applications, and developers typically do not have access to these environments. Once testing is complete, the Spark Notebooks are deployed to production, where only production support engineers have access to them. Production support engineers, however, are generally not Spark subject matter experts (SMEs), and they typically have read-only access to the production workspace.
If any Spark job in the production workspace requires investigation, production support engineers can download the logs from the Spark UI and share them with developers for further analysis. This approach follows the principle of least-privilege access.

Developers who don’t have access to the workspace can set up a Spark History Server locally to view all the event logs. In this blog, you will learn how to configure a local history server to render the event logs.
Download the Fabric Spark Event Log – for Production Support
Microsoft Fabric provides a Spark UI for ongoing and completed jobs, enabling you to download event logs.
Two ways to navigate to the Spark UI
1. While running a notebook interactively, you can access the Spark Web UI directly.

2. You can also navigate to the Monitoring Hub, select the application, and access the Spark UI from the Spark History Server.

For more information on accessing the Spark History Server and its enhancements, refer to the official documentation.
If you are a production support engineer, download the Event Log from the Spark History Server and send it to the developer team. You can do this even with viewer-only access to the Fabric workspace.
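A Spark event log is a newline-delimited JSON file in which each line carries an `Event` field such as `SparkListenerApplicationStart` or `SparkListenerJobStart`. Before sending the file on (or after receiving it), you can sanity-check it with a few lines of Python. This is a minimal sketch; the file name is a placeholder for your downloaded log.

```python
import json
from collections import Counter

def summarize_event_log(path):
    """Count Spark listener event types in a newline-delimited JSON event log."""
    counts = Counter()
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if not line:
                continue
            event = json.loads(line)
            counts[event.get("Event", "unknown")] += 1
    return counts

# Usage (file name is a placeholder):
#   summarize_event_log("eventlog.txt")
```

A log that parses cleanly and contains an application-start event is usually intact; a truncated download typically fails on the last line.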

Next, we will see how to render this downloaded event log in a local Spark History Server.
Configure Local Spark History Server – for Developers
If you already have Apache Spark installed on your machine, you can skip to step 5. Otherwise, follow the instructions below to configure Spark on Windows.
1. Download the latest version of Apache Spark from Downloads | Apache Spark.

2. Navigate to the winutils repository and download the winutils.exe and hadoop.dll binaries. Place these files in the bin folder of your Spark directory.

3. Add the following to your system environment variables:
SPARK_HOME C:\spark-3.5.3-bin-hadoop3
HADOOP_HOME C:\spark-3.5.3-bin-hadoop3
Additionally, add %SPARK_HOME%\bin and %HADOOP_HOME%\bin to the Path variable, or directly add C:\spark-3.5.3-bin-hadoop3\bin to the Path.
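To confirm the variables above are picked up, you can run a quick check that `SPARK_HOME` and `HADOOP_HOME` are set and that `winutils.exe` sits under the Hadoop bin folder. A minimal sketch; the paths it checks are whatever you configured in the previous step.

```python
import os
from pathlib import Path

def check_spark_env():
    """Return a list of problems found with the SPARK_HOME/HADOOP_HOME setup."""
    problems = []
    spark_home = os.environ.get("SPARK_HOME")
    hadoop_home = os.environ.get("HADOOP_HOME")
    if not spark_home:
        problems.append("SPARK_HOME is not set")
    elif not (Path(spark_home) / "bin").is_dir():
        problems.append("no bin folder under SPARK_HOME")
    if not hadoop_home:
        problems.append("HADOOP_HOME is not set")
    elif not (Path(hadoop_home) / "bin" / "winutils.exe").is_file():
        problems.append("winutils.exe not found under HADOOP_HOME\\bin")
    return problems

# An empty list means the environment looks ready for spark-shell.
```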
4. Open Command Prompt and type spark-shell. This should launch the Spark Scala REPL, confirming that Spark is successfully installed.
5. Create a folder for storing event logs. For example, create an eventlogs folder inside your Spark directory: C:\spark-3.5.3-bin-hadoop3\eventlogs. Unzip the files from your downloaded event log zip file and place them in this folder.
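If you prefer to script the unzip step, a few lines of Python will extract the archive into the folder above. A minimal sketch; the zip file name and target folder are placeholders for your own paths.

```python
import zipfile
from pathlib import Path

def extract_event_logs(zip_path, target_dir):
    """Extract all files from an event log zip into the history server's log folder."""
    target = Path(target_dir)
    target.mkdir(parents=True, exist_ok=True)
    with zipfile.ZipFile(zip_path) as zf:
        zf.extractall(target)
    return sorted(p.name for p in target.iterdir())

# Example (paths are placeholders):
#   extract_event_logs("eventlog.zip", r"C:\spark-3.5.3-bin-hadoop3\eventlogs")
```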

6. Open the spark-defaults.conf.template file in C:\spark-3.5.3-bin-hadoop3\conf and add the following configurations.
spark.eventLog.enabled true
spark.eventLog.dir file:///C:/spark-3.5.3-bin-hadoop3/eventlogs
spark.history.fs.logDirectory file:///C:/spark-3.5.3-bin-hadoop3/eventlogs
7. Increase the Spark daemon memory (optional): For applications with a large number of tasks (over 100,000) or event logs exceeding 1 GB, increase the daemon memory (the default is 1 GB). Create a file named spark-env.cmd in C:\spark-3.5.3-bin-hadoop3\conf and set the daemon memory.
set SPARK_DAEMON_MEMORY=10g

8. In Command Prompt, run the following command to start the Spark History Server.
spark-class org.apache.spark.deploy.history.HistoryServer --properties-file "C:\spark-3.5.3-bin-hadoop3\conf\spark-defaults.conf.template"

9. Open your browser and navigate to http://localhost:18080. This will display the Spark History Server UI, where you can select the application and inspect its Jobs, Stages, Executors, and SQL tabs.

Additional Resources
Debug apps with the extended Apache Spark history server – Microsoft Fabric | Microsoft Learn
Apache Spark monitoring overview – Microsoft Fabric | Microsoft Learn