JStall answers the age-old question: "What is my Java application doing right now?"
JStall is a small command-line tool for one-shot inspection of running JVMs using thread dumps and short, on-demand profiling.
Features:
- Deadlock detection: Find JVM-reported deadlocks quickly
- Hot thread identification: See which threads are doing the most work
- Thread activity categorization: Automatically classify threads by activity (I/O, Network, Database, etc.)
- Dependency graph: Visualize which threads wait on locks held by others
- Starvation detection: Find threads waiting on the same lock with no progress
- Intelligent stack filtering: Collapse framework internals, focus on application code
- Offline analysis: Analyze existing thread dumps
- Flamegraph generation: Short profiling runs with async-profiler
- Smart filtering: Target JVMs by name/class instead of PID
- Multi-execution: Analyze multiple JVMs in parallel for faster results
- JVM support checks: Warn if the target JVM is likely out of support (based on java.version.date from jcmd VM.system_properties)
- Supports Java 11+: Works with all modern Java versions as a target, but requires Java 17+ to run
- AI-powered analysis: Get intelligent insights from thread dumps using LLMs (supports local models via Ollama)
- Record & Replay: Record diagnostic data for later analysis or sharing as a zip file
Requires Java 17+ to run.
Example: Find out what your application (in our example MyApplication with pid 12345) is doing right now
# Quick status check (checks for deadlocks and hot threads)
jstall 12345
# Or explicitly run the status command, which also supports JVM name filters
jstall status MyApplication
# AI-powered analysis with intelligent insights
jstall ai 12345
# Analyze all JVMs on the system with AI
jstall ai full
# Find threads consuming most CPU
jstall most-work 12345
# Detect threads stuck waiting on locks
jstall waiting-threads 12345
# Show thread dependency graph (which threads wait on which)
jstall dependency-graph 12345
# Generate a flamegraph
jstall flame 12345
When analyzing a live JVM (by PID or filter), JStall also collects jcmd VM.system_properties and checks java.version.date:
- <= 4 months old: emit nothing (JVM is considered "young enough")
- > 4 months old: show a hint that you might want to update
- > 1 year old: treat as totally outdated and return a non-zero exit code
This is meant as a lightweight guardrail: if your JVM is very old, analysis results and production behavior can be misleading.
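The threshold logic above can be sketched as follows. This is a minimal illustrative sketch; the JvmAgeCheck class and its method names are hypothetical, not JStall's actual internals.

```java
import java.time.LocalDate;
import java.time.Period;

// Map a JVM's java.version.date to one of the three outcomes
// described above. Class and method names are illustrative.
public class JvmAgeCheck {
    public enum Verdict { YOUNG_ENOUGH, UPDATE_HINT, TOTALLY_OUTDATED }

    public static Verdict classify(LocalDate versionDate, LocalDate today) {
        Period age = Period.between(versionDate, today);
        int months = age.getYears() * 12 + age.getMonths();
        if (months > 12 || (months == 12 && age.getDays() > 0)) {
            return Verdict.TOTALLY_OUTDATED; // > 1 year: non-zero exit code
        }
        if (months > 4 || (months == 4 && age.getDays() > 0)) {
            return Verdict.UPDATE_HINT;      // > 4 months: show an update hint
        }
        return Verdict.YOUNG_ENOUGH;         // <= 4 months: emit nothing
    }

    public static void main(String[] args) {
        // prints TOTALLY_OUTDATED (about 14 months old)
        System.out.println(classify(LocalDate.parse("2024-03-19"), LocalDate.parse("2025-06-01")));
    }
}
```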
Download the latest executable from the releases page.
Or use with JBang: jbang jstall@parttimenerd/jstall <pid>
> jstall --help
Usage: jstall [-hV] [COMMAND]
One-shot JVM inspection tool
-h, --help Show this help message and exit.
-V, --version Print version information and exit.
Commands:
status Run multiple analyzers over thread dumps (default command)
jvm-support Check whether the target JVM is likely still supported (based on java.version.date)
deadlock Detect JVM-reported thread deadlocks
most-work Identify threads doing the most work across dumps
flame Generate a flamegraph of the application using async-profiler
threads List all threads sorted by CPU time
waiting-threads Identify threads waiting without progress (potentially
starving)
dependency-graph Show thread dependencies (lock wait relationships)
ai AI-powered analysis using LLM
ai full AI-powered analysis of all JVMs on the system
list List running JVM processes (excluding this tool)
Add the following dependency to your pom.xml:
<dependency>
<groupId>me.bechberger</groupId>
<artifactId>jstall</artifactId>
<version>0.4.11</version>
</dependency>
Use filter strings to match JVMs by main class name instead of PIDs:
jstall list MyApp # List matching JVMs
jstall status MyApplication # Analyze matching JVMs
jstall deadlock kafka # Check deadlocks in matching JVMs
How it works: Filter strings match main class names (case-insensitive). When multiple JVMs match, they're analyzed in parallel with results sorted by PID.
Note: flame requires exactly one JVM (fails if filter matches multiple).
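The matching semantics described above can be sketched roughly like this. The Jvm record and method names are hypothetical, not JStall's actual API.

```java
import java.util.Comparator;
import java.util.List;
import java.util.Locale;

// A filter matches a JVM when its main class name contains the filter
// text, case-insensitively; matches are ordered by PID.
public class JvmFilter {
    public record Jvm(long pid, String mainClass) {}

    public static boolean matches(String mainClass, String filter) {
        return mainClass.toLowerCase(Locale.ROOT)
                        .contains(filter.toLowerCase(Locale.ROOT));
    }

    public static List<Jvm> select(List<Jvm> jvms, String filter) {
        return jvms.stream()
                   .filter(j -> matches(j.mainClass(), filter))
                   .sorted(Comparator.comparingLong(Jvm::pid)) // results sorted by PID
                   .toList();
    }

    public static void main(String[] args) {
        List<Jvm> jvms = List.of(new Jvm(67890, "org.apache.kafka.Kafka"),
                                 new Jvm(12345, "com.example.MyApplication"));
        System.out.println(select(jvms, "kafka")); // only the Kafka JVM matches
    }
}
```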
Usage: jstall list [-hV] [<filter>]
List running JVM processes (excluding this tool)
[<filter>] Optional filter - only show JVMs whose main class contains
this text
-h, --help Show this help message and exit.
-V, --version Print version information and exit.
Example:
> jstall list kafka
67890 org.apache.kafka.Kafka
Exit codes: 0 = JVMs found, 1 = no JVMs found
Runs multiple analyzers (deadlock, most-work, threads, dependency-graph) over shared thread dumps.
Usage: jstall status [-hV] [--top=<top>] [--no-native] [--dumps=<dumps>]
[--interval=<interval>] [--keep] [--intelligent-filter] [<targets>...]
Run multiple analyzers over thread dumps (default command)
[<targets>...] PID, filter or dump files
--dumps=<dumps> Number of dumps to collect, default is none
-h, --help Show this help message and exit.
--intelligent-filter Use intelligent stack trace filtering (collapses
internal frames, focuses on application code)
--interval=<interval>
Interval between dumps, default is 5s
--keep Persist dumps to disk
--no-native Ignore threads without stack traces (typically
native/system threads)
--top=<top> Number of top threads (default: 3)
-V, --version Print version information and exit.
Exit codes:
0 = no issues
2 = deadlock detected
10 = JVM is totally outdated (> 1 year based on java.version.date)
Note: Supports multiple targets analyzed in parallel.
Checks whether the target JVM is reasonably up-to-date based on java.version.date from jcmd VM.system_properties.
Usage: jstall jvm-support [-hV] [--dumps=<dumps>] [--interval=<interval>] [--keep]
[--intelligent-filter] [<targets>...]
Exit codes:
0 = JVM is supported / only mildly outdated
10 = JVM is totally outdated (> 1 year based on java.version.date)
Usage: jstall most-work [-hV] [--top=<top>] [--no-native]
[--stack-depth=<stackDepth>] [--dumps=<dumps>] [--interval=<interval>] [--keep]
[--intelligent-filter] [<targets>...]
Identify threads doing the most work across dumps
[<targets>...] PID, filter or dump files
--dumps=<dumps> Number of dumps to collect, default is none
-h, --help Show this help message and exit.
--intelligent-filter Use intelligent stack trace filtering (collapses
internal frames, focuses on application code)
--interval=<interval> Interval between dumps, default is 5s
--keep Persist dumps to disk
--no-native Ignore threads without stack traces (typically
native/system threads)
--stack-depth=<stackDepth>
Stack trace depth to show (default: 10, 0=all,
in intelligent mode: max relevant frames)
--top=<top> Number of top threads to show (default: 3)
-V, --version Print version information and exit.
Shows CPU time, CPU percentage, core utilization, state distribution, and activity categorization for top threads.
Usage: jstall deadlock [-hV] [--dumps=<dumps>] [--interval=<interval>] [--keep]
[--intelligent-filter] [<targets>...]
Detect JVM-reported thread deadlocks
[<targets>...] PID, filter or dump files
--dumps=<dumps> Number of dumps to collect, default is none
-h, --help Show this help message and exit.
--intelligent-filter Use intelligent stack trace filtering (collapses
internal frames, focuses on application code)
--interval=<interval>
Interval between dumps, default is 5s
--keep Persist dumps to disk
-V, --version Print version information and exit.
Exit codes: 0 = no deadlock, 2 = deadlock detected
Lists all threads sorted by CPU time in a table format.
Usage: jstall threads [-hV] [--no-native] [--dumps=<dumps>]
[--interval=<interval>] [--keep] [--intelligent-filter] [<targets>...]
List all threads sorted by CPU time
[<targets>...] PID, filter or dump files
--dumps=<dumps> Number of dumps to collect, default is none
-h, --help Show this help message and exit.
--intelligent-filter Use intelligent stack trace filtering (collapses
internal frames, focuses on application code)
--interval=<interval>
Interval between dumps, default is 5s
--keep Persist dumps to disk
--no-native Ignore threads without stack traces (typically
native/system threads)
-V, --version Print version information and exit.
Shows thread name, CPU time, CPU %, state distribution, activity categorization, and top stack frame.
Identifies threads waiting on the same lock instance across all dumps with no CPU progress.
Usage: jstall waiting-threads [-hV] [--no-native] [--stack-depth=<stackDepth>]
[--dumps=<dumps>] [--interval=<interval>] [--keep] [--intelligent-filter]
[<targets>...]
Identify threads waiting without progress (potentially starving)
[<targets>...] PID, filter or dump files
--dumps=<dumps> Number of dumps to collect, default is none
-h, --help Show this help message and exit.
--intelligent-filter Use intelligent stack trace filtering (collapses
internal frames, focuses on application code)
--interval=<interval> Interval between dumps, default is 5s
--keep Persist dumps to disk
--no-native Ignore threads without stack traces (typically
native/system threads)
--stack-depth=<stackDepth>
Stack trace depth to show (1=inline, 0=all,
default: 1, in intelligent mode: max relevant
frames)
-V, --version Print version information and exit.
Detection criteria: Thread in ALL dumps, WAITING/TIMED_WAITING state, CPU ≤ 0.0001s, same lock instance.
Highlights lock contention when multiple threads are blocked on the same lock.
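The detection criteria above can be sketched roughly as follows, assuming one observation per dump. The Observation record and method names are hypothetical, not JStall's internals.

```java
import java.util.List;
import java.util.Set;

// A thread is flagged when it appears in every dump, is always
// WAITING/TIMED_WAITING on the same lock instance, and has
// accumulated essentially no CPU time across the dumps.
public class StarvationCheck {
    public record Observation(String state, String lockId, double cpuSeconds) {}

    private static final Set<String> WAIT_STATES = Set.of("WAITING", "TIMED_WAITING");
    private static final double CPU_EPSILON = 0.0001; // "no progress" threshold in seconds

    public static boolean potentiallyStarving(List<Observation> perDump, int totalDumps) {
        if (perDump.size() != totalDumps || perDump.isEmpty()) {
            return false; // must be present in ALL dumps
        }
        String lock = perDump.get(0).lockId();
        double cpuProgress = perDump.get(perDump.size() - 1).cpuSeconds()
                           - perDump.get(0).cpuSeconds();
        return cpuProgress <= CPU_EPSILON
            && perDump.stream().allMatch(o ->
                   WAIT_STATES.contains(o.state()) && lock.equals(o.lockId()));
    }
}
```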
Shows thread dependencies by visualizing which threads wait on locks held by other threads.
Usage: jstall dependency-graph [-hV] [--keep] [--dumps=<dumps>]
[--interval=<interval>] [<targets>...]
Show thread dependencies (which threads wait on locks held by others)
[<targets>...] PID, filter or dump files
--dumps=<dumps> Number of dumps to collect, default is 2
-h, --help Show this help message and exit.
--interval=<interval>
Interval between dumps, default is 5s
--keep Persist dumps to disk
-V, --version Print version information and exit.
Features:
- Shows which threads wait on locks held by others
- Categorizes threads by activity (I/O, Network, Database, Computation, etc.)
- Detects dependency chains (A waits on B, B waits on C, etc.)
- Displays thread states and CPU times
- Uses the latest dump when multiple dumps are provided
Example Output:
Thread Dependency Graph
======================
[I/O Write] file-writer
→ [Network] netty-worker-1 (lock: <0xBBBB>)
Waiter state: BLOCKED, CPU: 2.10s
Owner state: BLOCKED, CPU: 5.20s
[Database] jdbc-connection-pool
→ [I/O Write] file-writer (lock: <0xAAAA>)
Waiter state: BLOCKED, CPU: 15.70s
Owner state: BLOCKED, CPU: 2.10s
Summary:
--------
Total waiting threads: 2
Total dependencies: 2
Dependency Chains Detected:
---------------------------
Chain: [Database] jdbc-connection-pool → [I/O Write] file-writer → [Network] netty-worker-1
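Chain detection of this kind can be sketched as follows; the names are hypothetical, not JStall's internals.

```java
import java.util.ArrayList;
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Map;

// Each waiting thread has at most one lock owner, so following the
// "waits on" edges from a thread yields a chain; the visited set stops
// the walk if a cycle (i.e. a deadlock) is encountered.
public class DependencyChains {
    public static List<String> chainFrom(String start, Map<String, String> waitsOn) {
        LinkedHashSet<String> chain = new LinkedHashSet<>();
        String current = start;
        while (current != null && chain.add(current)) {
            current = waitsOn.get(current); // thread owning the lock we wait on
        }
        return new ArrayList<>(chain);
    }

    public static void main(String[] args) {
        Map<String, String> waitsOn = Map.of(
            "jdbc-connection-pool", "file-writer",
            "file-writer", "netty-worker-1");
        // prints [jdbc-connection-pool, file-writer, netty-worker-1]
        System.out.println(chainFrom("jdbc-connection-pool", waitsOn));
    }
}
```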
AI-powered thread dump analysis using a Large Language Model (LLM). Combines status analysis with intelligent AI interpretation.
Usage: jstall ai [-hV] [--dry-run] [--intelligent-filter] [--keep] [--no-native]
[--raw] [--dumps=<dumps>] [--interval=<interval>]
[--model=<model>] [--question=<question>]
[--stack-depth=<stackDepth>] [--top=<top>] [<targets>...]
AI-powered thread dump analysis using LLM
[<targets>...] PID, filter or dump files
--dumps=<dumps> Number of dumps to collect, default is 2
--dry-run Perform a dry run without calling the AI API
-h, --help Show this help message and exit.
--intelligent-filter
Use intelligent stack trace filtering (collapses
internal frames, focuses on application code)
--interval=<interval>
Interval between dumps, default is 5s
--keep Persist dumps to disk
--model=<model> LLM model to use (default: gpt-50-nano)
--no-native Ignore threads without stack traces (typically
native/system threads)
--question=<question>
Custom question to ask (use '-' to read from stdin)
--raw Output raw JSON response
--stack-depth=<stackDepth>
Stack trace depth to show (default: 10, 0=all, in
intelligent mode: max relevant frames)
--top=<top> Number of top threads (default: 3)
-V, --version Print version information and exit.
Features:
- Runs comprehensive status analysis (deadlocks, hot threads, dependency graph)
- Sends analysis to LLM for intelligent interpretation
- Provides natural language insights and recommendations
- Supports custom questions about the thread dumps
- Intelligent filtering enabled by default
Setup:
Option 1: Local Models (Ollama)
Run AI analysis with local models for privacy and no API costs:
- Install Ollama
- Pull a model: ollama pull qwen3:30b
- Create .jstall-ai-config in your home directory or current directory:
provider=ollama
model=qwen3:30b
ollama.host=http://127.0.0.1:11434
Note: Ensure the Ollama server is running before using JStall with local models.
The model takes some time to start on first use; subsequent calls are faster. It therefore helps to keep the model loaded:
OLLAMA_KEEP_ALIVE=1000m0s OLLAMA_CONTEXT_LENGTH=32000 OLLAMA_MAX_LOADED_MODELS=1 OLLAMA_HOST=http://127.0.0.1:11434 ollama serve
And also prime the model with a dummy request:
ollama chat qwen3:30b --prompt "Hello"
The qwen3:30b model is recommended for best results, but others can be used as well.
It takes 18GB of RAM when loaded and runs reasonably fast on my MacBook Pro M4 48GB.
Maybe also gpt-oss:20b works.
Option 2: Gardener AI (Remote)
Use the Gardener AI API service:
- Create a .gaw file containing your API key in one of these locations:
  - Current directory: ./.gaw
  - Home directory: ~/.gaw
- Or set the environment variable ANSWERING_MACHINE_APIKEY
- Or configure in .jstall-ai-config:
provider=gardener
model=gpt-50-nano
api.key=your-api-key-here
Note: Ollama supports true token-by-token streaming and thinking mode (--thinking), while Gardener AI returns complete responses.
Examples:
# Basic AI analysis (uses config from .jstall-ai-config)
jstall ai 12345
# Use local Ollama (override config)
jstall ai --local 12345
# Use remote Gardener AI (override config)
jstall ai --remote 12345
# Basic AI analysis with short summary at the end
jstall ai 12345 --short
# Show thinking process (Ollama only - displays model's reasoning)
jstall ai 12345 --thinking
# Use local model with thinking mode
jstall ai --local --thinking 12345
# Ask a specific question
jstall ai 12345 --question "Why is my application slow?"
# Read question from stdin
echo "What's causing high memory usage?" | jstall ai 12345 --question -
# Dry run to see the prompt without API call
jstall ai 12345 --dry-run
# Use a different model (override config)
jstall ai 12345 --model qwen3:30b
Exit codes: 0 = success, 2 = API key not found, 3 = network error, 4 = authentication failed, 5 = API error
Analyzes all active JVMs on the system with AI-powered insights. Discovers running JVMs, analyzes those using CPU, and provides system-wide analysis.
Usage: jstall ai full [-hV] [--dry-run] [--intelligent-filter] [--no-native]
[--raw] [--cpu-threshold=<cpuThreshold>]
[-i=<interval>] [--model=<model>] [-n=<dumps>]
[--question=<question>] [--stack-depth=<stackDepth>]
[--top=<top>]
Analyze all JVMs on the system with AI
--cpu-threshold=<cpuThreshold>
CPU threshold percentage (default: 1.0%)
--dry-run Perform a dry run without calling the AI API
-h, --help Show this help message and exit.
-i, --interval=<interval>
Interval between dumps in seconds (default: 1)
--intelligent-filter
Enable intelligent stack filtering (default: true)
--model=<model> LLM model to use (default: gpt-50-nano)
-n, --dumps=<dumps> Number of dumps per JVM (default: 2)
--no-native Ignore threads without stack traces
--question=<question>
Custom question to ask (use '-' to read from stdin)
--raw Output raw JSON response
--stack-depth=<stackDepth>
Stack trace depth (default: 10, 0=all)
--top=<top> Number of top threads per JVM (default: 3)
-V, --version Print version information and exit.
How it works:
- Discovers all JVMs on the system
- Collects thread dumps from each JVM (in parallel)
- Filters JVMs by CPU usage (default: >1% of interval time)
- Runs status analysis on each active JVM
- Sends combined analysis to AI for system-wide insights
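The CPU-based filtering step can be sketched as follows, assuming a JVM counts as "active" when the CPU time it accumulated over the dump interval exceeds the threshold percentage. The JvmSample record is hypothetical, not JStall's actual API.

```java
import java.util.List;

// Filter JVMs by the share of the elapsed interval they spent on CPU,
// comparing it against a percentage threshold (default 1.0%).
public class CpuFilter {
    public record JvmSample(long pid, double cpuStartSec, double cpuEndSec) {}

    public static boolean isActive(JvmSample s, double intervalSec, double thresholdPercent) {
        double usedSec = s.cpuEndSec() - s.cpuStartSec();
        return usedSec / intervalSec * 100.0 > thresholdPercent;
    }

    public static List<JvmSample> activeJvms(List<JvmSample> samples, double intervalSec,
                                             double thresholdPercent) {
        return samples.stream()
                      .filter(s -> isActive(s, intervalSec, thresholdPercent))
                      .toList();
    }
}
```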
Output structure:
- High-level summary of overall system state
- Cross-JVM issues, bottlenecks, or patterns
- Individual analysis sections for each JVM
Examples:
# Analyze all active JVMs on the system
jstall ai full
# Lower CPU threshold to include more JVMs
jstall ai full --cpu-threshold 0.5
# Focus on specific concern
jstall ai full --question "Which JVMs have memory leak indicators?"
# Dry run to see what would be analyzed
jstall ai full --dry-run
# More comprehensive analysis with more dumps
jstall ai full -n 5 -i 2
Use cases:
- Production environment health check
- Microservices ecosystem analysis
- Identify system-wide bottlenecks
- Cross-service dependency issues
- Resource usage patterns across multiple JVMs
Exit codes: Same as ai command
You can record diagnostic data with
jstall record <all|pid> --output <recording.zip>
and replay it on any machine using the -f/--file option. For an example recording in this repository, see the folder: 6529/
Replay examples (use your recording ZIP file):
# Replay a recording and run the status analyzers
jstall -f <recording.zip> status
# Replay a recording and list threads
jstall -f <recording.zip> threads
When replaying, JStall will use the recorded data files instead of querying a live JVM. If a tool needs additional data that wasn't recorded, it will skip that analysis and continue with available information.
JStall automatically categorizes threads by their activity based on stack trace analysis:
Categories:
- Network Read/Write — Socket operations, accept calls
- Network — Selectors, polling, Netty, file system monitoring
- I/O Read/Write — File input/output operations
- I/O — General file I/O
- Database — JDBC and SQL operations
- External Process — Process handling, waiting on external processes
- Lock Wait — Threads waiting on locks/monitors
- Sleep — Thread.sleep() calls
- Park — LockSupport.park() calls
- Computation — Active computation
- Unknown — Unrecognized activity
Categories appear in most-work, threads, and dependency-graph command outputs.
Example:
1. netty-worker-1
CPU time: 10.50s (45.0% of total)
States: RUNNABLE: 100.0%
Activity: Network
Common stack prefix:
sun.nio.ch.KQueue.poll(Native Method)
...
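A categorization like this can be sketched as a prefix lookup on the topmost recognizable stack frame. The prefix table below is illustrative, not JStall's actual rule set.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Map the topmost stack frame to an activity category via
// package/class prefixes; fall back to "Computation".
public class ActivityCategorizer {
    private static final Map<String, String> PREFIXES = new LinkedHashMap<>();
    static {
        PREFIXES.put("sun.nio.ch.", "Network");            // selectors, polling
        PREFIXES.put("java.net.", "Network Read/Write");   // socket operations
        PREFIXES.put("java.io.", "I/O");
        PREFIXES.put("java.sql.", "Database");
        PREFIXES.put("java.lang.Thread.sleep", "Sleep");
        PREFIXES.put("java.util.concurrent.locks.LockSupport.park", "Park");
    }

    public static String categorize(String topFrame) {
        for (Map.Entry<String, String> e : PREFIXES.entrySet()) {
            if (topFrame.startsWith(e.getKey())) {
                return e.getValue();
            }
        }
        return "Computation"; // assume active work when nothing matches
    }

    public static void main(String[] args) {
        // prints Network
        System.out.println(categorize("sun.nio.ch.KQueue.poll"));
    }
}
```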
Use --intelligent-filter to automatically collapse framework internals and focus on application code and important operations.
Available on: most-work, waiting-threads, status
What it does:
- Collapses JDK internals, reflection, proxies, generated code
- Preserves application code
- Keeps important operations visible: I/O, Network, Database, Threading
- Respects --stack-depth for relevant frames (not total frames)
Example:
# Show top threads with clean stack traces
jstall most-work 12345 --intelligent-filter --stack-depth 15
# Analyze waiting threads with focused stack traces
jstall waiting-threads 12345 --intelligent-filter --stack-depth 10
Normal output:
Stack:
at com.example.MyController.handleRequest(MyController.java:42)
at jdk.internal.reflect.GeneratedMethodAccessor123.invoke(Unknown Source)
at jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:566)
at org.springframework.web.method.support.InvocableHandlerMethod.invoke(InvocableHandlerMethod.java:205)
... (15 more frames)
With --intelligent-filter:
Stack:
at com.example.MyController.handleRequest(MyController.java:42)
... (3 internal frames omitted)
at org.springframework.web.method.support.InvocableHandlerMethod.invokeForRequest(InvocableHandlerMethod.java:150)
at com.example.Service.processRequest(Service.java:78)
at java.sql.Connection.executeQuery(Connection.java:100)
at com.example.Repository.findUser(Repository.java:45)
Checks whether any processes on the system are consuming a high amount of CPU. This helps identify interfering processes, e.g. a virus scanner, that use more than 20% of the available CPU time. It also reports if processes other than your own are consuming more than 40% of CPU time. In either case, it lists all processes with CPU usage above 1%.
Usage: jstall processes [-hV] [--cpu-threshold=<cpuThreshold>]
[--own-process-cpu-threshold=<ownProcessCpuThreshold>]
Generates a flamegraph using async-profiler.
Usage: jstall flame [-hV] [--output=<outputFile>] [--duration=<duration>]
[--event=<event>] [--interval=<interval>] [--open] [<target>]
Generate a flamegraph of the application using async-profiler
[<target>] PID or filter (filters JVMs by main class name)
-d, --duration=<duration> Profiling duration (default: 10s)
-e, --event=<event> Profiling event (default: cpu). Options: cpu,
alloc, lock, wall, itimer
-h, --help Show this help message and exit.
-i, --interval=<interval> Sampling interval (default: 10ms)
-o, --output=<outputFile> Output HTML file (default: flame.html)
--open Automatically open the generated HTML file in
browser
-V, --version Print version information and exit.
Note: Filter must match exactly one JVM. Uses async-profiler.
mvn clean package
bin/sync-documentation.py is used to synchronize the CLI help messages into this README.
release.sh is a helper script to create new releases.
Extend this tool by adding new analyzers: implement an analysis, create a new command, and add it to the main CLI class (optionally also adding the analysis to the status command). Please also update the README accordingly.
This project is open to feature requests/suggestions, bug reports etc. via GitHub issues. Contribution and feedback are encouraged and always welcome.
MIT, Copyright 2025 SAP SE or an SAP affiliate company, Johannes Bechberger and contributors