Stability Analysis Agent: AI-assisted crash log analysis toolchain (CLI, daemon, analyzers, agent)
Stability Analysis Agent
An AI Agent for App Stability — from crash log to root cause in one step
Crash · ANR · OOM · Freeze analysis | addr2line / atos symbolizer | LangGraph AI Agent | RAG knowledge base
English | 简体中文
Stability Analysis Agent is an open-source AI Agent purpose-built for app stability analysis — covering crashes, ANR (Application Not Responding), OOM (Out of Memory), freezes / watchdog kills, and more. Feed it a stability log, and it will parse, symbolize, extract code, reason about the root cause, and generate fix suggestions — automatically. Supports iOS, Android, macOS, Linux, and Windows with built-in addr2line / atos integration, LangGraph multi-turn reasoning, and a RAG knowledge base (ChromaDB).
Why not just paste the log into an AI coding tool?
General-purpose AI coding tools (Cursor, Copilot, Claude Code, etc.) can read a crash log, but they hit hard limits on stability analysis:
- Raw addresses are meaningless — AI tools cannot run `addr2line`/`atos`; they see `0x1a2b3c` instead of `MyClass::process() at main.cpp:42`.
- Stability logs are noisy — hundreds of system frames drown the real signal; without structured parsing, the LLM wastes tokens on irrelevant context.
- No domain memory — every conversation starts from scratch; learned patterns (crash signatures, ANR deadlock traces, OOM heuristics) are lost.
This Agent solves all three:
| Capability | AI Coding Tool | Stability Analysis Agent |
|---|---|---|
| Address symbolization | Cannot run native tools | Built-in addr2line / atos integration |
| Log parsing | Sees raw text, high noise | Structured parser extracts signal, threads, key frames; classifies crash / ANR / OOM / freeze |
| Knowledge accumulation | Stateless, starts from zero | RAG: rule table + vector DB, patterns improve over time |
| Workflow | Single-prompt, one-shot | Multi-step Agent with conditional multi-turn reasoning |
| Extensibility | Prompt-only | Tool + Workflow plugin system, config-driven |
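To make the symbolization gap concrete, here is a minimal, self-contained sketch of what address resolution does — not the project's actual resolver. Given a table of symbol start addresses (which `addr2line`/`atos` derive from real debug info), a raw frame address resolves to the nearest preceding symbol; the symbol table below is hypothetical:

```python
import bisect

# Hypothetical symbol table: (start_address, symbol) pairs, sorted by address.
# A real resolver reads this from DWARF debug info via addr2line/atos.
SYMBOLS = [
    (0x1A0000, "AppDelegate::init()"),
    (0x1A2A00, "MyClass::process()"),
    (0x1A3000, "MyClass::teardown()"),
]

def symbolize(addr: int) -> str:
    """Map a raw frame address to the nearest preceding symbol."""
    starts = [s for s, _ in SYMBOLS]
    i = bisect.bisect_right(starts, addr) - 1
    if i < 0:
        return hex(addr)  # address precedes all known symbols
    start, name = SYMBOLS[i]
    return f"{name} + {addr - start:#x}"

print(symbolize(0x1A2B3C))  # falls inside MyClass::process()
```

An LLM without tool access only ever sees the left-hand side of this mapping, which is why symbolization has to happen before reasoning.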
Agent Engine
Three execution modes to fit different needs:
| Mode | Engine | Best for |
|---|---|---|
| Direct | One-shot prompt assembly | Fast, simple, no framework dependency |
| LangChain | LangChain Agent | Flexible tool calling with chain-of-thought |
| LangGraph | LangGraph state machine | Multi-turn reasoning, the Agent can request more context and re-invoke tools |
Select via --engine direct|langchain|langgraph. All modes share the same tool chain and RAG knowledge base.
No LLM API key required to run the core toolchain (parsing + symbolization + code extraction). Plug in any OpenAI-compatible model (GPT, DeepSeek, ERNIE, GLM, etc.) when you're ready for AI analysis.
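Conceptually, `--engine` selects one of three interchangeable backends behind a common interface. A minimal dispatch sketch — the function names and registry here are illustrative, not the actual implementation:

```python
# Hypothetical engine registry keyed by the --engine flag value.
def run_direct(task):    return f"direct:{task}"     # one-shot prompt assembly
def run_langchain(task): return f"langchain:{task}"  # LangChain agent loop
def run_langgraph(task): return f"langgraph:{task}"  # LangGraph state machine

ENGINES = {"direct": run_direct, "langchain": run_langchain, "langgraph": run_langgraph}

def analyze(task: str, engine: str = "direct") -> str:
    try:
        return ENGINES[engine](task)
    except KeyError:
        raise ValueError(f"unknown engine: {engine!r}") from None

print(analyze("crash.log", engine="langgraph"))
```

Because all three backends sit behind one entry point, the tool chain and RAG knowledge base can stay engine-agnostic.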
Key Features
| Feature | Description |
|---|---|
| Multi-Step AI Agent | LangGraph / LangChain / Direct — multi-turn reasoning with conditional branching |
| Address Symbolization | Resolves raw addresses to function names & line numbers via addr2line / atos |
| Structured Log Parsing | Auto-detects iOS / Android / macOS / Linux / Windows; classifies crash, ANR, OOM, freeze; extracts signal, threads, key frames |
| Source Code Context | Extracts code snippets around crash points |
| RAG Knowledge Base | Rule table (fast path) + vector retrieval (ChromaDB) with feedback loop |
| Tool + Workflow System | Pluggable architecture — register custom tools and workflows via config or decorators |
| Multiple Interfaces | CLI, HTTP Daemon (streaming / SSE), Python API |
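As an illustration of the classification step, a structured parser can bucket a log into crash / ANR / OOM / freeze with signal and keyword heuristics before any LLM call. This is a deliberately simplified sketch — the real parser is platform-aware and far richer:

```python
import re

# Simplified heuristics; patterns are illustrative, not the project's rule set.
RULES = [
    ("anr",    re.compile(r"ANR in|Input dispatching timed out", re.I)),
    ("oom",    re.compile(r"OutOfMemory|OOM-killer|memory pressure", re.I)),
    ("freeze", re.compile(r"watchdog|0x8badf00d", re.I)),
    ("crash",  re.compile(r"SIG(SEGV|ABRT|BUS|ILL|FPE)|EXC_BAD_ACCESS")),
]

def classify(log_text: str) -> str:
    """Return the first matching stability category, or 'unknown'."""
    for label, pattern in RULES:
        if pattern.search(log_text):
            return label
    return "unknown"

print(classify("Exception Type: EXC_BAD_ACCESS (SIGSEGV)"))
```

Cheap pre-classification like this is what lets the Agent pick the right workflow and prompt before spending tokens.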
Architecture
```text
┌──────────┐   ┌──────────┐   ┌──────────┐
│   CLI    │   │  Daemon  │   │  Python  │
│          │   │  (HTTP)  │   │   API    │
└────┬─────┘   └────┬─────┘   └────┬─────┘
     │              │              │
     └──────────────┼──────────────┘
                    │
          ┌─────────▼─────────┐
          │  Tool + Workflow  │
          │   (tool_system)   │
          └─────────┬─────────┘
                    │
    ┌───────────────┼───────────────┐
    │               │               │
    ▼               ▼               ▼
┌────────────┐ ┌────────────┐ ┌────────────┐
│ Crash Log  │ │  Address   │ │    Code    │
│   Parser   │ │ Symbolizer │ │  Provider  │
└────────────┘ └────────────┘ └────────────┘
                    │
          ┌─────────▼─────────┐
          │     AI Agent      │
          │  ┌─────────────┐  │
          │  │  LangGraph  │  │
          │  │    State    │  │
          │  │   Machine   │  │
          │  └──────┬──────┘  │
          │         │         │
          │    ┌────▼────┐    │
          │    │   RAG   │    │
          │    │ Rules + │    │
          │    │ Vectors │    │
          │    └────┬────┘    │
          │         │         │
          │    ┌────▼────┐    │
          │    │   LLM   │    │
          │    └─────────┘    │
          └───────────────────┘
```
Agent Pipeline:
```text
Crash Log → Parse → Symbolize → Extract Code
                         ↓
              RAG (rules + vectors)
                         ↓
     LLM Reasoning ←──→ Request More Context (multi-turn)
                         ↓
                    Fix Report
```
For detailed architecture diagrams, see docs/architecture.
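The pipeline above can be read as a chain of stages that each enrich a shared context. The sketch below is a conceptual mock with stub stages — it mirrors the diagram's flow, not the actual `tool_system` workflow code:

```python
# Conceptual sketch: each stage enriches a shared context dict.
# Stage names mirror the pipeline diagram; bodies are stubs, not real tools.
def parse(ctx):        ctx["frames"] = ["0x1a2b3c"]; return ctx
def symbolize(ctx):    ctx["frames"] = ["MyClass::process() at main.cpp:42"]; return ctx
def extract_code(ctx): ctx["snippet"] = "// code around main.cpp:42"; return ctx
def rag_lookup(ctx):   ctx["hints"] = ["null-deref pattern"]; return ctx
def llm_reason(ctx):   ctx["report"] = f"Root cause near {ctx['frames'][0]}"; return ctx

PIPELINE = [parse, symbolize, extract_code, rag_lookup, llm_reason]

def run(crash_log: str) -> dict:
    ctx = {"crash_log": crash_log}
    for stage in PIPELINE:
        ctx = stage(ctx)
    return ctx

print(run("...")["report"])
```

In the real Agent the last stage can loop back (the "Request More Context" arrow) and re-invoke earlier tools; a linear list like this is the degenerate single-pass case.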
Quick Start
Prerequisites
- Binary usage: no Python runtime required
- Source usage: Python 3.9+
- (Optional) `atos` (macOS, built-in) or `addr2line` (Linux, via binutils) for symbolization
1. Install via PyPI (Recommended)
```shell
# Install (for Mainland China, add -i https://pypi.tuna.tsinghua.edu.cn/simple)
pip install stability-analysis-agent

# Verify installation
sa-agent --help

# Initialize local config (interactive wizard for LLM keys, addr2line/atos paths, etc.)
sa-agent config init

# Check config completeness
sa-agent config doctor
```
Config files are saved in `~/.config/stability-analysis-agent/`:

- `agent_config.local.json` — LLM provider / API key / model
- `add2line_resolver_config.local.json` — addr2line / atos tool paths

Even without config initialization, you can run the full non-AI toolchain with `--skip-ai`. The PyPI package includes full runtime dependencies (vector DB, tree-sitter, and LangGraph chain).
Upgrade with:
```shell
pip install -U stability-analysis-agent
```
2. Use Prebuilt CLI Binary (No Python Required)
Download the latest binary from GitHub Releases, then run:
```shell
# Example for v1.1.1 macOS arm64 package
unzip StabilityAnalyzer-v1.1.1-mac-arm64.zip
cd output/cli_release/stability_analyzer_cli/v1.1.1-mac-arm64
chmod +x StabilityAnalyzer

# If macOS Gatekeeper blocks launch (unsigned binary)
xattr -d com.apple.quarantine StabilityAnalyzer
./StabilityAnalyzer --help

# Optional: install a stable command name into ~/.local/bin (also ships in release zips)
chmod +x install.sh
./install.sh
# then: sa-agent --help
```
3. Developer Setup (from Source)
```shell
git clone https://github.com/baidu-maps/stability-analysis-agent.git
cd stability-analysis-agent
pip install -e .
```
`pip install -e .` is intended for development workflows. It also exposes the `sa-agent` command locally.
4. Run the Built-in Demo (No API Key Needed)
After installing via PyPI (pip install stability-analysis-agent) or from source (pip install -e .), clone the repo to get the bundled demo cases, then run:
```shell
sa-agent \
  --crash-log examples/crash_cases/demo_basic/logs/mac/NullPtr_SIGSEGV_2026-04-08_10-43-08.crash \
  --library-dir examples/crash_cases/demo_basic/lib/mac \
  --code-root examples/crash_cases/demo_basic/code_dir \
  --skip-ai
```
Output is saved to ./cli_reports/<timestamp>/ (under your current working directory) with structured JSON reports.
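Since reports land under `./cli_reports/<timestamp>/`, picking up the most recent run programmatically is straightforward. A small sketch — the timestamp-directory layout comes from the docs above, but the per-file contents are an assumption here:

```python
import json
from pathlib import Path

def latest_report_dir(base: str = "./cli_reports") -> Path:
    """Return the most recent <timestamp> directory under the report base."""
    dirs = sorted(p for p in Path(base).iterdir() if p.is_dir())
    if not dirs:
        raise FileNotFoundError(f"no report runs under {base}")
    return dirs[-1]  # timestamp-named dirs sort chronologically

def load_reports(run_dir: Path) -> dict:
    """Load every JSON file in the run directory, keyed by filename."""
    return {p.name: json.loads(p.read_text()) for p in run_dir.glob("*.json")}
```

This pattern is handy for wiring the CLI into CI: run with `--skip-ai`, then assert on fields of the structured JSON output.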
5. Analyze Your Own Crash Log
```shell
sa-agent \
  --crash-log <your-crash-log> \
  --library-dir <path-to-libs-and-symbols> \
  --code-root <path-to-source-code>
```
Add `--skip-ai` to skip AI analysis, or `--parse-only` to only parse + symbolize.
CLI Options
| Flag | Required | Description |
|---|---|---|
| `--crash-log` | Yes | Path to the crash log file |
| `--library-dir` | Yes* | Directory with libraries (.dylib/.so) and debug symbols (.dSYM) |
| `--code-root` | No | Source code root for extracting code context |
| `--skip-ai` | No | Skip AI — run toolchain only (parser + resolver + code provider) |
| `--parse-only` | No | Parse + symbolize only (no --code-root needed) |
| `--parse-log-only` | No | Parse crash log only (no --library-dir needed) |
| `--daemon <url>` | No | Delegate to a running daemon instance |
* Not required when using --parse-log-only.
Daemon Mode
The daemon provides streaming output (SSE), process reuse (no cold start), and task cancellation — ideal for IDE integration and high-frequency analysis:
```shell
# Start the daemon
sa-agent --daemon-server --host 127.0.0.1 --port 8765

# Analyze via daemon
sa-agent --daemon http://127.0.0.1:8765 \
  --crash-log <crash-log> --library-dir <lib-dir> --code-root <code-root>
```
See Daemon Server Guide for the full HTTP API reference.
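The daemon streams results as SSE, and consuming such a stream reduces to collecting `data:` lines until a blank-line event boundary. A transport-agnostic sketch of that framing — the payload contents are assumptions, not the daemon's documented API (see the Daemon Server Guide for that):

```python
def iter_sse_events(lines):
    """Yield concatenated `data:` payloads, one per blank-line-delimited SSE event."""
    buf = []
    for line in lines:
        line = line.rstrip("\n")
        if line.startswith("data:"):
            buf.append(line[5:].lstrip())
        elif line == "" and buf:
            yield "\n".join(buf)  # blank line terminates the event
            buf = []
    if buf:  # flush a trailing event with no final blank line
        yield "\n".join(buf)

stream = ["data: parsing\n", "\n", "data: symbolizing\n", "\n"]
print(list(iter_sse_events(stream)))
```

In practice you would feed this generator the line iterator of a streaming HTTP response instead of a hard-coded list.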
Python API
```python
from tool_system import (
    ToolAndWorkflowRegistry, SystemConfig, WorkflowConfig,
    ConfigDrivenExecutor, register_all_tools_and_workflows
)

registry = ToolAndWorkflowRegistry()
register_all_tools_and_workflows(registry)

config = SystemConfig(
    workflows=[WorkflowConfig(name="crash_analysis", enabled=True)]
)

executor = ConfigDrivenExecutor(registry, config, llm_adapter=None)
result = executor.execute_workflow("crash_analysis", {
    "crash_log": open("crash.crash").read(),
    "library_dir": "./lib",
    "code_root": "./code",
})
print(result)
```
LLM and Tool Configuration
AI analysis is optional. You can still run the full non-AI toolchain with `--skip-ai` without any configuration.
For AI analysis and addr2line/atos customization after PyPI install, use:
```shell
sa-agent config init
sa-agent config path
sa-agent config doctor
```
Default local config directory: `~/.config/stability-analysis-agent/`

- `agent_config.local.json` for LLM provider/key/model
- `add2line_resolver_config.local.json` for addr2line/atos tool paths

If you choose manual editing in `config init`, edit these files directly in that directory.
Advanced: Environment overrides
You can still override config file locations via environment variables:
```shell
export STABILITY_AGENT_CONFIG_FILE="/abs/path/agent_config.local.json"
export STABILITY_AGENT_ADD2LINE_CONFIG_FILE="/abs/path/add2line_resolver_config.local.json"
```
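The override semantics are simple: if the environment variable is set it wins, otherwise the default local config path is used. A sketch of that resolution (the env var name and default directory come from this section; the helper itself is illustrative):

```python
import os
from pathlib import Path

# Default path as documented above; the env var overrides it when set.
DEFAULT_CONFIG = Path.home() / ".config/stability-analysis-agent/agent_config.local.json"

def resolve_config_path() -> Path:
    """Env override wins; otherwise fall back to the default local config path."""
    override = os.environ.get("STABILITY_AGENT_CONFIG_FILE")
    return Path(override) if override else DEFAULT_CONFIG
```

The same pattern applies to `STABILITY_AGENT_ADD2LINE_CONFIG_FILE` and the resolver config.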
Project Structure
```text
stability-analysis-agent/
├── agent/            # AI Agent engine (LangGraph state machine)
├── cli/              # CLI entry point
├── daemon/           # HTTP daemon (streaming, SSE)
├── tools/            # Tool implementations (parser, resolver, code provider)
│   └── configs/      # Configuration templates
├── tool_system/      # Tool + Workflow registration & dispatch framework
├── workflows/        # Workflow definitions (crash analysis)
├── rag/              # RAG: rule store + vector index (ChromaDB) + metadata
├── prompts/          # Prompt templates for LLM analysis
├── protocol/         # Unified request/response protocol
├── examples/         # Bundled crash cases
│   └── crash_cases/
│       ├── demo_basic/       # NullPtr, DivZero, Abort, DoubleFree, etc.
│       └── demo_multithread/ # Race condition, deadlock, atomic failure, etc.
├── test/             # Test suite
└── docs/             # Documentation
```
Documentation
| Topic | Link |
|---|---|
| CLI Guide | docs/cli/CLI_GUIDE.md |
| CLI Commands Reference | docs/cli/CLI_COMMANDS_REFERENCE.md |
| Daemon Server Guide | docs/cli/DAEMON_SERVER_GUIDE.md |
| PyPI Release Scripts | docs/scripts/PYPI_RELEASE_SCRIPTS.md |
| System Architecture | docs/architecture/README.md |
| Architecture Diagram | docs/architecture/ARCHITECTURE_DIAGRAM.md |
| Tool System Overview | docs/tools/tool_system/TOOL_SYSTEM_OVERVIEW.md |
| Tool Extension Guide | docs/tools/tool_system/TOOL_SYSTEM_EXTENSION.md |
| Workflow System | docs/workflows/WORKFLOWS.md |
| RAG Vector Database | docs/rag/README.md |
| Crash Demos | docs/crash_demos/README.md |
Testing
```shell
# Regression tests
python3 test/tool_system/test_regression.py

# LLM connection test
python3 test/llm/test_llm_connection.py --provider openai

# Code content provider test
python3 test/agent_py_tool/test_code_content_provider.py

# Vector database test
python3 test/agent_py_tool/test_vector_db.py
```
FAQ
Q: Symbolization failed?
Ensure --library-dir contains the binary files (.dylib / .so) along with their debug symbols (.dSYM directories or DWARF info).
Q: LLM call failed?
Verify your API key is set correctly. Quick check: `python3 test/llm/test_llm_connection.py --provider openai`
Q: Code context extraction returns empty?
Ensure --code-root points to the source directory that contains the files listed in the symbolized stack trace.
Q: Can I use this without an LLM key?
Yes. Use --skip-ai to run the full toolchain (parse + symbolize + extract code). The structured JSON output is useful on its own for triage and debugging.
Contributing
Contributions are welcome! Please read CONTRIBUTING.md before submitting a PR.
```shell
# All commits require DCO sign-off
git commit -s -m "feat: describe your change"
```
License
Contact
| Channel | Link |
|---|---|
| GitHub Issues | Report a bug or request a feature |
| Email | hong9988.dev@gmail.com |
Maintainer:
| Name | GitHub | Email |
|---|---|---|
| liuhong | @liuhong996 | hong9988.dev@gmail.com |
If this project helps you, please consider giving it a Star!
File details
Details for the file stability_analysis_agent-1.1.1.tar.gz.
File metadata
- Download URL: stability_analysis_agent-1.1.1.tar.gz
- Upload date:
- Size: 792.4 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.9.6
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | 1a676ae7a9a6b2fcce5a71dca78e299a45c83cfe83b6a6db26057c21d4e285aa |
| MD5 | f1217313b62e79a0c03d3788598e42b6 |
| BLAKE2b-256 | 6d939e3e8db2b8b8d069b459fc71beae0cd86d453900755fcac336cee1c6c202 |
File details
Details for the file stability_analysis_agent-1.1.1-py3-none-any.whl.
File metadata
- Download URL: stability_analysis_agent-1.1.1-py3-none-any.whl
- Upload date:
- Size: 254.1 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.9.6
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | 989784099a3629145ad593346d9f6280862068c38a30416f8c25fa14973f4ecc |
| MD5 | 4598c5fc00dc6a964787af5d9d02ba2e |
| BLAKE2b-256 | 722ce89af48c50c0429c3e9fae039c085669670396ed27316d7ec1dd0589553e |