
Stability Analysis Agent: AI-assisted crash log analysis toolchain (CLI, daemon, analyzers, agent)


Stability Analysis Agent

An AI Agent for App Stability — from crash log to root cause in one step


English | 简体中文


Stability Analysis Agent is an AI Agent purpose-built for app crash analysis. Feed it a crash log, and it will parse, symbolize, extract code, reason about the root cause, and generate fix suggestions — automatically.

Why not just paste the crash log into an AI coding tool?

General-purpose AI coding tools (Cursor, Copilot, Claude Code, etc.) can read a crash log, but they hit hard limits on stability analysis:

  • Raw addresses are meaningless — AI tools cannot run addr2line / atos; they see 0x1a2b3c instead of MyClass::process() at main.cpp:42.
  • Crash logs are noisy — hundreds of system frames drown the real signal; without structured parsing, the LLM wastes tokens on irrelevant context.
  • No domain memory — every conversation starts from scratch; learned crash patterns are lost.

This Agent solves all three:

| Capability | AI Coding Tool | Stability Analysis Agent |
| --- | --- | --- |
| Address symbolization | Cannot run native tools | Built-in addr2line / atos integration |
| Log parsing | Sees raw text, high noise | Structured parser extracts signal, threads, key frames |
| Knowledge accumulation | Stateless, starts from zero | RAG: rule table + vector DB, patterns improve over time |
| Workflow | Single-prompt, one-shot | Multi-step Agent with conditional multi-turn reasoning |
| Extensibility | Prompt-only | Tool + Skill plugin system, config-driven |
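Structured parsing is what cuts the noise: keep the app's own frames, drop the system frames before anything reaches the LLM. A minimal self-contained sketch of that filtering step (the frame tuples and the `SYSTEM_MODULES` set are illustrative assumptions, not the project's actual parser):

```python
# Sketch of key-frame extraction from a symbolized stack.
# The frame format and SYSTEM_MODULES are illustrative assumptions.
SYSTEM_MODULES = {"libsystem_kernel.dylib", "libc.so", "CoreFoundation"}

def key_frames(frames, limit=5):
    """Return up to `limit` frames that belong to app code, in stack order."""
    app = [(mod, sym) for mod, sym in frames if mod not in SYSTEM_MODULES]
    return app[:limit]

stack = [
    ("libsystem_kernel.dylib", "__pthread_kill"),
    ("libc.so", "abort"),
    ("libmyapp.dylib", "MyClass::process()"),
    ("libmyapp.dylib", "main"),
]
print(key_frames(stack))  # only the two libmyapp frames survive
```

Even this crude filter shows why a structured pass matters: the LLM's context window is spent on the two frames that can explain the crash, not on hundreds of runtime frames.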

Agent Engine

Three execution modes to fit different needs:

| Mode | Engine | Best for |
| --- | --- | --- |
| Direct | One-shot prompt assembly | Fast, simple, no framework dependency |
| LangChain | LangChain Agent | Flexible tool calling with chain-of-thought |
| LangGraph | LangGraph state machine | Multi-turn reasoning; the Agent can request more context and re-invoke tools |

Select via --engine direct|langchain|langgraph. All modes share the same tool chain and RAG knowledge base.
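Conceptually, the engine switch is a dispatch over three implementations that all receive the same tool chain. A toy sketch of that pattern (the function and dictionary names are illustrative, not the project's actual classes):

```python
# Toy sketch of --engine dispatch: three engines, one shared tool chain.
# ENGINES and run_analysis are illustrative names, not the project's API.
def direct_engine(tools, log):
    return "direct:" + tools["parse"](log)

def langchain_engine(tools, log):
    return "langchain:" + tools["parse"](log)

def langgraph_engine(tools, log):
    return "langgraph:" + tools["parse"](log)

ENGINES = {
    "direct": direct_engine,
    "langchain": langchain_engine,
    "langgraph": langgraph_engine,
}

def run_analysis(engine_name, log):
    # The same tool chain is handed to whichever engine was selected.
    shared_tools = {"parse": lambda text: text.splitlines()[0]}
    return ENGINES[engine_name](shared_tools, log)

print(run_analysis("direct", "SIGSEGV at 0x0\nframe 0 ..."))
```

The practical consequence is that switching `--engine` changes only the reasoning strategy; parsing, symbolization, and RAG behave identically in all three modes.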

No LLM API key required to run the core toolchain (parsing + symbolization + code extraction). Plug in any OpenAI-compatible model (GPT, DeepSeek, ERNIE, GLM, etc.) when you're ready for AI analysis.

Key Features

| Feature | Description |
| --- | --- |
| Multi-Step AI Agent | LangGraph / LangChain / Direct; multi-turn reasoning with conditional branching |
| Address Symbolization | Resolves raw addresses to function names & line numbers via addr2line / atos |
| Structured Crash Parsing | Auto-detects iOS / Android / macOS / Linux / Windows; extracts signal, threads, key frames |
| Source Code Context | Extracts code snippets around crash points |
| RAG Knowledge Base | Rule table (fast path) + vector retrieval (ChromaDB) with feedback loop |
| Tool + Skill System | Pluggable architecture; register custom tools and skills via config or decorators |
| Multiple Interfaces | CLI, HTTP Daemon (streaming / SSE), Python API |
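The "rule table (fast path) + vector retrieval" split means an exact rule hit short-circuits the slower similarity search. A minimal self-contained sketch of that lookup order (the rule keys, bag-of-words "embedding", and in-memory store are illustrative assumptions; the project uses ChromaDB on the vector side):

```python
# Sketch: exact rule-table hit first, similarity fallback second.
# RULES, KNOWN_CASES, and the toy embedding are illustrative only.
RULES = {
    "SIGSEGV:MyClass::process": "Null pointer dereference; validate inputs.",
}

KNOWN_CASES = [
    ("SIGABRT double free in cache layer",
     "Double free; audit ownership of cached buffers."),
]

def embed(text):
    return set(text.lower().split())  # stand-in for a real embedding

def similarity(a, b):
    return len(a & b) / max(len(a | b), 1)  # Jaccard overlap

def retrieve(signal, top_frame, description):
    key = f"{signal}:{top_frame}"
    if key in RULES:                        # fast path: exact rule hit
        return RULES[key]
    q = embed(description)                  # slow path: nearest known case
    best = max(KNOWN_CASES, key=lambda c: similarity(q, embed(c[0])))
    return best[1]

print(retrieve("SIGSEGV", "MyClass::process", "segfault in process"))
```

The feedback loop mentioned above would then append confirmed diagnoses to the rule table, so future identical crashes never touch the vector store at all.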

Architecture

                  ┌──────────┐   ┌──────────┐   ┌──────────┐
                  │   CLI    │   │  Daemon  │   │  Python  │
                  │          │   │  (HTTP)  │   │   API    │
                  └────┬─────┘   └────┬─────┘   └────┬─────┘
                       │              │              │
                       └──────────────┼──────────────┘
                                      │
                            ┌─────────▼─────────┐
                            │   Tool + Skill    │
                            │   (tool_system)   │
                            └─────────┬─────────┘
                                      │
          ┌───────────────────────────┼───────────────────────────┐
          │                           │                           │
          ▼                           ▼                           ▼
   ┌────────────┐            ┌────────────┐            ┌────────────┐
   │  Crash Log │            │  Address   │            │    Code    │
   │   Parser   │            │ Symbolizer │            │  Provider  │
   └────────────┘            └────────────┘            └────────────┘
                                      │
                            ┌─────────▼─────────┐
                            │    AI Agent       │
                            │  ┌─────────────┐  │
                            │  │  LangGraph  │  │
                            │  │  State      │  │
                            │  │  Machine    │  │
                            │  └──────┬──────┘  │
                            │         │         │
                            │    ┌────▼────┐    │
                            │    │   RAG   │    │
                            │    │ Rules + │    │
                            │    │ Vectors │    │
                            │    └────┬────┘    │
                            │         │         │
                            │    ┌────▼────┐    │
                            │    │   LLM   │    │
                            │    └─────────┘    │
                            └───────────────────┘

Agent Pipeline:

Crash Log → Parse → Symbolize → Extract Code
                                      ↓
                              RAG (rules + vectors)
                                      ↓
                                LLM Reasoning ←──→ Request More Context (multi-turn)
                                      ↓
                                 Fix Report
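The "Request More Context" arrow above is a bounded conditional loop: each turn the LLM either emits a report or names the extra context it needs, and the tools are re-invoked. A toy sketch of that control flow (the `fake_llm` and `fetch_context` functions stand in for the real LLM and tool calls and are assumptions for illustration):

```python
# Toy sketch of the multi-turn loop: reason, optionally fetch more
# context, repeat. fake_llm and fetch_context are illustrative stand-ins.
def fake_llm(context):
    if "MyClass.cpp" not in context:
        return {"need": "MyClass.cpp"}   # model asks for more context
    return {"report": "Null deref in MyClass::process; add a guard."}

def fetch_context(name):
    return f"// contents of {name}"      # stand-in for a tool invocation

def analyze(initial_context, max_turns=3):
    context = initial_context
    for _ in range(max_turns):           # bounded, so the loop always ends
        step = fake_llm(context)
        if "report" in step:
            return step["report"]
        context += "\n" + step["need"] + "\n" + fetch_context(step["need"])
    return "inconclusive after max turns"

print(analyze("stack: MyClass::process"))
```

The turn cap is the important design choice: without it, a model that keeps asking for context would loop forever; with it, the worst case degrades to a clearly labeled inconclusive result.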

For detailed architecture diagrams, see docs/architecture.

Quick Start

Prerequisites

  • Binary usage: no Python runtime required
  • Source usage: Python 3.9+
  • (Optional) atos (macOS, built-in) or addr2line (Linux, via binutils) for symbolization

1. Use Prebuilt CLI Binary (Recommended for End Users)

Download the latest binary from GitHub Releases, then run:

# Example for v1.0.0 macOS arm64 package
unzip StabilityAnalyzer-v1.0.0-mac-arm64.zip
cd output/cli_release/stability_analyzer_cli/v1.0.0-mac-arm64

chmod +x StabilityAnalyzer

# If macOS Gatekeeper blocks launch (unsigned binary)
xattr -d com.apple.quarantine StabilityAnalyzer

./StabilityAnalyzer --help

# Optional: install a stable command name into ~/.local/bin (also ships in release zips)
chmod +x install.sh
./install.sh
# then: sa-agent --help

2. Install via PyPI (Recommended for Python Users)

pip install stability-analysis-agent
sa-agent --help

Initialize local config (recommended):

sa-agent config init
sa-agent config doctor

The PyPI package includes full runtime dependencies (vector DB, tree-sitter, and LangGraph chain).

For users in Mainland China, if the default PyPI index is slow, install with a mirror:

pip install -i https://pypi.tuna.tsinghua.edu.cn/simple stability-analysis-agent

Optional (persist pip mirror globally):

pip config set global.index-url https://pypi.tuna.tsinghua.edu.cn/simple

Upgrade with: pip install -U stability-analysis-agent

3. Developer Setup (from Source)

git clone https://github.com/baidu-maps/stability-analysis-agent.git
cd stability-analysis-agent
pip install -e .

pip install -e . is intended for development workflows. It also exposes the sa-agent command locally.

4. Run the Built-in Demo (No API Key Needed)

After installing via PyPI (pip install stability-analysis-agent) or from source (pip install -e .), clone the repo to get the bundled demo cases, then run:

sa-agent \
  --crash-log examples/crash_cases/demo_basic/logs/mac/NullPtr_SIGSEGV_2026-04-08_10-43-08.crash \
  --library-dir examples/crash_cases/demo_basic/lib/mac \
  --code-root examples/crash_cases/demo_basic/code_dir \
  --skip-ai

Output is saved to ./cli_reports/<timestamp>/ (under your current working directory) with structured JSON reports.

5. Analyze Your Own Crash Log

sa-agent \
  --crash-log <your-crash-log> \
  --library-dir <path-to-libs-and-symbols> \
  --code-root <path-to-source-code>

Add --skip-ai to skip AI analysis, or --parse-only to only parse + symbolize.

CLI Options

| Flag | Required | Description |
| --- | --- | --- |
| --crash-log | Yes | Path to the crash log file |
| --library-dir | Yes* | Directory with libraries (.dylib/.so) and debug symbols (.dSYM) |
| --code-root | No | Source code root for extracting code context |
| --skip-ai | No | Skip AI; run toolchain only (parser + resolver + code provider) |
| --parse-only | No | Parse + symbolize only (no --code-root needed) |
| --parse-log-only | No | Parse crash log only (no --library-dir needed) |
| --daemon <url> | No | Delegate to a running daemon instance |

* Not required when using --parse-log-only.

Daemon Mode

The daemon provides streaming output (SSE), process reuse (no cold start), and task cancellation — ideal for IDE integration and high-frequency analysis:

# Start the daemon
sa-agent --daemon-server --host 127.0.0.1 --port 8765

# Analyze via daemon
sa-agent --daemon http://127.0.0.1:8765 \
  --crash-log <crash-log> --library-dir <lib-dir> --code-root <code-root>

See Daemon Server Guide for the full HTTP API reference.

Python API

from tool_system import (
    ToolAndSkillRegistry, SystemConfig, SkillConfig,
    ConfigDrivenExecutor, register_all_tools_and_skills
)

# Build a registry with all bundled tools and skills
registry = ToolAndSkillRegistry()
register_all_tools_and_skills(registry)

# Enable only the crash-analysis skill
config = SystemConfig(
    skills=[SkillConfig(name="crash_analysis", enabled=True)]
)
executor = ConfigDrivenExecutor(registry, config, llm_adapter=None)  # None: no LLM, toolchain only

# Read the crash log without leaking the file handle
with open("crash.crash") as f:
    crash_log = f.read()

result = executor.execute_skill("crash_analysis", {
    "crash_log": crash_log,
    "library_dir": "./lib",
    "code_root": "./code",
})
print(result)
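The Tool + Skill system is also how you plug in your own analyzers; the Key Features table mentions registration "via config or decorators". A toy sketch of the decorator half of that pattern (the registry class and decorator here are illustrative only; see the Tool Extension Guide for the project's actual API):

```python
# Toy registry + decorator, illustrating the plugin pattern only;
# the real ToolAndSkillRegistry API may differ.
class ToyRegistry:
    def __init__(self):
        self.tools = {}

    def tool(self, name):
        def decorator(fn):
            self.tools[name] = fn   # register the function under a stable name
            return fn
        return decorator

registry = ToyRegistry()

@registry.tool("frame_counter")
def frame_counter(crash_log: str) -> int:
    """Count stack-frame lines (starting with '#') in a raw crash log."""
    return sum(1 for line in crash_log.splitlines()
               if line.strip().startswith("#"))

print(registry.tools["frame_counter"]("#0 main\n#1 start\nSignal: SIGSEGV"))  # → 2
```

Because tools are looked up by name at execution time, a skill's pipeline can be rearranged in config without touching the tool implementations.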

LLM and Tool Configuration

AI analysis is optional: the full non-AI toolchain runs with --skip-ai and needs no configuration at all.

To configure an LLM provider and the addr2line/atos resolver after a PyPI install, use:

sa-agent config init
sa-agent config path
sa-agent config doctor

Default local config directory:

~/.config/stability-analysis-agent/
  • agent_config.local.json for LLM provider/key/model
  • add2line_resolver_config.local.json for addr2line/atos tool paths

If you choose manual editing in config init, edit these files directly in that directory.
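As an illustration of what goes in agent_config.local.json (the field names below are assumptions for this sketch; the actual schema is defined by the project and written out by sa-agent config init):

```json
{
  "provider": "openai",
  "api_key": "sk-...",
  "model": "gpt-4o"
}
```

Any OpenAI-compatible endpoint works here, which is how DeepSeek, ERNIE, GLM, and similar providers are supported.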

Advanced: Environment overrides

Config file locations can also be overridden via environment variables:

export STABILITY_AGENT_CONFIG_FILE="/abs/path/agent_config.local.json"
export STABILITY_AGENT_ADD2LINE_CONFIG_FILE="/abs/path/add2line_resolver_config.local.json"

Project Structure

stability-analysis-agent/
├── agent/              # AI Agent engine (LangGraph state machine)
├── cli/                # CLI entry point
├── daemon/             # HTTP daemon (streaming, SSE)
├── tools/              # Tool implementations (parser, resolver, code provider)
│   └── configs/        # Configuration templates
├── tool_system/        # Tool + Skill registration & dispatch framework
├── skills/             # Skill definitions (crash analysis)
├── rag/                # RAG: rule store + vector index (ChromaDB) + metadata
├── prompts/            # Prompt templates for LLM analysis
├── protocol/           # Unified request/response protocol
├── examples/           # Bundled crash cases
│   └── crash_cases/
│       ├── demo_basic/         # NullPtr, DivZero, Abort, DoubleFree, etc.
│       └── demo_multithread/   # Race condition, deadlock, atomic failure, etc.
├── test/               # Test suite
└── docs/               # Documentation

Documentation

| Topic | Link |
| --- | --- |
| CLI Guide | docs/cli/CLI_GUIDE.md |
| CLI Commands Reference | docs/cli/CLI_COMMANDS_REFERENCE.md |
| Daemon Server Guide | docs/cli/DAEMON_SERVER_GUIDE.md |
| PyPI Release Scripts | docs/scripts/PYPI_RELEASE_SCRIPTS.md |
| System Architecture | docs/architecture/README.md |
| Architecture Diagram | docs/architecture/ARCHITECTURE_DIAGRAM.md |
| Tool System Overview | docs/tools/tool_system/TOOL_SYSTEM_OVERVIEW.md |
| Tool Extension Guide | docs/tools/tool_system/TOOL_SYSTEM_EXTENSION.md |
| Skill System | docs/skills/SKILLS.md |
| RAG Vector Database | docs/rag/README.md |
| Crash Demos | docs/crash_demos/README.md |

Testing

# Regression tests
python3 test/tool_system/test_regression.py

# LLM connection test
python3 test/llm/test_llm_connection.py --provider openai

# Code content provider test
python3 test/agent_py_tool/test_code_content_provider.py

# Vector database test
python3 test/agent_py_tool/test_vector_db.py

FAQ

Q: Symbolization failed? Ensure --library-dir contains the binary files (.dylib / .so) along with their debug symbols (.dSYM directories or DWARF info).

Q: LLM call failed? Verify your API key is set correctly. Quick check: python3 test/llm/test_llm_connection.py --provider openai

Q: Code context extraction returns empty? Ensure --code-root points to the source directory that contains the files listed in the symbolized stack trace.

Q: Can I use this without an LLM key? Yes. Use --skip-ai to run the full toolchain (parse + symbolize + extract code). The structured JSON output is useful on its own for triage and debugging.

Contributing

Contributions are welcome! Please read CONTRIBUTING.md before submitting a PR.

# All commits require DCO sign-off
git commit -s -m "feat: describe your change"

License

Apache License 2.0

Contact

| Channel | Link |
| --- | --- |
| GitHub Issues | Report a bug or request a feature |
| Email | hong9988.dev@gmail.com |

Maintainer:

| Name | GitHub | Email |
| --- | --- | --- |
| liuhong | @liuhong996 | hong9988.dev@gmail.com |

If this project helps you, please consider giving it a Star!
