
Stability Analysis Agent

An AI Agent for App Stability — from crash log to root cause in one step
Crash · ANR · OOM · Freeze analysis | addr2line / atos symbolizer | LangGraph AI Agent | RAG knowledge base


English | 简体中文


Stability Analysis Agent is an open-source AI Agent purpose-built for app stability analysis — covering crashes, ANR (Application Not Responding), OOM (Out of Memory), freezes / watchdog kills, and more. Feed it a stability log, and it will parse, symbolize, extract code, reason about the root cause, and generate fix suggestions — automatically. Supports iOS, Android, macOS, Linux, and Windows with built-in addr2line / atos integration, LangGraph multi-turn reasoning, and a RAG knowledge base (ChromaDB).

Why not just paste the log into an AI coding tool?

General-purpose AI coding tools (Cursor, Copilot, Claude Code, etc.) can read a crash log, but they hit hard limits on stability analysis:

  • Raw addresses are meaningless — AI tools cannot run addr2line / atos; they see 0x1a2b3c instead of MyClass::process() at main.cpp:42.
  • Stability logs are noisy — hundreds of system frames drown the real signal; without structured parsing, the LLM wastes tokens on irrelevant context.
  • No domain memory — every conversation starts from scratch; learned patterns (crash signatures, ANR deadlock traces, OOM heuristics) are lost.

This Agent solves all three:

Capability             | AI Coding Tool              | Stability Analysis Agent
Address symbolization  | Cannot run native tools     | Built-in addr2line / atos integration
Log parsing            | Sees raw text, high noise   | Structured parser extracts signal, threads, key frames; classifies crash / ANR / OOM / freeze
Knowledge accumulation | Stateless, starts from zero | RAG: rule table + vector DB, patterns improve over time
Workflow               | Single-prompt, one-shot     | Multi-step Agent with conditional multi-turn reasoning
Extensibility          | Prompt-only                 | Tool + Workflow plugin system, config-driven
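
For reference, the manual equivalent of the built-in symbolization looks roughly like this (a sketch with placeholder binaries, load addresses, and output; the Agent picks the right binary and load address from the parsed log and runs these tools for you):

# Linux / Android native libraries: addr2line from binutils
addr2line -e ./lib/libmyapp.so -f -C 0x1a2b3c
#   MyClass::process()
#   main.cpp:42

# macOS / iOS: atos against the dSYM, using the image load address from the log
atos -o MyApp.app.dSYM/Contents/Resources/DWARF/MyApp -arch arm64 -l 0x104e8c000 0x104e8f2b3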

Agent Engine

Three execution modes to fit different needs:

Mode      | Engine                   | Best for
Direct    | One-shot prompt assembly | Fast, simple, no framework dependency
LangChain | LangChain Agent          | Flexible tool calling with chain-of-thought
LangGraph | LangGraph state machine  | Multi-turn reasoning; the Agent can request more context and re-invoke tools

Select via --engine direct|langchain|langgraph. All modes share the same tool chain and RAG knowledge base.
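
For example (a sketch with placeholder paths; the other flags are documented under CLI Options below):

# Multi-turn reasoning with the LangGraph engine
sa-agent --crash-log app.crash --library-dir ./symbols --code-root ./src --engine langgraph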

No LLM API key required to run the core toolchain (parsing + symbolization + code extraction). Plug in any OpenAI-compatible model (GPT, DeepSeek, ERNIE, GLM, etc.) when you're ready for AI analysis.

Key Features

Feature                | Description
Multi-Step AI Agent    | LangGraph / LangChain / Direct; multi-turn reasoning with conditional branching
Address Symbolization  | Resolves raw addresses to function names & line numbers via addr2line / atos
Structured Log Parsing | Auto-detects iOS / Android / macOS / Linux / Windows; classifies crash, ANR, OOM, freeze; extracts signal, threads, key frames
Source Code Context    | Extracts code snippets around crash points
RAG Knowledge Base     | Rule table (fast path) + vector retrieval (ChromaDB) with feedback loop
Tool + Workflow System | Pluggable architecture; register custom tools and workflows via config or decorators
Multiple Interfaces    | CLI, HTTP Daemon (streaming / SSE), Python API

Architecture

                  ┌──────────┐   ┌──────────┐   ┌──────────┐
                  │   CLI    │   │  Daemon  │   │  Python  │
                  │          │   │  (HTTP)  │   │   API    │
                  └────┬─────┘   └────┬─────┘   └────┬─────┘
                       │              │              │
                       └──────────────┼──────────────┘
                                      │
                            ┌─────────▼─────────┐
                            │   Tool + Workflow │
                            │   (tool_system)   │
                            └─────────┬─────────┘
                                      │
          ┌───────────────────────────┼───────────────────────────┐
          │                           │                           │
          ▼                           ▼                           ▼
   ┌────────────┐            ┌────────────┐            ┌────────────┐
   │  Crash Log │            │  Address   │            │    Code    │
   │   Parser   │            │ Symbolizer │            │  Provider  │
   └────────────┘            └────────────┘            └────────────┘
                                      │
                            ┌─────────▼─────────┐
                            │    AI Agent       │
                            │  ┌─────────────┐  │
                            │  │  LangGraph  │  │
                            │  │  State      │  │
                            │  │  Machine    │  │
                            │  └──────┬──────┘  │
                            │         │         │
                            │    ┌────▼────┐    │
                            │    │   RAG   │    │
                            │    │ Rules + │    │
                            │    │ Vectors │    │
                            │    └────┬────┘    │
                            │         │         │
                            │    ┌────▼────┐    │
                            │    │   LLM   │    │
                            │    └─────────┘    │
                            └───────────────────┘

Agent Pipeline:

Crash Log → Parse → Symbolize → Extract Code
                                      ↓
                              RAG (rules + vectors)
                                      ↓
                                LLM Reasoning ←──→ Request More Context (multi-turn)
                                      ↓
                                 Fix Report

For detailed architecture diagrams, see docs/architecture.

Quick Start

Prerequisites

  • Binary usage: no Python runtime required
  • Source usage: Python 3.9+
  • (Optional) atos (macOS, built-in) or addr2line (Linux, via binutils) for symbolization
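
A quick way to check that the optional symbolization tools are available (a sketch assuming the Xcode Command Line Tools on macOS and apt on Debian/Ubuntu):

# macOS: atos ships with the Xcode Command Line Tools
xcode-select --install   # only needed if the tools are missing
which atos

# Linux (Debian/Ubuntu): addr2line is part of binutils
sudo apt-get install binutils
which addr2line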

1. Install via PyPI (Recommended)

# Install (for Mainland China, add -i https://pypi.tuna.tsinghua.edu.cn/simple)
pip install stability-analysis-agent

# Verify installation
sa-agent --help

# One command onboarding (auto-check + guided setup + analysis)
sa-agent

When a previous analysis record exists, the interactive menu shows 5) Analyze recent log again for one-click rerun. The onboarding interaction is inspired by Claude-style CLI UX: arrow-key menus, grouped "More options", clear back actions, and post-action confirmation panels.

Config files are saved in ~/.config/stability-analysis-agent/:

  • agent_config.local.json — LLM provider / API key / model
  • add2line_resolver_config.local.json — addr2line / atos tool paths

A provider template is available at tools/configs/agent_config.local.example.json.
In addition to setting API keys/authorization, you must set llm_config.active_provider to the provider key you want to use (for example openai, deepseek, or zhipu_bigmodel). AI requests currently use the OpenAI Chat Completions-compatible format by default; non-compatible providers require an adapter (config-only changes may not be sufficient).
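
For example, to set up a provider by hand from a repo checkout (a sketch; only llm_config.active_provider and the provider keys come from the notes above, the full schema is in the example template):

# Copy the template into the config directory, then set llm_config.active_provider
# (e.g. "openai", "deepseek", or "zhipu_bigmodel") and fill in that provider's key/model fields.
mkdir -p ~/.config/stability-analysis-agent
cp tools/configs/agent_config.local.example.json ~/.config/stability-analysis-agent/agent_config.local.json
${EDITOR:-vi} ~/.config/stability-analysis-agent/agent_config.local.json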

Even without config initialization, you can run the full non-AI toolchain with --skip-ai.

The PyPI package includes full runtime dependencies (vector DB, tree-sitter, and LangGraph chain).

Upgrade with: pip install -U stability-analysis-agent

2. Use Prebuilt CLI Binary (No Python Required)

Download the latest binary from GitHub Releases, then run:

# Example for v1.2.1 macOS arm64 package
unzip StabilityAnalyzer-v1.2.1-mac-arm64.zip
cd output/cli_release/stability_analyzer_cli/v1.2.1-mac-arm64

chmod +x StabilityAnalyzer

# If macOS Gatekeeper blocks launch (unsigned binary)
xattr -d com.apple.quarantine StabilityAnalyzer

./StabilityAnalyzer --help

# Optional: install a stable command name into ~/.local/bin (also ships in release zips)
chmod +x install.sh
./install.sh
# then: sa-agent --help

3. Developer Setup (from Source)

git clone https://github.com/baidu-maps/stability-analysis-agent.git
cd stability-analysis-agent
pip install -e .

pip install -e . is intended for development workflows. It also exposes the sa-agent command locally.

4. Run the Built-in Demo (No API Key Needed)

After installing via PyPI (pip install stability-analysis-agent) or from source (pip install -e .), make sure you have a clone of the repo so the bundled demo cases are available, then run:

sa-agent \
  --crash-log examples/crash_cases/demo_basic/logs/mac/NullPtr_SIGSEGV_2026-04-08_10-43-08.crash \
  --library-dir examples/crash_cases/demo_basic/lib/mac \
  --code-root examples/crash_cases/demo_basic/code_dir \
  --skip-ai

Output is saved to ./cli_reports/<timestamp>/ (under your current working directory) with structured JSON reports.
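
The report directory can be inspected directly (a sketch; exact report file names may vary between versions):

# Peek at the most recent run's structured JSON output
latest=$(ls -td ./cli_reports/*/ | head -n 1)
ls "$latest"
head -c 2000 "$latest"*.json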

5. Analyze Your Own Crash Log

sa-agent \
  --crash-log <your-crash-log> \
  --library-dir <path-to-libs-and-symbols> \
  --code-root <path-to-source-code>

Add --skip-ai to skip AI analysis, or --parse-only to only parse + symbolize.
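
For example (placeholder paths; --parse-only skips code extraction, so --code-root can be omitted):

sa-agent --crash-log app.crash --library-dir ./symbols --parse-only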

CLI Options

Flag             | Required | Description
--crash-log      | Yes      | Path to the crash log file
--library-dir    | Yes*     | Directory with libraries (.dylib/.so) and debug symbols (.dSYM)
--code-root      | No       | Source code root for extracting code context
--skip-ai        | No       | Skip AI; run toolchain only (parser + resolver + code provider)
--parse-only     | No       | Parse + symbolize only (no --code-root needed)
--parse-log-only | No       | Parse crash log only (no --library-dir needed)
--daemon <url>   | No       | Delegate to a running daemon instance

* Not required when using --parse-log-only.
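
The lightest-weight invocation is parsing alone, with neither symbols nor source required (a sketch with a placeholder log path):

sa-agent --crash-log app.crash --parse-log-only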

Daemon Mode

The daemon provides streaming output (SSE), process reuse (no cold start), and task cancellation — ideal for IDE integration and high-frequency analysis:

# Start the daemon
sa-agent --daemon-server --host 127.0.0.1 --port 8765

# Analyze via daemon
sa-agent --daemon http://127.0.0.1:8765 \
  --crash-log <crash-log> --library-dir <lib-dir> --code-root <code-root>

See Daemon Server Guide for the full HTTP API reference.

Python API

from tool_system import (
    ToolAndWorkflowRegistry, SystemConfig, WorkflowConfig,
    ConfigDrivenExecutor, register_all_tools_and_workflows
)

# Register the built-in tools and workflows
registry = ToolAndWorkflowRegistry()
register_all_tools_and_workflows(registry)

# Enable the crash-analysis workflow; llm_adapter=None runs the non-AI toolchain only
config = SystemConfig(
    workflows=[WorkflowConfig(name="crash_analysis", enabled=True)]
)
executor = ConfigDrivenExecutor(registry, config, llm_adapter=None)

# Run the workflow on a crash log
with open("crash.crash") as f:
    crash_log = f.read()

result = executor.execute_workflow("crash_analysis", {
    "crash_log": crash_log,
    "library_dir": "./lib",
    "code_root": "./code"
})
print(result)

LLM and Tool Configuration

AI analysis is optional. You can still run the full non-AI toolchain with --skip-ai without any initialization.

To configure AI analysis and the add2line resolver after a PyPI install, run:

sa-agent

Default local config directory:

~/.config/stability-analysis-agent/
  • agent_config.local.json for LLM provider/key/model
  • add2line_resolver_config.local.json for addr2line/atos tool paths

If you prefer manual editing, edit these files directly in that directory.

Advanced: add2line config override

You can override the add2line config file location via an environment variable:

export STABILITY_AGENT_ADD2LINE_CONFIG_FILE="/abs/path/add2line_resolver_config.local.json"

Project Structure

stability-analysis-agent/
├── agent/              # AI Agent engine (LangGraph state machine)
├── cli/                # CLI entry point
├── daemon/             # HTTP daemon (streaming, SSE)
├── tools/              # Tool implementations (parser, resolver, code provider)
│   └── configs/        # Configuration templates
├── tool_system/        # Tool + Workflow registration & dispatch framework
├── workflows/          # Workflow definitions (crash analysis)
├── rag/                # RAG: rule store + vector index (ChromaDB) + metadata
├── prompts/            # Prompt templates for LLM analysis
├── protocol/           # Unified request/response protocol
├── examples/           # Bundled crash cases
│   └── crash_cases/
│       ├── demo_basic/         # NullPtr, DivZero, Abort, DoubleFree, etc.
│       └── demo_multithread/   # Race condition, deadlock, atomic failure, etc.
├── test/               # Test suite
└── docs/               # Documentation

Documentation

Topic                  | Link
CLI Guide              | docs/cli/CLI_GUIDE.md
CLI Commands Reference | docs/cli/CLI_COMMANDS_REFERENCE.md
Daemon Server Guide    | docs/cli/DAEMON_SERVER_GUIDE.md
PyPI Release Scripts   | docs/scripts/PYPI_RELEASE_SCRIPTS.md
System Architecture    | docs/architecture/README.md
Architecture Diagram   | docs/architecture/ARCHITECTURE_DIAGRAM.md
Tool System Overview   | docs/tools/tool_system/TOOL_SYSTEM_OVERVIEW.md
Tool Extension Guide   | docs/tools/tool_system/TOOL_SYSTEM_EXTENSION.md
Workflow System        | docs/workflows/WORKFLOWS.md
RAG Vector Database    | docs/rag/README.md
Crash Demos            | docs/crash_demos/README.md

Testing

# Regression tests
python3 test/tool_system/test_regression.py

# LLM connection test
python3 test/llm/test_llm_connection.py --provider openai

# Code content provider test
python3 test/agent_py_tool/test_code_content_provider.py

# Vector database test
python3 test/agent_py_tool/test_vector_db.py

FAQ

Q: Symbolization failed? Ensure --library-dir contains the binary files (.dylib / .so) along with their debug symbols (.dSYM directories or DWARF info).

Q: LLM call failed? Verify your API key is set correctly. Quick check: python3 test/llm/test_llm_connection.py --provider openai

Q: Code context extraction returns empty? Ensure --code-root points to the source directory that contains the files listed in the symbolized stack trace.

Q: Can I use this without an LLM key? Yes. Use --skip-ai to run the full toolchain (parse + symbolize + extract code). The structured JSON output is useful on its own for triage and debugging.

Contributing

Contributions are welcome! Please read CONTRIBUTING.md before submitting a PR.

# All commits require DCO sign-off
git commit -s -m "feat: describe your change"

License

Apache License 2.0

Contact

Channel       | Link
GitHub Issues | Report a bug or request a feature
Email         | hong9988.dev@gmail.com

Maintainer:

Name    | GitHub      | Email
liuhong | @liuhong996 | hong9988.dev@gmail.com

If this project helps you, please consider giving it a Star!

