axe
The agent built for real codebases.
While other coding tools like Claude Code burn tokens on bloat to charge you more, axe gives you surgical precision. Built for large-scale projects, battle-tested internally for 6 months, and powered by the world's most advanced code retrieval engine.
What is axe?
axe is an agent that runs in your terminal, helping you ship production code faster. It reads and edits code, executes shell commands, searches the web, and autonomously plans multi-step workflows—all while using 95% fewer tokens than tools that dump your entire codebase into context.
Built for:
- Real engineers working on production systems with 100K+ line codebases
- Precision refactoring where you need to understand call graphs before changing a function
- Debugging that requires tracing data flow, not just reading error messages
- Architecture exploration in unfamiliar codebases where grep won't cut it
Hit Ctrl+X to toggle between axe and your normal shell. No context switching. No juggling terminals.
Why axe exists
The problem: Other tools dump your entire codebase into context, charging you for irrelevant noise. They're built for vibe coding—one-shot weekend projects where "good enough" is the goal.
The reality: Real engineering happens in 100K+ line codebases where precision matters. You need to understand execution flow, trace bugs through call graphs, and refactor without breaking half your tests. You can't afford to burn 200K tokens reading files that don't matter.
The solution: axe combines intelligent agents with axe-dig, our 5-layer code intelligence engine that extracts meaning instead of dumping text.
The axe-dig Advantage
Stop burning context windows. Start shipping features.
Your codebase is 100,000 lines—on the order of a million tokens. Claude can read ~200,000 tokens. The math says you're already in trouble.
| Approach | Tokens Used | What You Get |
|---|---|---|
| Read raw files | 23,314 | Full code, zero context window left |
| Grep results | ~5,000 | File paths. No understanding. |
| axe-dig | 1,189 | Structure + call graph + complexity—everything needed to edit correctly |
95% token savings while preserving the information LLMs actually need to write correct code.
How axe-dig Works: we dig 5 levels deep.
Not every question needs full program analysis. Pick the layer that matches your task:
┌─────────────────────────────────────────────────────────────┐
│ Layer 5: Program Dependence → "What affects line 42?" │
│ Layer 4: Data Flow → "Where does this value go?" │
│ Layer 3: Control Flow → "How complex is this?" │
│ Layer 2: Call Graph → "Who calls this function?" │
│ Layer 1: AST → "What functions exist?" │
└─────────────────────────────────────────────────────────────┘
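As a rough illustration of what Layer 1 provides, Python's standard `ast` module can already answer "what functions exist?" This is a simplified sketch, not axe-dig's actual extraction, which goes far beyond a name listing:

```python
import ast

SOURCE = """
def reset_step_count(self):
    self.step_tokens = 0

def get_global_counter():
    return _counter
"""

def list_functions(source: str) -> list[str]:
    # Layer 1: parse the source into an AST and collect function names
    tree = ast.parse(source)
    return [node.name for node in ast.walk(tree)
            if isinstance(node, ast.FunctionDef)]

print(list_functions(SOURCE))  # → ['reset_step_count', 'get_global_counter']
```

The higher layers build on this same tree: the call graph (Layer 2) comes from visiting `ast.Call` nodes, and control-flow complexity (Layer 3) from counting branch nodes.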
Try it yourself on this codebase:
# If this is your first time and you haven't launched the "axe" command yet, run this once first.
# (axe automatically creates the .dig folder with all the indexes and edges on startup.)
chop semantic index .
# 1. Find code that resets counters (semantic search)
chop semantic search "reset cumulative statistics and start fresh counter"
# Result: Finds reset_step_count() at position #2 (score: 0.632)
# Why this query? We're looking for state reset logic
# What it found: TokenCounter.reset_step_count() - even though the code
# doesn't mention "cumulative" or "fresh", the embedding understands
# it resets a TokenCount object in a statistics tracking class
# 2. Get token-efficient context
chop context reset_step_count --project src/axe_cli/
# Result: ~89 tokens (vs ~4,200 for reading the raw file)
# Shows: Function signature, what it calls, complexity metrics
# 98% token savings while preserving understanding
# 3. Check who uses it before refactoring
chop impact TokenCounter src/axe_cli/
# Result: Only called by get_global_counter() in same file
# Meaning: Safe to refactor - no external dependencies to break
What this demonstrates:
- Semantic search finds code by behavior, not keywords
- Context extraction gives you understanding at 2% of the token cost
- Impact analysis shows dependencies instantly (no grep, no manual tracing)
Semantic Search: Find Code by Behavior
Traditional search finds syntax. axe-dig semantic search finds what code does based on call graphs and structure.
# Try this on the axe-cli codebase itself:
chop semantic search "retry failed operations with exponential backoff"
# Result: Finds _is_retryable_error() at position #1 (score: 0.713)
# Why? The query doesn't mention "error" or specific function names
# But the embedding understands retry logic patterns:
# - Function checks exception types (retryable vs non-retryable)
# - Called by retry loops with backoff logic
# - Part of error handling flow in axesoul.py
What it found:
[
{
"name": "_is_retryable_error",
"file": "src/axe_cli/soul/axesoul.py",
"score": 0.713
},
{
"name": "_retry_log",
"file": "src/axe_cli/soul/axesoul.py",
"score": 0.710
}
]
Another example: Find config loading
chop semantic search "load configuration from toml file"
# Result: load_config_from_string() at #1 (score: 0.759)
# Finds TOML parsing, config migration, and related tests
Every function gets embedded with:
- Signature + docstring
- What it calls + who calls it (forward & backward call graph)
- Complexity metrics (branches, loops, cyclomatic complexity)
- Data flow (which variables are used, how they transform)
- Dependencies (imports, external modules)
- First ~10 lines of code
This gets encoded into 1024-dimensional embeddings, so semantic search finds relevant code even when you use different terminology.
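Under the hood, retrieval over such embeddings reduces to nearest-neighbor search by cosine similarity. A toy sketch with made-up 4-dimensional vectors (the real index uses 1024 dimensions; the vectors and scores here are purely illustrative):

```python
import math

def cosine(a: list[float], b: list[float]) -> float:
    # Cosine similarity: dot product normalized by vector magnitudes
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Hypothetical index: function name -> embedding vector
index = {
    "_is_retryable_error": [0.9, 0.1, 0.3, 0.0],
    "load_config_from_string": [0.1, 0.8, 0.0, 0.4],
}

def search(query_vec, index, top_k=1):
    # Rank every indexed function by similarity to the query embedding
    scored = sorted(index.items(),
                    key=lambda kv: cosine(query_vec, kv[1]),
                    reverse=True)
    return scored[:top_k]

query = [0.8, 0.2, 0.4, 0.1]  # pretend embedding of "retry failed operations"
print(search(query, index)[0][0])  # → _is_retryable_error
```

Because similarity is computed in embedding space rather than over raw tokens, a query about "retrying" can match a function that never uses that word.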
Daemon Architecture: 300x Faster
The old way: Every query spawns a new process, parses the entire codebase, throws away the results. ~30 seconds per query.
axe-dig's daemon: Long-running background process with indexes in RAM. ~100ms per query.
# First query auto-starts daemon (transparent)
axe # In your project directory
# Daemon stays running, queries use in-memory indexes
# 100ms, not 30s per query
Incremental updates: When you edit one function, axe-dig doesn't re-analyze the entire codebase. Content-hash-based caching with automatic dependency tracking means 10x faster incremental updates.
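One plausible shape for this kind of content-hash caching, as a simplified sketch (not axe-dig's actual implementation):

```python
import hashlib

def content_hash(source: str) -> str:
    # Hash the function body: unchanged content -> unchanged hash -> cache hit
    return hashlib.sha256(source.encode()).hexdigest()

cache: dict[str, dict] = {}  # hash -> cached analysis result

def analyze(source: str) -> dict:
    key = content_hash(source)
    if key in cache:
        return cache[key]  # incremental update: skip re-analysis entirely
    result = {"lines": source.count("\n") + 1}  # stand-in for real analysis
    cache[key] = result
    return result

fn = "def f():\n    return 1"
analyze(fn)                         # first call: analyzed and cached
print(analyze(fn)["lines"])  # → 2  # second call: served from cache
```

Editing one function changes only that function's hash, so only that entry is recomputed while the rest of the index stays warm.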
What's stored: The daemon keeps call graphs, complexity metrics, and semantic embeddings in .dig/cache/. A typical project generates ~10MB of indexes that load into RAM in <1 second. See full cache structure for details.
Read the full axe-dig documentation →
Documentation Index
We've organized the docs to make them digestible. Here's what's where:
Common Use Cases & Workflows
Learn how to use axe for implementing features, fixing bugs, understanding unfamiliar code, and automating tasks. Includes real workflow examples for debugging, refactoring, and exploration. See how axe handles everything from adding pagination to investigating race conditions.
Built-in Tools
Complete reference for all available tools: file operations, shell commands, multi-agent tasks, and the axe-dig code intelligence tools. CodeSearch finds code by behavior, CodeContext extracts LLM-ready summaries with 95% token savings, CodeStructure navigates files/directories, and CodeImpact shows reverse call graphs before you refactor. Every tool is designed for precision, not guesswork.
Agent Skills
How to create and use specialized skills to extend axe's capabilities. Skills are reusable workflows and domain expertise that you can invoke with /skill:name commands. Includes flow skills for multi-step automated workflows and examples for code style, git commits, and project standards. Turn your team's best practices into executable knowledge.
Agents & Subagents
Guide to creating custom agents and spawning specialized subagents for parallel work. Need a dedicated security researcher? A ruthlessly precise code reviewer? A creative copywriter? axe can create and deploy specialized subagents based on your exact requirements. These subagents operate with lethal precision to divide and conquer complex workflows.
Technical Reference
Deep dive into configuration (providers, models, loop control), session management, architecture, and MCP integration. Everything you need to customize axe for your workflow. Configure Bodega models, set up OpenRouter/Anthropic/OpenAI providers, manage sessions, and integrate with other tools via Model Context Protocol.
axe-dig: Code Intelligence Engine
The secret weapon. Complete documentation on axe-dig's 5-layer architecture, semantic search, daemon mode, and program slicing. Learn how to extract 95% fewer tokens while preserving everything needed for correct edits. Includes performance benchmarks (155x faster queries, 89% token reduction), real-world debugging workflows, and the design rationale behind every choice. This is what makes axe different from every other coding tool.
Quick start
Install
# Install axe-cli (includes axe-dig)
uv pip install axe-cli
# Or from source
git clone https://github.com/SRSWTI/axe-cli
cd axe-cli
make prepare
make build
# Or run directly without building
uv run axe
Run
cd /path/to/your/project
axe
On first run, axe-dig automatically indexes your codebase (30-60 seconds for typical projects). After that, queries are instant.
Start using
# Find code by behavior
/skill:code-search "database connection pooling"
# Understand a function without reading the whole file
/skill:code-context get_user_by_id
# See who calls a function before refactoring
/skill:code-impact authenticate_request
# Make surgical edits
StrReplaceFile src/auth.py "old code" "new code"
# Toggle to shell mode
[Ctrl+X]
pytest tests/
[Ctrl+X]
Core capabilities
Code intelligence (powered by axe-dig)
| Tool | What it does | Use case |
|---|---|---|
| CodeSearch | Semantic search by behavior | "Find payment processing logic" |
| CodeContext | LLM-ready function summaries (95% token savings) | Understand unfamiliar code |
| CodeStructure | Navigate functions/classes in files/dirs | Explore new codebases |
| CodeImpact | Reverse call graph (who calls this?) | Safe refactoring |
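CodeImpact's "who calls this?" question comes down to inverting the forward call graph. A minimal sketch using a hypothetical two-edge graph (the function names echo the earlier example but are illustrative):

```python
from collections import defaultdict

# Hypothetical forward call graph: caller -> list of callees
calls = {
    "get_global_counter": ["TokenCounter"],
    "main": ["get_global_counter"],
}

def reverse_graph(calls: dict[str, list[str]]) -> dict[str, list[str]]:
    # Invert every caller -> callee edge into callee -> caller
    rev: dict[str, list[str]] = defaultdict(list)
    for caller, callees in calls.items():
        for callee in callees:
            rev[callee].append(caller)
    return rev

rev = reverse_graph(calls)
print(rev["TokenCounter"])  # → ['get_global_counter']
```

A single callee with one caller in the same file is exactly the "safe to refactor" signal from the impact example earlier.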
File operations
- ReadFile / WriteFile / StrReplaceFile - Standard file I/O
- Grep - Exact file locations + line numbers (use after CodeSearch)
- Glob - Pattern matching
- ReadMediaFile - Images, PDFs, videos
Multi-agent workflows
- Task - Spawn subagents for parallel work
- CreateSubagent - Custom agent specs
- SetTodoList - Track multi-step tasks
Shell integration
- Shell - Execute commands
- Ctrl+X - Toggle between axe and normal shell mode
Powered by SRSWTI Inc.
Building the world's fastest retrieval and inference engines.
Bodega Inference Engine
Exclusive models trained and optimized for the Bodega Inference Engine. axe includes zero-day support for all Bodega models (of course), ensuring immediate access to our latest breakthroughs.
Note: Our models are also available on 🤗 Hugging Face.
Raptor Series
Ultra-compact reasoning models designed for efficiency and edge deployment. Super light, amazing agentic coding capabilities, robust tool support, minimal memory footprint.
- 🤗 bodega-raptor-0.9b - 900M params. Runs on mobile/Pi with 100+ tok/s.
- 🤗 bodega-raptor-90m - Extreme edge variant. Sub-100M params for amazing tool calling.
- 🤗 bodega-raptor-1b-reasoning-opus4.5-distill - Distilled from Claude Opus 4.5 reasoning patterns.
- 🤗 bodega-raptor-8b-mxfp4 - Balanced power/performance for laptops.
- 🤗 bodega-raptor-15b-6bit - Enhanced raptor variant.
Flagship Models
Frontier intelligence, distilled and optimized.
- 🤗 deepseek-v3.2-speciale-distilled-raptor-32b-4bit - DeepSeek V3.2 distilled to 32B with Raptor reasoning. Exceptional math/code generation in 5-7GB footprint. 120 tok/s on M1 Max.
- 🤗 bodega-centenario-21b-mxfp4 - Production workhorse. 21B params optimized for sustained inference workloads.
- 🤗 bodega-solomon-9b - Multimodal and best for agentic coding.
Axe-Turbo Series
Launched specifically for the Axe coding use case. High-performance agentic coding models optimized for the Axe ecosystem.
- 🤗 axe-turbo-1b - 1B params, 150 tok/s, sub-50ms first token. Edge-first architecture.
- 🤗 axe-turbo-31b - High-capacity workloads. Exceptional agentic capabilities.
Specialized Models
Task-specific optimization.
- 🤗 bodega-vertex-4b - 4B params. Optimized for structured data.
- 🤗 blackbird-she-doesnt-refuse-21b - Uncensored 21B variant for unrestricted generation.
Using Bodega Models
Configure Bodega in ~/.axe/config.toml:
default_model = "bodega-raptor"
[providers.bodega]
type = "bodega"
base_url = "http://localhost:44468" # Local Bodega server
api_key = ""
[models.bodega-raptor]
provider = "bodega"
model = "srswti/bodega-raptor-8b-mxfp4"
max_context_size = 32768
capabilities = ["thinking"]
[models.bodega-turbo]
provider = "bodega"
model = "srswti/axe-turbo-31b"
max_context_size = 32768
capabilities = ["thinking"]
See sample_config.toml for more examples including OpenRouter, Anthropic, and OpenAI configurations.
What's coming
Our internal team has been using features that will change the game:
1. Execution Tracing
See what actually happened at runtime. No more guessing why a test failed.
# Trace a failing test
/trace pytest tests/test_payment.py::test_refund
# Shows exact values that flowed through each function:
# process_refund(amount=Decimal("50.00"), transaction_id="tx_123")
# → validate_refund(transaction=Transaction(status="completed"))
# → check_refund_window(created_at=datetime(2024, 1, 15))
# → datetime.now() - created_at = timedelta(days=45)
# → raised RefundWindowExpired # ← 30-day window exceeded
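Runtime tracing of this kind can be prototyped with Python's built-in `sys.settrace` hook, which fires on every function call. A minimal sketch; the refund-flavored function names mirror the example above but are hypothetical, and the real /trace feature is far richer:

```python
import sys

calls = []  # recorded (function name, arguments) pairs

def trace_calls(frame, event, arg):
    # On each "call" event, capture the callee's name and argument values
    if event == "call":
        code = frame.f_code
        args = {name: frame.f_locals[name]
                for name in code.co_varnames[:code.co_argcount]}
        calls.append((code.co_name, args))
    return trace_calls

def check_refund_window(days_elapsed):
    return days_elapsed <= 30  # hypothetical 30-day refund window

def process_refund(amount, days_elapsed):
    return check_refund_window(days_elapsed)

sys.settrace(trace_calls)
ok = process_refund(50, 45)
sys.settrace(None)

print(calls[0][0], calls[1][0], ok)  # → process_refund check_refund_window False
```

The recorded `calls` list already shows the exact values that flowed through each function, which is the core of the trace output shown above.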
2. Performance Debugging
Flame graphs and memory profiling integrated directly in the chat interface.
# Generate flame graph
/flamegraph api_server.py
# Find memory leaks
/memory-profile background_worker.py
3. Visual Debugging
Interactive visualizations for understanding complex codebases:
- Call graphs: See the entire call chain from entry point to implementation
- Dependency graphs: Understand module relationships and coupling
- AST visualizations: Navigate code structure visually
- Data flow diagrams: Trace how values transform through your system
All generated on demand and viewable in your browser. No more drawing diagrams on whiteboards—axe-dig generates them from your actual code.
4. Smart Test Selection
# Only run tests affected by your changes
/test-impact src/payment/processor.py
# Shows: 12 tests need to run (not all 1,847)
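Conceptually, smart test selection is a filter over a test-to-module dependency map. A minimal sketch with a hypothetical map (in practice the map would be derived from the call graph rather than hand-written):

```python
# Hypothetical map: test name -> modules it transitively depends on
deps = {
    "test_refund": {"payment.processor", "payment.models"},
    "test_login": {"auth.session"},
    "test_invoice": {"payment.processor"},
}

def affected_tests(changed_module: str, deps: dict[str, set[str]]) -> list[str]:
    # Keep only tests whose dependency set includes the changed module
    return sorted(t for t, mods in deps.items() if changed_module in mods)

print(affected_tests("payment.processor", deps))  # → ['test_invoice', 'test_refund']
```

Changing `payment.processor` touches two of the three tests, so the other test can be skipped entirely; at real-codebase scale that is the difference between 12 tests and 1,847.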
Why we built this
We're building the world's best retrieval and inference engine. We started with coding because it's the hardest problem: understanding large codebases, tracing execution, debugging logic errors, optimizing performance.
If we can nail code understanding, we can nail anything.
This is not for vibe coders. This is not for weekend hackathons where "it works on my machine" is good enough. This is for engineers shipping production code to real users, where bugs cost money and downtime costs more.
Other tools optimize for demo videos and charging per token. We optimize for engineers who need to:
- Refactor 10,000 lines without breaking tests
- Debug race conditions in distributed systems
- Understand legacy codebases with zero documentation
- Ship features on deadline without cutting corners
The bottom line: If you're building real software in large codebases, you need precision tools. Not vibe coding toys.
Welcome to axe.
Supported languages
Python, TypeScript, JavaScript, Go, Rust, Java, C, C++, Ruby, PHP, C#, Kotlin, Scala, Swift, Lua, Elixir
Language auto-detected. Specify with --lang if needed.
Comparison
| Feature | Claude Code | OpenAI Codex | axe |
|---|---|---|---|
| Built for | Weekend projects | Demos | Production codebases |
| Context strategy | Dump everything | Dump everything | Extract signal (95% savings) |
| Code search | Text/regex | Text/regex | Semantic (behavior-based) |
| Call graph analysis | ❌ | ❌ | ✅ 5-layer analysis |
| Token optimization | ❌ (incentivized to waste) | ❌ (incentivized to waste) | ✅ Show savings per query |
| Execution tracing | ❌ | ❌ | ✅ Coming soon |
| Flame graphs | ❌ | ❌ | ✅ Coming soon |
| Memory profiling | ❌ | ❌ | ✅ Coming soon |
| Visual debugging | ❌ | ❌ | ✅ Coming soon |
| Shell integration | ❌ | ❌ | ✅ Ctrl+X toggle |
| Session management | Limited | Limited | ✅ Full history + replay |
| Skills system | ❌ | ❌ | ✅ Modular, extensible |
| Subagents | ❌ | ❌ | ✅ Parallel task execution |
| Battle-tested | Public beta | Public API | 6 months internal use |
Community
- Issues: GitHub Issues
- Discussions: GitHub Discussions
- Docs: Full documentation
Acknowledgements
Special thanks to MoonshotAI/kimi-cli for their amazing work, which inspired our tools and the Kosong provider.
Download files
File details
Details for the file axe_cli-1.8.1.tar.gz.
File metadata
- Download URL: axe_cli-1.8.1.tar.gz
- Upload date:
- Size: 204.5 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: uv/0.6.9
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | a06954016b29f9012d78f5a0059a684670624afaac8b83fc1195a4bed43e3383 |
| MD5 | cd0ff555c208eadd4bc69204627de0c9 |
| BLAKE2b-256 | d032c64aa8e622d42a59efd3f32d71b373293e6556ce028bf5a175b09c2360a6 |
File details
Details for the file axe_cli-1.8.1-py3-none-any.whl.
File metadata
- Download URL: axe_cli-1.8.1-py3-none-any.whl
- Upload date:
- Size: 270.7 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: uv/0.6.9
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | 0b1e8b66b120b74903ea8445f0bc3d6672e37a8a064e7d547f11ca2fdd47d89f |
| MD5 | 0ecc844d0ae6b3ed0a0e64ee5a64a455 |
| BLAKE2b-256 | fbb8543fe9ca3d3e1c192926ca387b893d0136c4341f3670a933e4cde276084b |