
A self-evolving AI agent that builds its own skills


alive

AI that teaches itself new abilities.

Quickstart · How It Works · Examples · Dream Log · Sharing · Config

PyPI · License: MIT · Python 3.11+


Most AI agents come with a fixed set of tools. When they hit something they can't do, they stop and tell you.

Alive doesn't stop. It writes the code, tests it, and teaches itself the new ability. Next time you ask, it already knows how.

Alive building a skill from scratch

Alive encounters "scrape this webpage for prices", realizes it can't do that yet, writes a web scraper skill, tests it in a sandbox, and runs it — all in one request.


What makes Alive different

|                              | Alive                                    | OpenClaw         | Yunjue Agent     | EvoAgentX                 |
|------------------------------|------------------------------------------|------------------|------------------|---------------------------|
| Learns new skills at runtime | Yes — writes, tests, and saves new code  | No — fixed toolkit | No — fixed toolkit | Evolves prompts, not code |
| Skills persist across sessions | Yes — ~/.alive/skills/                 | No               | No               | No                        |
| Grows uniquely per user      | Yes — shaped by YOUR requests            | Same for everyone | Same for everyone | Same for everyone        |
| Self-reflection              | Dream log analyzes growth patterns       | No               | Benchmarks only  | No                        |
| Skill sharing                | Export/import .tar.gz or GitHub Gist     | N/A              | N/A              | N/A                       |
| Runs locally                 | Yes — Ollama support, zero API keys needed | Cloud only     | Cloud only       | Cloud only                |
| Subprocess sandbox           | 60s timeout, isolated venv               | In-process       | In-process       | N/A                       |

Alive is a personal agent. Two people using Alive for a month will have completely different skill sets. It's not a framework — it's a creature that grows.


Quickstart

pip install alive-agent
alive init

That's it. Alive creates ~/.alive/, sets up a Python venv, installs three seed skills (web fetch, file reader, shell commands), and auto-detects your LLM provider.

Now ask it something:

alive run "get the current bitcoin price from coindesk"

If Alive has a skill for it, it runs it. If not, it builds one, tests it, saves it, and runs it. Next time you ask something similar, it's instant.

Choose your LLM

Alive works with whatever you've got:

# Claude (default if ANTHROPIC_API_KEY is set)
alive config --provider claude --claude-key sk-ant-...

# OpenAI
alive config --provider openai --openai-key sk-...

# DeepSeek
alive config --provider deepseek --deepseek-key sk-...

# Ollama (local, no API key needed)
alive config --provider ollama

No key set? Alive auto-detects from environment variables. No environment variables? Falls back to Ollama. Zero configuration required.


How it works

Every request goes through a 5-step loop:

    ┌─────────────────────────────────────────────────────────┐
    │                     alive run "..."                      │
    └──────────────────────────┬──────────────────────────────┘
                               │
                    ┌──────────▼──────────┐
                    │     1. PLAN          │
                    │  Decompose request   │
                    │  into steps          │
                    └──────────┬──────────┘
                               │
              ┌────────────────┼────────────────┐
              ▼                ▼                 ▼
     ┌────────────────┐ ┌───────────────┐ ┌────────────┐
     │  EXISTING_SKILL │ │   NEW_SKILL   │ │  LLM_ONLY  │
     │  Run known skill│ │  Build & test │ │  Text gen   │
     └────────┬───────┘ └───────┬───────┘ └──────┬─────┘
              │                 │                  │
              │          ┌──────▼──────┐          │
              │          │  2. BUILD    │          │
              │          │  Write code  │          │
              │          │  3. TEST     │          │
              │          │  Sandbox run │          │
              │          │  4. RETRY    │          │
              │          │  Fix & retry │          │
              │          │  (up to 5x)  │          │
              │          │  5. SAVE     │          │
              │          │  Register    │          │
              │          └──────┬──────┘          │
              │                 │                  │
              └────────────────┬──────────────────┘
                               │
                    ┌──────────▼──────────┐
                    │   COMPOSE & OUTPUT   │
                    │   Chain results,     │
                    │   synthesize answer   │
                    └─────────────────────┘

Step 1 — Plan. The LLM decomposes your request into steps. Each step takes one of three forms: use an existing skill, build a new one, or just generate text.

Step 2 — Build. For new skills, the LLM writes a Python function with type hints, error handling, and tests. All generated code follows a strict template: a run() function that returns a dict.
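The exact template isn't published, but a skill following the stated contract (a typed run() function that returns a dict) might look like this sketch. The password example and the ok/error result shape are illustrative assumptions, not Alive's actual template:

```python
"""Hypothetical sketch of a generated skill file (skill.py).

Follows only what the docs state: a run() function with type hints
and error handling that returns a dict. Everything else is assumed.
"""
import secrets
import string


def run(length: int = 16, count: int = 10) -> dict:
    """Generate `count` random passwords of `length` characters each."""
    if length < 4 or count < 1:
        # Error handling: return a structured failure instead of raising
        return {"ok": False, "error": "length must be >= 4 and count >= 1"}
    alphabet = string.ascii_letters + string.digits + string.punctuation
    passwords = [
        "".join(secrets.choice(alphabet) for _ in range(length))
        for _ in range(count)
    ]
    return {"ok": True, "passwords": passwords}
```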

Step 3 — Test. The code runs in an isolated subprocess with a 60-second timeout. No eval(), no in-process execution. Dependencies install into a shared venv at ~/.alive/venv/.
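The sandbox internals aren't shown, but an isolated subprocess run with a 60-second timeout can be sketched with the standard library. The function name and result format here are assumptions; the real runner lives in src/alive/sandbox/runner.py and would invoke the interpreter from ~/.alive/venv/:

```python
import subprocess
import sys


def run_in_sandbox(skill_path: str, timeout: int = 60) -> dict:
    """Run a skill script in a separate process and capture its output.

    Simplified stand-in for Alive's sandbox: no eval(), no in-process
    execution. The child process is killed if it exceeds the timeout.
    """
    try:
        proc = subprocess.run(
            [sys.executable, skill_path],  # Alive would use its shared venv here
            capture_output=True,
            text=True,
            timeout=timeout,
        )
    except subprocess.TimeoutExpired:
        return {"ok": False, "error": f"timed out after {timeout}s"}
    if proc.returncode != 0:
        return {"ok": False, "error": proc.stderr.strip()}
    return {"ok": True, "stdout": proc.stdout}
```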

Step 4 — Retry. If tests fail, Alive sends the error back to the LLM for fixing. Up to 5 attempts. After attempt 3, it tries a fundamentally different approach.

Step 5 — Save. Working skills are saved to ~/.alive/skills/<name>/ with code, tests, docs, and metadata. Confidence scores update with every use via Wilson score lower bound.
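Alive's exact scoring parameters aren't documented, but the standard Wilson score lower bound (shown here at 95% confidence, z ≈ 1.96) behaves as described: a skill with few runs gets a conservative score, and confidence rises as successful executions accumulate:

```python
import math


def wilson_lower_bound(successes: int, trials: int, z: float = 1.96) -> float:
    """Lower bound of the Wilson score interval for a success rate.

    The z value and any priors Alive uses are assumptions; this is the
    textbook formula, not code from the project.
    """
    if trials == 0:
        return 0.0
    phat = successes / trials
    denom = 1 + z * z / trials
    center = phat + z * z / (2 * trials)
    margin = z * math.sqrt((phat * (1 - phat) + z * z / (4 * trials)) / trials)
    return (center - margin) / denom
```

This is why a brand-new skill that passed its tests once doesn't immediately show 100% confidence: one success out of one trial still yields a lower bound around 0.2, and the score climbs only with repeated successful use.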


Example session

$ alive run "generate 10 random passwords, each 16 characters with symbols"
Execution Plan for: generate 10 random passwords...
├── ① NEW_SKILL: Build password generator (needs: "generate random passwords...")
└── ② LLM_ONLY: Format the output nicely

╭──── Self-Extension Engine ────╮
│ Building skill: generate      │
│ random passwords with symbols │
│ Provider: claude              │
╰───────────────────────────────╯

  Attempt 1/5
  ✓ Tests passed.

╭──── Skill Built Successfully ────╮
│ password_generator               │
│                                  │
│ Generate random passwords        │
│ Dependencies: none               │
│ Attempts: 1                      │
│ Location: ~/.alive/skills/       │
│           password_generator/    │
╰──────────────────────────────────╯

  ✓ Step 1/2: Build password generator
  ✓ Step 2/2: Format the output nicely

╭──────────────── alive ─────────────────╮
│ Here are 10 random passwords:          │
│                                        │
│  1. kQ#8mP!xR2nL@5vW                  │
│  2. Yj$4hS&9Bw*3Tz!e                  │
│  3. ...                                │
╰────────────────────────────────────────╯

Now that skill exists forever. Try something that builds on it:

$ alive run "read my .env file and check if any passwords are weak"
Execution Plan for: read .env and check passwords
├── ① EXISTING_SKILL file_reader: Read the .env file
├── ② NEW_SKILL: Analyze password strength
└── ③ LLM_ONLY: Summarize findings

  ✓ Step 1/3: Read .env file
  ✓ Step 2/3: Analyze password strength  (built: password_strength_checker)
  ✓ Step 3/3: Summarize findings

Two skills built from two requests. Alive now knows how to generate passwords AND check their strength.

$ alive skills
              Installed Skills
┌──────────────────────┬──────┬──────┬───────────┬──────────────────────┐
│ Name                 │  Conf│ Used │ Last Used │ Description          │
├──────────────────────┼──────┼──────┼───────────┼──────────────────────┤
│ password_generator   │  95% │    3 │ 2026-03-18│ Generate random...   │
│ strength_checker     │  80% │    1 │ 2026-03-18│ Analyze password...  │
│ web_fetch            │  50% │    0 │ never     │ Fetch URL content    │
│ file_reader          │  50% │    0 │ never     │ Read file contents   │
│ shell_run            │  50% │    0 │ never     │ Run shell commands   │
└──────────────────────┴──────┴──────┴───────────┴──────────────────────┘

Dry run

Want to see the plan before committing?

$ alive run "scrape hacker news and summarize the top 5 stories" --dry-run
Execution Plan for: scrape hacker news and summarize
├── ① EXISTING_SKILL web_fetch: Fetch the HN front page
├── ② NEW_SKILL: Parse HTML to extract story titles and URLs
└── ③ LLM_ONLY: Summarize the top 5 stories

Run without --dry-run to execute this plan.

Dream Log

Alive reflects on itself. Run alive dream and it analyzes its own growth, what you ask about most, and what it should learn next.

$ alive dream
╭──────────────────── Dream Log ────────────────────╮
│                                                    │
│  ## How I've grown                                 │
│                                                    │
│  I now have 7 skills across 3 domains              │
│  (web, system, text). My password_generator        │
│  is my most reliable skill at 95%                  │
│  confidence after 12 executions.                   │
│                                                    │
│  ## What you ask me most about                     │
│                                                    │
│  Your requests cluster around two themes:          │
│  security tooling (passwords, env checks)          │
│  and web scraping. 6 of your last 10               │
│  requests involved fetching URLs.                  │
│                                                    │
│  ## Skills I want to build next                    │
│                                                    │
│  1. A CSV/JSON report generator — you keep         │
│     asking me to "summarize" things that           │
│     would be better as structured data.            │
│  2. A git commit analyzer — I noticed you          │
│     ask about code changes frequently.             │
│                                                    │
│  ## Skill breeding opportunities                   │
│                                                    │
│  web_fetch + html_parser could combine             │
│  into a full web_scraper skill with CSS            │
│  selector support.                                 │
│                                                    │
│  ## Something I noticed                            │
│                                                    │
│  You tend to work in bursts — 5 requests           │
│  in 20 minutes, then nothing for days.             │
│  Most of your requests come after 10pm.            │
│                                                    │
╰────────────────────────────────────────────────────╯

Saved to ~/.alive/dreams/2026-03-18.md

Dream logs are saved as dated markdown files. One per day. They're a journal of how your agent is evolving.


Skill sharing

Export a skill

$ alive share password_generator
╭──── Skill Exported ──────────────────────────╮
│ password_generator.alive-skill.tar.gz        │
│                                              │
│ Install command:                             │
│   alive install password_generator.alive-... │
╰──────────────────────────────────────────────╯

# Or publish to GitHub Gist
$ alive share password_generator --gist

Install someone else's skill

# From a file
$ alive install password_generator.alive-skill.tar.gz

# From a URL
$ alive install https://example.com/skills/csv_parser.alive-skill.tar.gz

# From a GitHub Gist
$ alive install https://gist.github.com/alice/abc123

Alive shows you the code before installing and asks for approval. No blind trust.

╭──── Skill Preview ────────────────────────────────╮
│ csv_parser — Parse CSV files into structured data  │
│ Dependencies: none                                 │
╰────────────────────────────────────────────────────╯

╭──── csv_parser/skill.py ────╮
│  1 │ """Parse CSV files."""  │
│  2 │ import csv             │
│  3 │ ...                    │
╰──────────────────────────────╯

Install this skill? [Y/n]:
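The import security checks aren't specified, but one basic guard an importer needs before showing code for approval is rejecting archive members that escape the destination directory. A minimal sketch (function name and layout are assumptions, not Alive's actual importer):

```python
import tarfile
from pathlib import Path


def safe_extract(archive: str, dest: str) -> list[str]:
    """Extract a skill archive, refusing members that escape `dest`.

    Guards against path-traversal entries like "../../etc/passwd".
    Alive's real checks (in sharing/import_skill.py) may do more.
    """
    dest_path = Path(dest).resolve()
    extracted = []
    with tarfile.open(archive, "r:gz") as tar:
        for member in tar.getmembers():
            target = (dest_path / member.name).resolve()
            if not str(target).startswith(str(dest_path)):
                raise ValueError(f"unsafe path in archive: {member.name}")
            tar.extract(member, dest_path)
            extracted.append(member.name)
    return extracted
```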

Browse community skills

$ alive browse --search "data"
              Community Skills
┌──────────┬────────────────────┬────────┬──────┬─────────────────────┐
│ Name     │ Description        │ Author │ Tags │ Install             │
├──────────┼────────────────────┼────────┼──────┼─────────────────────┤
│ csv_parse│ Parse CSV files    │ alice  │ data │ alive install ...   │
│ json_flat│ Flatten nested JSON│ bob    │ data │ alive install ...   │
└──────────┴────────────────────┴────────┴──────┴─────────────────────┘

Configuration

Providers

| Provider | API Key Env Var | Default Model | Local? |
|---|---|---|---|
| claude | ANTHROPIC_API_KEY | claude-sonnet-4-20250514 | No |
| openai | OPENAI_API_KEY | gpt-4o | No |
| deepseek | DEEPSEEK_API_KEY | deepseek-chat | No |
| ollama | (none needed) | llama3.1 | Yes |

Resolution order: --provider flag → ALIVE_PROVIDER env var → config.json → auto-detect from API keys → Ollama fallback.

# Save config persistently
alive config --provider claude --claude-key sk-ant-...

# Or use environment variables
export ANTHROPIC_API_KEY=sk-ant-...
export ALIVE_PROVIDER=claude

# View current config (keys are masked)
alive config --show

Directory structure

~/.alive/
├── config.json           # Provider & API keys
├── alive.db              # SQLite — requests, gaps, stats
├── venv/                 # Shared Python venv for skill deps
├── skills/               # All installed skills
│   ├── web_fetch/        # Seed skill
│   │   ├── skill.py
│   │   ├── test_skill.py
│   │   ├── metadata.json
│   │   └── SKILL.md
│   ├── password_generator/  # User-built skill
│   │   └── ...
│   └── ...
└── dreams/               # Dream log entries
    └── 2026-03-18.md
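The metadata.json schema isn't documented. Judging from what the alive skills table displays (confidence, use count, last used, description), a built skill's metadata might plausibly contain fields like these; every field name here is a guess:

```json
{
  "name": "password_generator",
  "description": "Generate random passwords",
  "dependencies": [],
  "uses": 3,
  "successes": 3,
  "confidence": 0.95,
  "created": "2026-03-18",
  "last_used": "2026-03-18"
}
```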

All CLI commands

| Command | Description |
|---|---|
| alive run "..." | Process a natural language request |
| alive build "..." | Build a new skill from description |
| alive plan "..." | Show execution plan without running |
| alive skills | List installed skills |
| alive status | Show capabilities, gaps, and stats |
| alive dream | Generate self-reflection log |
| alive share <name> | Export a skill as .tar.gz |
| alive install <source> | Import a skill from file/URL/gist |
| alive browse | Browse community skills |
| alive config | Set provider and API keys |
| alive init | First-time setup |

Useful flags

alive run "..." --dry-run          # See the plan without executing
alive run "..." --simple           # Skip planning, direct LLM call
alive run "..." --verbose          # Show full errors and LLM details
alive run "..." --provider ollama  # Override provider for one command
alive build "..." --verbose        # See generated code during build

Contributing

Alive is early-stage and contributions are very welcome.

# Clone and set up
git clone https://github.com/alive-agent/alive.git
cd alive
python -m venv .venv && source .venv/bin/activate
pip install -e ".[dev]"

# Run tests (274 of them)
pytest tests/ -v

# Lint
ruff check src/ tests/

Where to contribute

  • New seed skills — add to src/alive/seeds/. Good candidates: JSON parser, regex matcher, image downloader.
  • Provider support — add new LLM providers in src/alive/llm/. Follow the OllamaProvider pattern.
  • Skill sharing — the community catalog at alive-agent/community-skills needs skills!
  • Bug fixes — check issues.

Architecture for contributors

src/alive/
├── cli.py              # Typer CLI — all commands
├── config.py           # Paths and directory setup
├── core/
│   ├── planner.py      # Request → ExecutionPlan
│   ├── composer.py     # ExecutionPlan → Result
│   ├── skill_builder.py # Description → Working skill
│   ├── skill_registry.py # CRUD for skills + Wilson scoring
│   └── self_model.py   # SQLite tracking + capability analysis
├── llm/
│   ├── base.py         # Abstract LLMProvider
│   ├── registry.py     # Provider factory + auto-detection
│   └── *_provider.py   # Claude, OpenAI, Ollama, DeepSeek
├── sandbox/
│   ├── runner.py       # Subprocess execution
│   └── dependency.py   # Shared venv management
├── dream/
│   └── dream_log.py    # Self-reflection engine
├── sharing/
│   ├── export.py       # .tar.gz + Gist publishing
│   ├── import_skill.py # Import with security checks
│   └── browse.py       # Community catalog
└── seeds/              # Bundled starter skills
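The provider interface itself isn't shown. Based only on the layout above (an abstract LLMProvider in base.py, concrete *_provider.py implementations), a new provider might be sketched like this; the complete() method name and the EchoProvider are assumptions for illustration:

```python
from abc import ABC, abstractmethod


class LLMProvider(ABC):
    """Stand-in for src/alive/llm/base.py; the real interface may differ."""

    @abstractmethod
    def complete(self, prompt: str) -> str:
        """Return the model's text completion for a prompt."""


class EchoProvider(LLMProvider):
    """Trivial offline provider, handy for testing the plumbing."""

    def complete(self, prompt: str) -> str:
        return f"echo: {prompt}"
```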

Roadmap

What's coming next:

  • Skill breeding — Alive notices two skills that combine well and proposes a hybrid. web_fetch + html_parser = web_scraper with CSS selectors.

  • Proactive engine — Instead of waiting for requests, Alive monitors your workflow and pre-builds skills it predicts you'll need.

  • Federated skill evolution — When many users build similar skills, the best implementations float to the top of the community catalog. Skills evolve across the network.

  • Memory layer — Alive remembers context across sessions. "Last time you scraped that site, the structure was..."

  • Multi-agent composition — Multiple Alive instances collaborating, each with different skill specializations.


License

MIT. Do whatever you want with it.


Alive starts with 3 skills. How many will yours have?

