
open-strix

Minimal autonomous agent harness with LangGraph Deep Agents.

An AI agent that talks to you over Discord.

  • Memory blocks + files, all committed to GitHub
  • Web fetch/search, files, bash/powershell, subagents
  • Skills

Install uv

Install uv first:

# macOS / Linux
curl -LsSf https://astral.sh/uv/install.sh | sh
# Windows (PowerShell)
powershell -ExecutionPolicy ByPass -c "irm https://astral.sh/uv/install.ps1 | iex"

See the official uv install docs for alternate methods such as Homebrew, pipx, and winget.

Quick start (recommended)

uvx open-strix setup --home my-agent --github
cd my-agent
uv run open-strix

Run setup explicitly first (uvx open-strix setup ...), then run uv run open-strix from inside that home.

open-strix setup bootstraps the target directory with:

  • state/
  • skills/
  • blocks/
  • logs/events.jsonl
  • logs/journal.jsonl
  • scheduler.yaml
  • config.yaml
  • checkpoint.md
  • pyproject.toml
  • uv.lock
  • .env (template)

It also runs:

  • uv init --bare --python 3.11 --vcs none --no-workspace
  • uv add open-strix
  • checks git commit identity and, if missing, prompts for:
    • user.name (defaults to directory name)
    • user.email
  • checks git remote and, if origin is missing, prompts for the remote URL
  • detects OS/tools and generates service bootstrap files in services/:
    • Linux: open-strix.service (systemd user unit)
    • macOS: ai.open-strix.<name>.plist (launchd agent)
    • Windows: Task Scheduler install/uninstall PowerShell scripts

It also prints a CLI walkthrough with links and step-by-step setup for:

  • MiniMax M2.5
  • Kimi/Moonshot
  • Discord bot creation + permissions
  • config.yaml values

Then uv run open-strix connects to Discord if a token is present (by default DISCORD_TOKEN). Otherwise it runs in local stdin mode.

Installed mode (optional)

If you prefer a local project install instead of uvx:

uv init --python 3.11
uv add open-strix
uv run open-strix setup --home .
uv run open-strix

Install and auth gh (GitHub CLI)

If you want open-strix setup --github, install and log into gh first.

Install:

# macOS (Homebrew)
brew install gh

# Ubuntu / Debian
sudo apt install gh
# Windows (winget)
winget install --id GitHub.cli

Authenticate:

gh auth login
gh auth status

See the official GitHub CLI docs for additional install methods and auth options.

Create a GitHub repo and set remote

open-strix auto-syncs with git after each turn, so set up a repo + remote early.

Recommended:

uvx open-strix setup --home my-agent --github

Keep the repository private, since agent memory and logs can contain sensitive context.

Manual fallback with GitHub CLI (gh):

cd my-agent
gh auth login
gh repo create <repo-name> --private --source=. --remote=origin
git add .
git commit -m "Initial commit"
git push -u origin HEAD

Manual fallback with GitHub web UI:

  1. Create a new private empty repo on GitHub (no README, no .gitignore, no license).
  2. In your project directory:
git init
git add .
git commit -m "Initial commit"
git branch -M main
git remote add origin git@github.com:<your-user>/<repo-name>.git
git push -u origin main

If you prefer HTTPS:

git remote add origin https://github.com/<your-user>/<repo-name>.git

Check remote config:

git remote -v

Environment setup

Start from the example env file:

cp .env.example .env

Default model setup in this project expects an Anthropic-compatible endpoint:

  • ANTHROPIC_API_KEY
  • ANTHROPIC_BASE_URL

Discord runtime uses:

  • DISCORD_TOKEN

Optional:

  • DISCORD_TEST_CHANNEL_ID
  • OPEN_STRIX_TEST_MODEL
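
Pulling the variables above together, a sketch of a filled-in .env (all values are placeholders; the MiniMax base URL matches the Models section below):

```shell
# Anthropic-compatible endpoint (MiniMax shown here)
ANTHROPIC_API_KEY=your_api_key
ANTHROPIC_BASE_URL=https://api.minimax.io/anthropic

# Discord runtime
DISCORD_TOKEN=your_discord_bot_token

# Optional: live tests and test-model override
DISCORD_TEST_CHANNEL_ID=your_test_channel_id
OPEN_STRIX_TEST_MODEL=MiniMax-M2.5
```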

Models

Default: MiniMax M2.5

This project defaults to:

  • model: MiniMax-M2.5 in config.yaml
  • provider prefix anthropic: internally (so the runtime uses anthropic:MiniMax-M2.5)

Use MiniMax's Anthropic-compatible endpoint in your .env:

  • ANTHROPIC_BASE_URL=https://api.minimax.io/anthropic

See the MiniMax API docs for endpoint and model details.

Alternative: Kimi K2.5

If you want Kimi instead of MiniMax:

  1. Point Anthropic-compatible env vars at Moonshot:
    • ANTHROPIC_BASE_URL=https://api.moonshot.ai/anthropic
  2. Set model in config.yaml to the current Kimi model ID you want.

See the Moonshot docs for current model IDs.

Note: the Moonshot update posted on November 8, 2025 references kimi-k2-thinking and kimi-k2-thinking-turbo. If you refer to these as "K2.5", use the exact current model IDs from Moonshot docs/console.

Model config behavior

config.yaml key:

  • model

Behavior:

  • If model has no : (for example MiniMax-M2.5), open-strix assumes the Anthropic provider and uses anthropic:<model>.
  • If model already includes a provider:model pair (for example openai:gpt-4o-mini), it is passed through unchanged.
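
The two rules above can be sketched in a few lines of Python (a minimal illustration of the documented behavior, not open-strix's actual code; resolve_model is a hypothetical name):

```python
def resolve_model(model: str) -> str:
    # A bare name like "MiniMax-M2.5" is assumed to be an Anthropic-provider
    # model, so the "anthropic:" prefix is added.
    if ":" not in model:
        return f"anthropic:{model}"
    # An explicit "provider:model" pair is passed through unchanged.
    return model

print(resolve_model("MiniMax-M2.5"))        # anthropic:MiniMax-M2.5
print(resolve_model("openai:gpt-4o-mini"))  # openai:gpt-4o-mini
```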

Discord setup

Use Discord's Developer Portal web UI:

  1. General Information: set app/bot name and any basic metadata.
  2. Installation: set Install Link to None, then save.
  3. OAuth2 -> URL Generator:
    • check bot
    • select practical bot permissions (focus on messaging/reactions/history/attachments):
      • View Channels
      • Send Messages
      • Send Messages in Threads
      • Read Message History
      • Add Reactions
      • Attach Files
  4. Bot tab:
    • disable Public Bot
    • enable Message Content Intent
    • (later) set avatar/profile polish
  5. Bot tab -> Reset Token:
    • copy token immediately (it may not be shown again)
    • set .env: DISCORD_TOKEN=<your_discord_bot_token>
  6. Use the generated OAuth2 bot invite URL to add the bot to your server.

Discord's developer documentation covers the same flow.

Where this is configured in open-strix:

  • Token env var name: config.yaml -> discord_token_env (default DISCORD_TOKEN)
  • Actual token value: your .env
  • Bot allowlist behavior: config.yaml -> always_respond_bot_ids

config.yaml tour

Default:

model: MiniMax-M2.5
journal_entries_in_prompt: 90
discord_messages_in_prompt: 10
discord_token_env: DISCORD_TOKEN
always_respond_bot_ids: []

Key meanings:

  • model: model name (or provider:model)
  • journal_entries_in_prompt: how many journal entries go into each prompt
  • discord_messages_in_prompt: how many recent Discord messages go into each prompt
  • discord_token_env: env var name to read Discord token from
  • always_respond_bot_ids: bot author IDs the agent is allowed to respond to

Related files:

  • scheduler.yaml: cron/time-of-day jobs
  • blocks/*.yaml: memory blocks surfaced in prompt context
  • checkpoint.md: returned by the journal tool after a journal write
  • .open_strix_builtin_skills/scripts/prediction_review_log.py: helper for structured prediction-accuracy reviews (read-only packaged script)
  • skills/: user-editable local skills
  • .open_strix_builtin_skills/skill-creator/SKILL.md: packaged built-in skill source mounted as read-only
  • .open_strix_builtin_skills/prediction-review/SKILL.md: packaged built-in skill for prediction calibration

Runtime behavior note:

  • Git sync (git add -A -> commit -> push) runs automatically after each processed turn.
  • New agent homes are seeded with a twice-daily UTC scheduler job (09:00 and 21:00) for prediction-review calibration.
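
The scheduler.yaml schema is whatever open-strix setup generates, so the fragment below is purely a hypothetical illustration of the seeded twice-daily UTC job described above (every key name here is an assumption, not the real schema):

```yaml
# Hypothetical keys — consult the generated scheduler.yaml for the real schema.
jobs:
  - name: prediction-review
    # cron syntax: 09:00 and 21:00 UTC, matching the seeded calibration job
    schedule: "0 9,21 * * *"
    prompt: Run the prediction-review skill and log calibration results.
```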

Personality bootstrap

Creating an agent is less about code and a whole lot more about the time you spend talking to it. Lily Luo has a great post on forming agent personalities.

You should plan on spending time:

  • Communication patterns: correct the agent so it knows when and how often to use the send_message and react tools. Agents are often initially surprised that their final message is ignored, so they need to use their tools instead.
  • Shared interests: talk about things you're interested in and see what the agent becomes interested in.

Tests

uv run pytest -q

Discord coverage includes:

  • unit tests with mocked boundaries in tests/test_discord.py
  • live integration tests against real Discord in tests/test_discord_live.py

Live test env vars:

  • DISCORD_TOKEN (required for live connect test)
  • DISCORD_TEST_CHANNEL_ID (optional; enables live send-message test)

Safety baseline

  • Agent file writes/edits are blocked outside state/.
  • Reads still use repository scope.
  • This is intentionally simple and should not be treated as production-ready.
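
The write guard can be sketched as a path check (an illustrative Python sketch of the rule above, not the actual open-strix implementation):

```python
from pathlib import Path

def write_allowed(home: Path, target: str) -> bool:
    # Resolve the target against the agent home and require that it
    # land inside state/ (this also blocks ../ escapes).
    state = (home / "state").resolve()
    resolved = (home / target).resolve()
    return resolved == state or state in resolved.parents

home = Path("/agent-home")
print(write_allowed(home, "state/notes.md"))        # True
print(write_allowed(home, "config.yaml"))           # False
print(write_allowed(home, "state/../secrets.txt"))  # False
```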

License

MIT. See LICENSE.
