open-strix
Minimal, non-production autonomous agent harness built with LangGraph Deep Agents.
Install uv
Install uv first:

```bash
# macOS / Linux
curl -LsSf https://astral.sh/uv/install.sh | sh

# Windows (PowerShell)
powershell -ExecutionPolicy ByPass -c "irm https://astral.sh/uv/install.ps1 | iex"
```
The official uv install docs cover alternate methods such as Homebrew, pipx, and winget.
Quick start (recommended)
```bash
uvx open-strix setup --home my-agent --github
cd my-agent
uvx open-strix
```
If you run `uvx open-strix` in a plain directory with no git repo, it now auto-runs `setup` first.
`open-strix setup` bootstraps the target directory with:

- `state/`
- `skills/`
- `blocks/`
- `logs/events.jsonl`
- `logs/journal.jsonl`
- `scheduler.yaml`
- `config.yaml`
- `checkpoint.md`
- `.env` (template)
It also prints a CLI walkthrough with links and step-by-step setup for:
- MiniMax M2.5
- Kimi/Moonshot
- Discord bot creation + permissions
- `config.yaml` values
Then `uvx open-strix` connects to Discord if a token is present (by default `DISCORD_TOKEN`).
Otherwise it runs in local stdin mode.
Installed mode (optional)
If you prefer a local project install instead of uvx:
```bash
uv init --python 3.11
uv add open-strix
uv run open-strix setup --home .
uv run open-strix
```
Install and auth gh (GitHub CLI)
If you want `open-strix setup --github`, install and log into `gh` first.
Install:
```bash
# macOS (Homebrew)
brew install gh

# Ubuntu / Debian
sudo apt install gh

# Windows (winget)
winget install --id GitHub.cli
```
Authenticate:
```bash
gh auth login
gh auth status
```
See the official GitHub CLI docs for other platforms and details.
Create a GitHub repo and set remote
open-strix auto-syncs with git after each turn, so set up a repo + remote early.
Recommended:
```bash
uvx open-strix setup --home my-agent --github
```
Keep the repo private, since agent memory and logs can contain sensitive context.
Manual fallback with GitHub CLI (`gh`):

```bash
cd my-agent
gh auth login
gh repo create <repo-name> --private --source=. --remote=origin
git add .
git commit -m "Initial commit"
git push -u origin HEAD
```
Manual fallback with GitHub web UI:
- Create a new private, empty repo on GitHub (no README, no `.gitignore`, no license).
- In your project directory:

```bash
git init
git add .
git commit -m "Initial commit"
git branch -M main
git remote add origin git@github.com:<your-user>/<repo-name>.git
git push -u origin main
```
If you prefer HTTPS:
```bash
git remote add origin https://github.com/<your-user>/<repo-name>.git
```
Check remote config:
```bash
git remote -v
```
Environment setup
Start from the example env file:
```bash
cp .env.example .env
```
Default model setup in this project expects an Anthropic-compatible endpoint:

- `ANTHROPIC_API_KEY`
- `ANTHROPIC_BASE_URL`

Discord runtime uses:

- `DISCORD_TOKEN`

Optional:

- `DISCORD_TEST_CHANNEL_ID`
- `OPEN_STRIX_TEST_MODEL`
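Putting those variables together, a filled-in `.env` might look like the following sketch (all values are placeholders; the base URL matches the MiniMax default described under Models):

```dotenv
# Anthropic-compatible endpoint (MiniMax default; see Models)
ANTHROPIC_API_KEY=<your-minimax-api-key>
ANTHROPIC_BASE_URL=https://api.minimax.io/anthropic

# Discord runtime (omit to run in local stdin mode)
DISCORD_TOKEN=<your-discord-bot-token>

# Optional
# DISCORD_TEST_CHANNEL_ID=<channel-id-for-live-send-test>
# OPEN_STRIX_TEST_MODEL=<override-model-for-tests>
```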
Models
Default: MiniMax M2.5
This project defaults to:
- `model: MiniMax-M2.5` in `config.yaml`
- provider prefix `anthropic:` internally (so the runtime uses `anthropic:MiniMax-M2.5`)
Use MiniMax's Anthropic-compatible endpoint in your .env:
```bash
ANTHROPIC_BASE_URL=https://api.minimax.io/anthropic
```
MiniMax docs:
- Anthropic compatibility + model IDs: https://platform.minimax.io/docs/api-reference/text-anthropic-api
- AI coding tools guide (M2.5 context): https://platform.minimax.io/docs/guides/text-ai-coding-tools
Alternative: Kimi K2.5
If you want Kimi instead of MiniMax:
- Point Anthropic-compatible env vars at your Moonshot endpoint (see Moonshot docs for current endpoint details).
- Set `model` in `config.yaml` to the current Kimi model ID you want.
Moonshot docs:
- Docs overview: https://platform.moonshot.ai/docs/overview
- K2 update post (links to current quick-start): https://platform.moonshot.ai/blog/posts/Kimi_API_Newsletter
Note: the Moonshot update posted on November 8, 2025 references `kimi-k2-thinking` and `kimi-k2-thinking-turbo`. If you refer to these as "K2.5", use the exact current model IDs from the Moonshot docs/console.
Model config behavior
`config.yaml` key: `model`

Behavior:

- If `model` has no `:` (for example `MiniMax-M2.5`), open-strix treats it as an Anthropic-provider model and uses `anthropic:<model>`.
- If `model` already includes `provider:model` (for example `openai:gpt-4o-mini`), it is passed through unchanged.
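The rule above amounts to a one-line normalization. A sketch (the function name is hypothetical; the real implementation may differ):

```python
def normalize_model(model: str) -> str:
    """Apply the provider-prefix rule: bare model names are treated as
    Anthropic-provider models; names that already carry a
    `provider:model` prefix pass through unchanged."""
    return model if ":" in model else f"anthropic:{model}"
```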
Discord setup
Use Discord's Developer Portal web UI:
- Go to https://discord.com/developers/applications and create a new application.
- Open your app, then go to the `Bot` tab.
- Under `Token`, generate/reset the token and copy it (you won't be able to re-view it later).
- In the same `Bot` tab, enable `Message Content Intent` (required for open-strix message handling).
- Go to `Installation`.
- Under `Installation Contexts`, enable `Guild Install`.
- Under `Install Link`, pick `Discord Provided Link`.
- Under `Default Install Settings` > `Guild Install`:
  - Scopes: select `bot` (and `applications.commands` if you plan to add slash commands).
  - Permissions: a practical baseline for this bot is `View Channels`, `Send Messages`, `Send Messages in Threads`, `Read Message History`, and `Add Reactions`.
- Copy the generated install link, open it in your browser, pick your server, and authorize.
Reference docs for the same flow:
- Getting started (app creation + installation flow): https://docs.discord.com/developers/quick-start/getting-started
- OAuth2 scopes/install links: https://docs.discord.com/developers/topics/oauth2
- Permissions reference: https://docs.discord.com/developers/topics/permissions
- Gateway + intents reference: https://docs.discord.com/developers/events/gateway
Where this is configured in open-strix:
- Token env var name: `config.yaml` -> `discord_token_env` (default `DISCORD_TOKEN`)
- Actual token value: your `.env`
- Bot allowlist behavior: `config.yaml` -> `always_respond_bot_ids`
config.yaml tour
Default:

```yaml
model: MiniMax-M2.5
journal_entries_in_prompt: 90
discord_messages_in_prompt: 10
discord_token_env: DISCORD_TOKEN
always_respond_bot_ids: []
```
Key meanings:

- `model`: model name (or `provider:model`)
- `journal_entries_in_prompt`: how many journal entries go into each prompt
- `discord_messages_in_prompt`: how many recent Discord messages go into each prompt
- `discord_token_env`: env var name to read the Discord token from
- `always_respond_bot_ids`: bot author IDs the agent is allowed to respond to
Related files:

- `scheduler.yaml`: cron/time-of-day jobs
- `blocks/*.yaml`: memory blocks surfaced in prompt context
- `checkpoint.md`: returned by the `journal` tool after a journal write
- `skills/`: user-editable local skills
- `.open_strix_builtin_skills/skill-creator/SKILL.md`: packaged built-in skill source mounted as read-only
Runtime behavior note:

- Git sync (`git add -A` -> commit -> push) runs automatically after each processed turn.
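That per-turn sync reduces to a fixed command sequence. A minimal sketch (function names and error handling are illustrative, not the actual implementation):

```python
import subprocess

def git_sync_commands(message: str) -> list[list[str]]:
    """The per-turn sync sequence described above:
    stage everything, commit, push."""
    return [
        ["git", "add", "-A"],
        ["git", "commit", "-m", message],
        ["git", "push"],
    ]

def git_sync(message: str = "open-strix auto-sync") -> None:
    for cmd in git_sync_commands(message):
        # check=False: an empty commit or an offline push
        # should not crash the turn
        subprocess.run(cmd, check=False)
```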
Personality bootstrap
Creating an agent is less about code and much more about the time you spend talking to it. Lily Luo has a great post on forming agent personalities.
You should plan on spending time:
- Communication patterns: correct the agent so it knows when and how often to use the `send_message` and `react` tools. Agents often initially find it surprising that their final message is ignored, so they need to use their tools instead.
- Talk about things you're interested in, and see what the agent becomes interested in.
Tests
```bash
uv run pytest -q
```
Discord coverage includes:
- unit tests with mocked boundaries in `tests/test_discord.py`
- live integration tests against real Discord in `tests/test_discord_live.py`
Live test env vars:
- `DISCORD_TOKEN` (required for the live connect test)
- `DISCORD_TEST_CHANNEL_ID` (optional; enables the live send-message test)
Safety baseline
- Agent file writes/edits are blocked outside `state/`.
- Reads still use repository scope.
- This is intentionally simple and should not be treated as production-ready.
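A hedged sketch of the kind of containment check implied by the first bullet (function name and exact semantics are assumptions about the implementation):

```python
from pathlib import Path

def write_allowed(target: str, root: str = ".") -> bool:
    """True only when `target` resolves inside <root>/state/.

    Resolving the path first defeats `..` traversal; this mirrors the
    "writes blocked outside state/" rule described above.
    """
    state_dir = (Path(root) / "state").resolve()
    resolved = (Path(root) / target).resolve()
    return resolved == state_dir or state_dir in resolved.parents
```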
File details
Details for the file open_strix-0.1.10.tar.gz.
File metadata
- Download URL: open_strix-0.1.10.tar.gz
- Upload date:
- Size: 179.6 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: uv/0.7.4
File hashes

| Algorithm | Hash digest |
|---|---|
| SHA256 | `e7455114b299f241a3e9ce4b7f31673bd1bba0a7942a43de038d4e70500b29b0` |
| MD5 | `734d9df1e5e8616c2194bb73f989a065` |
| BLAKE2b-256 | `e0150173613043a26ff6ba62e3300c08418d822ff63a332720cc11adae34cc6e` |
File details
Details for the file open_strix-0.1.10-py3-none-any.whl.
File metadata
- Download URL: open_strix-0.1.10-py3-none-any.whl
- Upload date:
- Size: 34.4 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: uv/0.7.4
File hashes

| Algorithm | Hash digest |
|---|---|
| SHA256 | `5c6456567b6136f3cfab3076b89cf95df9809603985da46de96138efdb98bc5b` |
| MD5 | `b8847c0b36a425fc796ebd3c47327510` |
| BLAKE2b-256 | `f1b49ef2a3dcde1b2171f6510df76950e47cdbcfc3d268c3eb3d8649c957d0bc` |