
orxhestra-code

AI coding agent for your terminal. Reads, writes, and edits code, and runs commands — powered by orxhestra.

Works with any LangChain-supported LLM provider.

Install

# Install with your preferred provider
uv pip install orxhestra-code[openai]       # GPT-5.4, o3, o4, etc.
uv pip install orxhestra-code[anthropic]    # Claude
uv pip install orxhestra-code[google]       # Gemini
uv pip install orxhestra-code[aws]          # Bedrock
uv pip install orxhestra-code[azure-ai]     # Azure OpenAI
uv pip install orxhestra-code[mistral]      # Mistral
uv pip install orxhestra-code[groq]         # Groq
uv pip install orxhestra-code[ollama]       # Ollama (local)
uv pip install orxhestra-code[fireworks]    # Fireworks
uv pip install orxhestra-code[together]     # Together
uv pip install orxhestra-code[cohere]       # Cohere
uv pip install orxhestra-code[deepseek]     # DeepSeek
uv pip install orxhestra-code[xai]          # xAI / Grok
uv pip install orxhestra-code[openrouter]   # OpenRouter (multi-provider)

# Or install all providers at once
uv pip install orxhestra-code[all]

Or from source:

git clone https://github.com/NicolaiLassen/orxhestra-code.git
cd orxhestra-code
uv sync

Usage

# Start with default model (Claude Sonnet)
orx-coder

# Use any LangChain provider
orx-coder --model anthropic/claude-sonnet-4-6
orx-coder --model openai/gpt-5.4
orx-coder --model google/gemini-2.5-pro
orx-coder --model mistral/mistral-large-latest
orx-coder --model groq/llama-3.3-70b-versatile
orx-coder --model ollama/qwen2.5-coder:32b
orx-coder --model deepseek/deepseek-chat
orx-coder --model xai/grok-3

# Control LLM reasoning effort (maps to provider-native params)
orx-coder --effort low      # fast responses, 5 iterations max
orx-coder --effort medium   # balanced reasoning, 15 iterations max
orx-coder --effort high     # deep reasoning, 30 iterations max (default)

# Effort maps to each provider's native reasoning API:
#   Anthropic  → thinking.budget_tokens
#   OpenAI     → reasoning.effort
#   Google     → thinking_level
#   xAI        → reasoning_effort
#   DeepSeek   → reasoning_effort

# Set max tokens per response
orx-coder --max-tokens 32768

# Work in a specific directory
orx-coder --workspace /path/to/project

# Pipe a command
echo "fix the failing tests" | orx-coder

What it can do

  • Read files, search with glob/grep
  • Write and edit files (sends diffs, not full rewrites)
  • Run shell commands (build, test, git, etc.)
  • Remember things across sessions (project context, preferences)
  • Track tasks with a structured todo list
  • Git workflow (commit, branch, PR creation)

Configuration

Create ~/.orx-coder/config.yaml for persistent defaults:

model: anthropic/claude-sonnet-4-6
effort: high
max_tokens: 16384
auto_approve_reads: true

Environment variables

Variable            Description
ORX_MODEL           Override model (e.g. openai/gpt-5.4)
ORX_EFFORT          Override effort (low, medium, high)
ANTHROPIC_API_KEY   Anthropic API key
OPENAI_API_KEY      OpenAI API key
GOOGLE_API_KEY      Google AI API key
GROQ_API_KEY        Groq API key
MISTRAL_API_KEY     Mistral API key
TOGETHER_API_KEY    Together API key
FIREWORKS_API_KEY   Fireworks API key
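Since the model can be set via CLI flag, ORX_MODEL, or the config file, a resolution order is needed. A plausible precedence (CLI flag over environment variable over config file over built-in default) can be sketched as follows; the precedence order and function name here are assumptions, not documented behavior.

```python
import os

def resolve_model(cli_value, config):
    """Pick the model: CLI flag, then ORX_MODEL, then config file, then default.
    (This precedence order is an assumption for illustration.)"""
    return (
        cli_value
        or os.environ.get("ORX_MODEL")
        or config.get("model")
        or "anthropic/claude-sonnet-4-6"  # the documented default model
    )
```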

Project instructions

Create a CLAUDE.md (or .orx/instructions.md) in your project root with project-specific instructions. The agent loads these automatically.

# Project rules

- Use pytest for testing
- Follow PEP 8
- Always run tests before committing

Instructions are loaded from the current directory up to the filesystem root, so you can have global instructions in ~/CLAUDE.md and project-specific ones in your repo.

REPL commands

Command         Description
/model <name>   Switch model
/clear          Clear conversation
/compact        Summarize history to free context
/todos          Show task list
/memory         Browse saved memories
/help           Show all commands
/exit           Quit

License

Apache-2.0
