# oy-cli

Tiny local coding CLI with a small tool surface. AI coding assistant for your shell. Reads files, searches content, and runs commands.

> **Note:** This release was yanked. Reason given: broken readme parser.

```bash
uv tool install oy-cli
oy "add docstrings to public functions"
```
## Examples

```bash
# Basic usage
oy "inspect the main module and suggest improvements"

# Work in a specific directory
OY_ROOT=./my-project oy "fix the failing tests"

# Non-interactive mode (CI/pipelines)
echo "update the changelog" | OY_NON_INTERACTIVE=1 oy

# Security audit
oy audit
oy audit "focus on authentication"
```
## Commands

```bash
oy "prompt"   # Run with a prompt (default)
oy chat       # Interactive multi-turn session
oy audit      # Security audit against OWASP ASVS/MASVS
oy model      # Show current model, pick model from available endpoints
oy --help     # Show all commands
```
## Why This Exists

oy is small, auditable, and built around a narrow tool surface.

Design goals:

- small, auditable codebase
- minimal tool surface
- OpenAI-completions-focused CLI loop
- multiple backends behind shims
- a new session each run
- explicit checkpoints when needed
## System Prompts

The system prompt is short. Tool semantics live with the tool definitions; the system prompt focuses on operating rules and judgment:

### Base Prompt

> You are oy, a coding cli with tools.
> Inspect before editing with `read` for file content, `search` for regex matches, and `list` for path discovery. For existing code changes, prefer syntax-aware edits via `ast-grep` run through `bash`. Keep edits small, auditable, and verified with `read` and `git diff`; batch independent tool calls.
> Keep going until done or blocked; if blocked, say what you tried and next steps.
> Use the grugbrain.dev approach for maintainability/simplicity, OWASP-minded judgment for security, and performance-aware programming (Computer, Enhance!).

### Interactive Appendix

> Use `ask` only when clarification or direction is needed.

### Non-Interactive Appendix

> Non-interactive mode: do not pause for approval.
## Tools

Each tool description is passed directly to the model:

| Tool | Description |
|---|---|
| `list` | List paths by calling `Path.glob(path)`. Defaults to `path: "*"`. Use `src/*` or `src/**/*.py` exactly like pathlib glob patterns. Returns sorted entries, one per line, with `/` for directories. |
| `read` | Read a file or directory. Files return line-numbered text. Directories return sorted entries, one per line, with `/` for directories. Use `offset` and `limit` for large files. |
| `bash` | Run shell commands. For edits, prefer `ast-grep` for precise search/replace, `scc` for code-count analysis, and `xh` for web/API interaction; pipe to `rg` or `yq` for filtering when useful. These tools are effective for their niches, guaranteed to be available during an oy run, and their current usage docs can be checked with `--help`. For inspection, prefer the `search` tool. Returns structured results with `command`, `exit_code`, `ok`, `output_format`, `output`, and `truncated`. JSON output is parsed when possible. |
| `search` | Search with ripgrep JSON output. Takes `pattern` and `path`, then passes any extra ripgrep flags from `args`, for example `pattern: 'needle'`, `path: 'src'`, `args: ['--glob', '*.py', '-i']`. `limit` only limits displayed results after ripgrep runs. |
| `ask` | Ask the user a question in interactive runs. Use for ambiguity or decisions. Provide choices. |
**Output truncation:** tool output is clipped to preserve the context window; `bash` summarizes output into a single `output` field and marks truncation with a single `truncated` flag. When output is clipped, narrow the next query or use `search` with a tighter path instead of re-running broad inspection.
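A minimal sketch of that clipping behavior, assuming a simple character budget (the real limit is presumably token-aware):

```python
def clip_output(text: str, max_chars: int = 4000) -> dict:
    """Illustrative truncation: clip tool output to protect the
    context window and mark the result with a `truncated` flag."""
    truncated = len(text) > max_chars
    return {
        "output": text[:max_chars],
        "truncated": truncated,
    }
```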
**Conversation compaction:** interactive chat compresses prepared context with Headroom before each model request, then falls back to omitting the oldest messages if the transcript still does not fit.
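The oldest-first fallback can be sketched like this. The character budget stands in for real token counting, and the Headroom compression step is omitted; only the drop-oldest fallback is shown:

```python
def fit_transcript(messages: list[dict], max_chars: int) -> list[dict]:
    """Illustrative fallback: keep system messages, then drop the
    oldest remaining messages until the transcript fits the budget."""
    system = [m for m in messages if m["role"] == "system"]
    rest = [m for m in messages if m["role"] != "system"]

    def size(msgs: list[dict]) -> int:
        return sum(len(m["content"]) for m in msgs)

    while rest and size(system + rest) > max_chars:
        rest.pop(0)  # omit the oldest non-system message first
    return system + rest
```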
**Parallel tool calls:** oy can execute multiple tool calls returned in a single assistant turn. Explicit provider flags for parallel tool calls are sent only where the upstream API supports them directly; other providers rely on their native tool-calling behavior.
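Executing several tool calls from one assistant turn might look like the following sketch. The dict shape mimics OpenAI-style tool calls (`id`, `name`, JSON-encoded `arguments`), but the dispatch code is illustrative, not oy's implementation:

```python
import json
from concurrent.futures import ThreadPoolExecutor

def run_tool_calls(tool_calls: list[dict], registry: dict) -> list[dict]:
    """Illustrative dispatch: run every tool call from a single
    assistant turn and pair each result with its call id."""
    def run_one(call: dict) -> dict:
        fn = registry[call["name"]]
        args = json.loads(call["arguments"])
        return {"id": call["id"], "result": fn(**args)}

    with ThreadPoolExecutor() as pool:
        return list(pool.map(run_one, tool_calls))
```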
## Audit Command

`oy audit` runs the agent with a dedicated system prompt:

### Audit Prompt

> Audit the repo for security, unnecessary complexity, and major performance issues, preserving project and human context.
> First read key markdown docs, then refresh or generate an audit header at the top of ISSUES.md that includes the current date, the latest Git commit reference, and a codebase summary using tools like `scc`. Next, fetch the current OWASP ASVS (or MASVS if more relevant) and grugbrain.dev guidelines using `bash` with `xh` (pipe to `rg` or `yq` if useful), inspect the codebase against these, and write or merge prioritised findings (max 10-15) into the ISSUES.md file.
> Ensure each finding includes its location, category (security, complexity, or performance), standard reference, a clear recommendation, and a Status. If existing findings have been resolved, summarise and note them in a short log at the end.

```bash
oy audit                    # Full audit
oy audit "focus on auth"    # With focus area
OY_ROOT=./src oy audit      # Audit specific directory
```
## Configuration

Environment variables:

| Variable | Purpose |
|---|---|
| `OY_MODEL` | Override model for this session (bare name or `shim:model`) |
| `OY_SHIM` | Force a specific shim: `openai`, `codex`, `gemini`, `claude`, `copilot`, `bedrock`, or `bedrock-mantle` |
| `OY_NON_INTERACTIVE` | Set to `1` to disable checkpoints |
| `OY_ROOT` | Run against a different workspace |
| `OY_SYSTEM_FILE` | Append extra system instructions |
| `OY_CONFIG` | Override config path (default: `~/.config/oy/config.json`) |
Config file (`~/.config/oy/config.json`):

```json
{"shim": "openai", "model": "glm-5"}
```

The `shim` field pins which backend to use regardless of what else is signed in. Use `oy model <filter>` to pick interactively; it merges models from the available signed-in shims into a single list using `shim:model` prefixes.

On first run, if no model is configured, oy prompts you to pick one from the available backends. Set `OY_MODEL`, `OY_SHIM`, or save a config with `oy model` to pin behavior.
Model notes: in testing, `glm-5` balances intelligence, cost, and tool-use ability; `kimi-k2.5` is another option. The Artificial Analysis comparison of open-source models is a useful reference.
## Requirements

- Python 3.14+
- `bash` and `mise` installed and activated in the shell before launching oy
- Optional helper CLIs (oy auto-installs them on demand via `mise`): `rg` (ripgrep), `ast-grep`, `scc`, `xh`, `yq`
- OpenAI API key, OR Codex local auth, OR Gemini CLI OAuth credentials (`~/.gemini/oauth_creds.json`), OR Claude Code local auth, OR AWS CLI configured for Bedrock
## Installation

```bash
uv tool install oy-cli    # Preferred
pip install oy-cli        # Alternative
```
## Authentication

OpenAI:

```bash
export OPENAI_API_KEY=sk-...
```

For OpenAI-compatible endpoints:

```bash
export OPENAI_BASE_URL=https://your-endpoint.example/v1
export OPENAI_API_KEY=...
```

Gemini, Claude, Copilot, and Codex (OpenAI) credentials are introspected automatically; when credentials are available, `oy model` shows their models in the list.

AWS Bedrock uses your default AWS profile/region and supports auto-refresh of stale SSO sessions:

```bash
export AWS_PROFILE=my-profile
export AWS_REGION=us-west-2
```
## Troubleshooting

- **"Missing API credentials"** → Set `OPENAI_API_KEY`, sign in with `codex`, `gemini`, or `claude`, or configure the AWS CLI (`aws configure`). For Bedrock, ensure your profile has `bedrock:InvokeModel` permission.
- **"stdin is not a TTY"** → Piping input disables `ask`. Set `OY_NON_INTERACTIVE=1` to make this explicit.
- **"AWS SSO session is stale"** → Run `aws sso login --use-device-code --no-browser`.
- **"Missing helper tool"** → Install or activate `mise`, then rerun oy; oy assumes a working mise shell activation and auto-installs missing helper CLIs through `mise`.
- **"mise is required; install and activate mise before running oy."** → Install `mise`, activate it in your shell, then relaunch oy.
## Security

oy can run shell commands and modify files with your permissions. Treat it like any other local automation tool.

Recommended:

- run in a repo or workspace you trust
- mount only needed directories in containers
- avoid exposing long-lived secrets in the environment
- review generated changes before shipping

Protections: workspace-bound file access for the built-in file tools, and native boto3 credential resolution for Bedrock.
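A workspace-bound file-access guard typically reduces to a path-containment check like the sketch below. This is an assumption about the general technique, not oy's actual code:

```python
from pathlib import Path

def within_workspace(root: Path, candidate: str) -> bool:
    """Illustrative workspace guard: resolve symlinks and `..`
    segments, then require the target to stay under the root."""
    resolved_root = root.resolve()
    target = (resolved_root / candidate).resolve()
    return target.is_relative_to(resolved_root)
```

Resolving before comparing matters: a naive string-prefix check would accept `../escape` or a symlink pointing outside the workspace.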
## Links

- Issues: known issues and audit findings
- Contributing: development and release notes

## License

Apache License 2.0
---

## Release Files
### oy_cli-0.4.0.tar.gz (Source)

- Size: 60.1 kB
- Uploaded using Trusted Publishing: Yes
- Uploaded via: twine/6.1.0 CPython/3.13.7

| Algorithm | Hash digest |
|---|---|
| SHA256 | `74da5183455b9671ced69d1cdc389dab3d4a8504538fe38107fb1725986ccb56` |
| MD5 | `e58075cfa35b2ee79d41c125685828e0` |
| BLAKE2b-256 | `ee3b8afda0366b8f88470497a7cde370f540d3cb908d01c86fd87a853fb8635c` |
#### Provenance

Attestation bundle for `oy_cli-0.4.0.tar.gz`:

- Publisher: `release.yml` on wagov-dtt/oy-cli
- Statement type: https://in-toto.io/Statement/v1
- Predicate type: https://docs.pypi.org/attestations/publish/v1
- Subject name: oy_cli-0.4.0.tar.gz
- Subject digest: `74da5183455b9671ced69d1cdc389dab3d4a8504538fe38107fb1725986ccb56`
- Sigstore transparency entry: 1121695611
- Permalink: wagov-dtt/oy-cli@c9685bcfe0a4690e21cff4d72be6739fa9eee904
- Branch / Tag: refs/tags/v0.4.0
- Owner: https://github.com/wagov-dtt
- Access: public
- Token issuer: https://token.actions.githubusercontent.com
- Runner environment: github-hosted
- Publication workflow: release.yml@c9685bcfe0a4690e21cff4d72be6739fa9eee904
- Trigger event: release
### oy_cli-0.4.0-py3-none-any.whl (Python 3)

- Size: 53.7 kB
- Uploaded using Trusted Publishing: Yes
- Uploaded via: twine/6.1.0 CPython/3.13.7

| Algorithm | Hash digest |
|---|---|
| SHA256 | `d6e85c66801a462f355e37bf09f635a007153468b78cda90022aca038019afe6` |
| MD5 | `ebb00ba960779409438b4e9bba884601` |
| BLAKE2b-256 | `ee2480e6a3357f7c7daf863aa4235e5932ef90abf7fbc4c6ca2123c6b0c40216` |
#### Provenance

Attestation bundle for `oy_cli-0.4.0-py3-none-any.whl`:

- Publisher: `release.yml` on wagov-dtt/oy-cli
- Statement type: https://in-toto.io/Statement/v1
- Predicate type: https://docs.pypi.org/attestations/publish/v1
- Subject name: oy_cli-0.4.0-py3-none-any.whl
- Subject digest: `d6e85c66801a462f355e37bf09f635a007153468b78cda90022aca038019afe6`
- Sigstore transparency entry: 1121695686
- Permalink: wagov-dtt/oy-cli@c9685bcfe0a4690e21cff4d72be6739fa9eee904
- Branch / Tag: refs/tags/v0.4.0
- Owner: https://github.com/wagov-dtt
- Access: public
- Token issuer: https://token.actions.githubusercontent.com
- Runner environment: github-hosted
- Publication workflow: release.yml@c9685bcfe0a4690e21cff4d72be6739fa9eee904
- Trigger event: release