A small, powerful CLI coding agent for open AI models

Swival

A coding agent for any model.

Swival is a CLI coding agent built to be practical, reliable, and easy to use. It works with frontier models, but its main goal is to be as reliable as possible with smaller models, including local ones. It is designed from the ground up to handle tight context windows and limited resources without falling apart.

It connects to LM Studio, HuggingFace Inference API, OpenRouter, or any OpenAI-compatible server (ollama, llama.cpp, mlx_lm.server, vLLM, etc.), sends your task, and runs an autonomous tool loop until it produces an answer. With LM Studio it auto-discovers your loaded model, so there's nothing to configure. A few thousand lines of Python, no framework.

Quickstart

LM Studio

  1. Install LM Studio and load a model with tool-calling support. Recommended first model: qwen3-coder-next (great quality/speed tradeoff on local hardware). Crank the context size as high as your hardware allows.
  2. Start the LM Studio server.
  3. Install Swival:
uv tool install swival
  4. Run:
swival "Refactor the error handling in src/api.py"

That's it. Swival finds the model, connects, and goes to work.

HuggingFace

export HF_TOKEN=hf_...
uv tool install swival
swival "Refactor the error handling in src/api.py" \
    --provider huggingface --model zai-org/GLM-5

You can also point it at a dedicated endpoint with --base-url and --api-key.
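For instance, a dedicated endpoint might be wired up like this (the URL and key below are placeholders, not real values, and the exact URL shape your endpoint expects may differ):

```shell
# Hypothetical dedicated inference endpoint; URL and key are placeholders.
swival "Refactor the error handling in src/api.py" \
    --provider huggingface \
    --base-url https://my-endpoint.example.com \
    --api-key hf_xxx
```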

OpenRouter

export OPENROUTER_API_KEY=sk_or_...
uv tool install swival
swival "Refactor the error handling in src/api.py" \
    --provider openrouter --model z-ai/glm-5

Generic (OpenAI-compatible)

swival "Refactor the error handling in src/api.py" \
    --provider generic \
    --base-url http://127.0.0.1:8080 \
    --model my-model

Works with ollama, llama.cpp, mlx_lm.server, vLLM, and anything else that speaks the OpenAI chat completions protocol. No API key required for local servers.
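As one concrete case, pointing Swival at a local ollama server could look like this. The port is ollama's conventional default; the model name is an assumption about what you have pulled locally:

```shell
# ollama serves an OpenAI-compatible API on port 11434 by default.
swival "Add docstrings to src/utils.py" \
    --provider generic \
    --base-url http://127.0.0.1:11434 \
    --model qwen2.5-coder
```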

Interactive sessions

swival --repl

The REPL carries conversation history across questions, which makes it good for exploratory work and longer tasks.

Updates and uninstall

uv tool upgrade swival    # update
uv tool uninstall swival  # remove

What makes it different

Reliable with small models. Context management is one of Swival's strengths. It keeps things clean and focused, which is especially important when you are working with models that have tight context windows. Graduated compaction, persistent thinking notes, and a todo checklist all survive context resets, so the agent doesn't lose track of multi-step plans even under pressure.

Your models, your way. Works with LM Studio, HuggingFace Inference API, OpenRouter, and any OpenAI-compatible server. With LM Studio, it auto-discovers whatever model you have loaded. With HuggingFace or OpenRouter, point it at any supported model. With the generic provider, connect to ollama, llama.cpp, mlx_lm.server, vLLM, or any other compatible server. You pick the model and the infrastructure.

Review loop and LLM-as-a-judge. Swival has a configurable review loop that can run external reviewer scripts or use a built-in LLM-as-judge to automatically evaluate and retry agent output. Good for quality assurance on tasks that matter.
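As an illustration only, a reviewer check could look something like the function below. The actual reviewer-script interface (how the answer is passed in, what the exit status means) is defined in the Reviews documentation; everything here is an assumption:

```shell
# review_answer is a hypothetical stand-in for an external reviewer:
# it takes the agent's answer as an argument and uses its return
# status to signal accept (0) or retry (nonzero).
review_answer() {
  case "$1" in
    *TODO*) echo "rejected: answer still contains TODO markers" >&2
            return 1 ;;
    *)      return 0 ;;
  esac
}
```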

Built for benchmarking. Pass --report report.json and Swival writes a machine-readable evaluation report with per-call LLM timing, tool success/failure counts, context compaction events, and guardrail interventions. Useful for comparing models, settings, skills, and MCP servers systematically on real coding tasks.
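A model comparison might look like running the same task twice with different providers and a report file per run (the report file names are arbitrary; the model and URL values are placeholders):

```shell
# Same task, two backends, one machine-readable report each.
swival "Fix the failing test" --provider openrouter \
    --model z-ai/glm-5 --report glm.json
swival "Fix the failing test" --provider generic \
    --base-url http://127.0.0.1:8080 --model my-model --report local.json
```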

Skills and MCP. Extend the agent with SKILL.md-based skills for reusable workflows, and connect to external tools via the Model Context Protocol.

Small enough to read and hack. A few thousand lines of Python across a handful of files, with no framework underneath. Read the whole agent in an afternoon. If something doesn't work the way you want, change it.

CLI-native. stdout is exclusively the final answer. All diagnostics go to stderr. Pipe Swival's output straight into another command or a file.
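For example, capturing the answer and the log separately works with plain redirection, since diagnostics never touch stdout (file names here are arbitrary):

```shell
# The final answer lands in answer.md; progress and diagnostics go to agent.log.
swival "Summarize the module layout of src/" > answer.md 2> agent.log
```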

Documentation

Full documentation is available at swival.dev.

  • Getting Started -- installation, first run, what happens under the hood
  • Usage -- one-shot mode, REPL mode, CLI flags, piping, exit codes
  • Tools -- what the agent can do: file ops, search, editing, web fetching, thinking, task tracking, command execution
  • Safety and Sandboxing -- path resolution, symlink protection, command whitelisting, YOLO mode
  • Skills -- creating and using SKILL.md-based agent skills
  • Customization -- config files, project instructions, system prompt overrides, tuning parameters
  • Providers -- LM Studio, HuggingFace, OpenRouter, and generic OpenAI-compatible server configuration
  • MCP -- connecting external tool servers via the Model Context Protocol
  • Reports -- JSON reports for benchmarking and evaluation
  • Reviews -- external reviewer scripts for automated QA and LLM-as-judge evaluation
  • Using Swival with AgentFS -- copy-on-write filesystem sandboxing for safe agent runs
