Lightweight multi-model coding agent CLI

lite-cc

A minimal, multi-model coding agent runtime for the terminal.

Quick Start · How It Works · Plugins & Skills · Configuration · Safety


litecc demo

lite-cc (litecc) is a lightweight, provider-agnostic coding agent for the terminal. It connects to any LLM via LiteLLM, runs an autonomous tool loop, and extends its capabilities through plugins and skills.

Key Features

  • Multi-model — OpenAI, Anthropic, OCI, Gemini, Groq, Ollama, and any provider LiteLLM supports.
  • Autonomous tool loop — Reasons, calls tools, observes results, and iterates until the task is done.
  • Built-in tools — bash, read_file, write_file, list_files, grep, and use_skill out of the box.
  • Claude Code plugin compatible — Load plugins and skills using the same format as Claude Code. Skills are loaded on demand to keep context lean.
  • Safe by default — Dangerous commands are blocked and file access is scoped to the project directory. The agent runs fully autonomously, with no confirmation prompts.
  • Structured output — Colored, timestamped progress logs show exactly what the agent is doing.

Quick Start

# Install from PyPI
pip install lite-cc

# Or with uv
uv pip install lite-cc

# Or install from source
git clone https://github.com/key4ng/lite-cc.git
cd lite-cc && pip install -e .

# Run
litecc run "list all Python files and describe what each one does"

# Run with a plugin
litecc run "analyze cc/agent.py" --plugin-dir examples/code-analyst

Output:

14:32:05 [litecc]  Using model: grok-4-1-fast
14:32:05 [litecc]  Starting task...
14:32:06 [tool]    list_files: **/*.py
14:32:07 [tool]    read_file: cc/agent.py
14:32:08 [grok-4-1-fast] I'll describe each file...
14:32:09 [litecc]  Here are the Python files...

How It Works

litecc run "fix the failing tests"
        │
        ▼
  Load config, plugins, skills
        │
        ▼
  Build system prompt + tool definitions
        │
        ▼
  ┌─ Agent Loop ──────────────────────────┐
  │  1. Send messages + tools to LLM      │
  │  2. LLM returns tool calls            │
  │     → execute safely → append results │
  │  3. LLM returns text → done           │
  └───────────────────────────────────────┘

The loop runs until the model produces a final answer or hits the max iteration limit.
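The loop above can be sketched in a few lines of Python, with the LLM call abstracted behind a callable (in lite-cc this is a LiteLLM completion call; every name below is illustrative, not the actual cc/agent.py API):

```python
# Minimal sketch of the agent loop: send messages + tools, execute any tool
# calls the model returns, stop when the model returns plain text. `complete`
# stands in for the LLM call; `tools` maps tool names to Python callables.

def run_agent(complete, tools, prompt, max_iterations=50):
    messages = [{"role": "user", "content": prompt}]
    for _ in range(max_iterations):
        reply = complete(messages)                 # 1. send messages + tools to the LLM
        if reply.get("tool_calls"):                # 2. LLM returns tool calls
            messages.append(reply)
            for call in reply["tool_calls"]:
                result = tools[call["name"]](**call["args"])   # execute safely
                messages.append({"role": "tool", "name": call["name"],
                                 "content": str(result)})      # append results
        else:                                      # 3. LLM returns text -> done
            return reply["content"]
    return "Stopped: max iterations reached."
```

Because the model is just a callable here, the loop can be exercised with a stub that requests one tool call and then answers.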

Usage

# Basic task
litecc run "fix the failing tests"

# Choose a model
litecc run "refactor this module" --model anthropic/claude-3-sonnet-20240229

# Load plugins
litecc run "triage the latest ticket" --plugin-dir ~/my-plugin
litecc run "check health" --plugin-dir ~/plugin-a --plugin-dir ~/plugin-b

# Different project directory
litecc run "explain the architecture" --project-dir ~/other-repo

# Verbose output (show tool results, full reasoning)
litecc run "explore the codebase" -v

# Limit iterations
litecc run "explore the codebase" --max-iterations 20

CLI Reference
Usage: litecc run [OPTIONS] PROMPT

Options:
  --plugin-dir TEXT      Plugin directory (repeatable)
  --model TEXT           LiteLLM model string
  --max-iterations INT   Max tool loop iterations (default: 50)
  --project-dir TEXT     Working directory (default: cwd)
  -v, --verbose          Show detailed tool output
  --help                 Show this message and exit
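As a self-contained sketch of this flag surface, here is the same parser built with stdlib argparse (the real cc/cli.py uses Click; the option names mirror the reference above):

```python
# Sketch of the `litecc run` flag surface using stdlib argparse.
# Note --plugin-dir uses action="append" so it is repeatable.
import argparse

def build_parser():
    p = argparse.ArgumentParser(prog="litecc run")
    p.add_argument("prompt")
    p.add_argument("--plugin-dir", action="append", default=[],
                   help="Plugin directory (repeatable)")
    p.add_argument("--model", help="LiteLLM model string")
    p.add_argument("--max-iterations", type=int, default=50,
                   help="Max tool loop iterations")
    p.add_argument("--project-dir", default=".", help="Working directory")
    p.add_argument("-v", "--verbose", action="store_true",
                   help="Show detailed tool output")
    return p

args = build_parser().parse_args(
    ["fix the tests", "--plugin-dir", "a", "--plugin-dir", "b"])
```

Repeating `--plugin-dir` accumulates directories into a list, matching the "repeatable" behavior documented above.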

Plugins & Skills

lite-cc uses a plugin format compatible with Claude Code. Plugins provide domain knowledge and reusable workflows.

Plugin Structure

my-plugin/
  .claude-plugin/
    plugin.json          # Manifest (required)
  CLAUDE.md              # Instructions injected into system prompt
  pipeline/
    deploy-check/
      SKILL.md           # Skill with YAML frontmatter
  commands/
    triage.md            # Command-style skill

Skill Format

---
name: deploy-check
description: Verify a deployment is healthy by checking pod status and logs.
---

# Deploy Check

## Steps

1. Check pod status:
   ```bash
   kubectl get pods -n <NAMESPACE> -o wide
   ```

2. Review recent events and summarize findings.

How It Works

  1. On startup, --plugin-dir directories are scanned for .claude-plugin/plugin.json
  2. CLAUDE.md is injected into the system prompt
  3. Skills are indexed by name and description — the model sees the list but not the full content
  4. When needed, the model calls use_skill("deploy-check") to load the full instructions
  5. The skill content is injected into the conversation and the model follows the steps

This keeps context lean — only the skills actually needed are loaded.
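The lazy-loading scheme above can be sketched as a small index class. The frontmatter fields follow the skill format shown earlier; the class and helper names are illustrative, not the actual cc/plugins API:

```python
# Sketch of lazy skill loading: index only name/description from SKILL.md
# frontmatter; load the full body only when the model calls use_skill.
from pathlib import Path

def parse_skill(path):
    text = Path(path).read_text()
    # Split "---\n<yaml>\n---\n<body>" into frontmatter and body.
    _, frontmatter, body = text.split("---", 2)
    meta = dict(line.split(":", 1) for line in frontmatter.strip().splitlines())
    meta = {k.strip(): v.strip() for k, v in meta.items()}
    return meta["name"], meta["description"], body.strip()

class SkillIndex:
    def __init__(self):
        self._skills = {}                     # name -> (description, path)

    def add(self, path):
        name, description, _ = parse_skill(path)
        self._skills[name] = (description, path)

    def listing(self):
        # What the model sees in the system prompt: names + descriptions only.
        return {name: desc for name, (desc, _) in self._skills.items()}

    def use_skill(self, name):
        # Only now is the full skill content read and returned.
        _, path = self._skills[name]
        return parse_skill(path)[2]
```

The index keeps the system prompt small: the full markdown body stays on disk until `use_skill` is actually called.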

Configuration

Config is resolved in order of precedence (highest wins):

| Priority | Source | Example |
|----------|--------|---------|
| 1 | CLI flags | --model openai/gpt-4o |
| 2 | Environment variables | CC_MODEL=openai/gpt-4o |
| 3 | Config file | ~/.cc/config.yaml |
| 4 | Defaults | oci/xai.grok-4-1-fast-reasoning |
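The precedence chain can be sketched as a first-non-None lookup. Key and variable names follow the tables in this section; the function itself is illustrative, not the actual cc/config.py API:

```python
# Sketch of layered config resolution: the first source that provides a
# value wins, checked from highest precedence (CLI) to lowest (defaults).
import os

DEFAULTS = {"model": "oci/xai.grok-4-1-fast-reasoning", "max_iterations": 50}

def resolve(key, cli_args, file_config):
    env_key = "CC_" + key.upper()              # e.g. model -> CC_MODEL
    for value in (cli_args.get(key),           # 1. CLI flag
                  os.environ.get(env_key),     # 2. environment variable
                  file_config.get(key),        # 3. ~/.cc/config.yaml contents
                  DEFAULTS.get(key)):          # 4. built-in default
        if value is not None:
            return value
```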

Environment Variables

| Variable | Default | Description |
|----------|---------|-------------|
| CC_MODEL | oci/xai.grok-4-1-fast-reasoning | LiteLLM model identifier |
| CC_OCI_REGION | us-chicago-1 | OCI region for inference |
| CC_OCI_COMPARTMENT | (none) | OCI compartment OCID (required for oci/ models) |
| CC_OCI_CONFIG_PROFILE | DEFAULT | OCI config profile |
| CC_MAX_ITERATIONS | 50 | Max agent loop iterations |
| CC_TIMEOUT | 120 | Per-command timeout (seconds) |

YAML config example

Create ~/.cc/config.yaml:

model: oci/xai.grok-4-1-fast-reasoning
oci_region: us-chicago-1
oci_compartment: ocid1.tenancy.oc1..aaaaaaaexample
max_iterations: 50
timeout: 120

Supported Models

Any LiteLLM provider works out of the box:

| Provider | Model Example | Auth |
|----------|---------------|------|
| OpenAI | openai/gpt-4o | OPENAI_API_KEY |
| Anthropic | anthropic/claude-3-sonnet-20240229 | ANTHROPIC_API_KEY |
| OCI GenAI | oci/xai.grok-4-1-fast-reasoning | ~/.oci/config session token |
| Gemini | gemini/gemini-pro | GEMINI_API_KEY |
| Groq | groq/llama3-70b-8192 | GROQ_API_KEY |
| Ollama | ollama/llama3 | Local server |

Built-in Tools

| Tool | Description |
|------|-------------|
| bash | Shell execution with safety checks, output truncation, and timeout |
| read_file | Read files with optional line range (offset, limit) |
| write_file | Create or overwrite files (auto-creates parent dirs) |
| list_files | Glob pattern search (e.g., **/*.py) |
| grep | Recursive regex search across files |
| use_skill | Load a skill's instructions into the conversation |

All file tools are scoped to the project directory. Bash commands are checked against a deny list before execution.

Safety

lite-cc enforces safety guardrails at the tool execution layer — no user prompts, just deny and report.

Blocked Commands

| Category | Patterns |
|----------|----------|
| File deletion | rm, rmdir, unlink |
| Privilege escalation | sudo, su, doas |
| System control | shutdown, reboot, halt |
| Disk operations | mkfs, fdisk, dd |
| Process control | kill, killall, pkill |
| Destructive git | git push --force, git clean |
| Remote code exec | curl ... \| sh, wget ... \| bash |
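A deny list over these categories can be sketched as a handful of regular expressions. The exact matching in cc/safety.py may differ; the patterns below are illustrative:

```python
# Sketch of a command deny list: block a command if any pattern matches.
import re

DENY_PATTERNS = [
    r"^\s*(rm|rmdir|unlink)\b",           # file deletion
    r"^\s*(sudo|su|doas)\b",              # privilege escalation
    r"^\s*(shutdown|reboot|halt)\b",      # system control
    r"^\s*(mkfs|fdisk|dd)\b",             # disk operations
    r"^\s*(kill|killall|pkill)\b",        # process control
    r"git\s+push\s+--force|git\s+clean",  # destructive git
    r"(curl|wget)\b.*\|\s*(sh|bash)\b",   # remote code execution
]

def is_blocked(command):
    return any(re.search(p, command) for p in DENY_PATTERNS)
```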

Path Restrictions

  • All file operations resolve inside the project directory
  • Path traversal (../../etc/passwd) is detected and blocked
  • Sensitive paths blocked: ~/.ssh, ~/.aws, /etc, /private
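The path checks above amount to resolving the requested path and requiring it to stay inside the project root. A sketch, with the sensitive-path list taken from this section (the function name and error handling are illustrative, not the actual cc/safety.py logic):

```python
# Sketch of project-directory scoping: resolve the path (which collapses
# any ../ traversal) and require it to remain under the project root.
from pathlib import Path

SENSITIVE = [Path.home() / ".ssh", Path.home() / ".aws",
             Path("/etc"), Path("/private")]

def check_path(project_dir, requested):
    root = Path(project_dir).resolve()
    resolved = (root / requested).resolve()
    if not resolved.is_relative_to(root):          # catches ../../etc/passwd
        raise PermissionError(f"outside project dir: {resolved}")
    for s in SENSITIVE:
        if resolved.is_relative_to(s):
            raise PermissionError(f"sensitive path: {resolved}")
    return resolved
```

Resolving before comparing is the key step: `../../etc/passwd` only reveals its real target after `resolve()`, so a plain string-prefix check would miss it.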

Output Limits

  • 2000 lines or 100KB per command (whichever is first)
  • 120s timeout (configurable via CC_TIMEOUT)
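These limits can be sketched as a thin wrapper around subprocess with a hard timeout and two-stage truncation. The limits mirror the ones above; the function name is illustrative:

```python
# Sketch of bounded command execution: enforce a timeout, then truncate
# output at whichever limit (lines or bytes) is hit first.
import subprocess

MAX_LINES, MAX_BYTES = 2000, 100_000

def run_bounded(command, timeout=120):
    try:
        proc = subprocess.run(command, shell=True, capture_output=True,
                              text=True, timeout=timeout)
    except subprocess.TimeoutExpired:
        return f"Error: command timed out after {timeout}s"
    out = proc.stdout
    lines = out.splitlines()
    if len(lines) > MAX_LINES:                      # line limit
        out = "\n".join(lines[:MAX_LINES]) + "\n... [truncated]"
    if len(out.encode()) > MAX_BYTES:               # byte limit
        out = out.encode()[:MAX_BYTES].decode(errors="ignore") + "\n... [truncated]"
    return out
```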

The safety layer is a guardrail, not a security boundary. It prevents common destructive operations in an autonomous loop.

Architecture

cc/
  cli.py              # Click CLI entry point
  config.py           # Layered config (CLI > env > yaml > defaults)
  agent.py            # Core tool loop with progress logging
  llm.py              # LiteLLM wrapper with OCI auth
  safety.py           # Command deny list + path checks
  output.py           # Colored terminal output
  tools/              # Built-in tool implementations
  plugins/            # Plugin discovery + skill indexing

Development

pip install -e .        # Install
pytest -v               # Run all 35 tests
pytest -k "safety" -v   # Run tests by pattern

License

MIT
