
lite-cc

A minimal, multi-model coding agent runtime for the terminal.

Quick Start · How It Works · Plugins & Skills · Configuration · Safety


litecc demo

lite-cc (litecc) is a lightweight, provider-agnostic coding agent for the terminal. It connects to any LLM via LiteLLM, runs an autonomous tool loop, and extends its capabilities through plugins and skills.

Key Features

  • Multi-model — OpenAI, Anthropic, OCI, Gemini, Groq, Ollama, and any provider LiteLLM supports.
  • Autonomous tool loop — Reasons, calls tools, observes results, and iterates until the task is done.
  • Plugin & skill system — Load plugins to inject domain knowledge. Skills are loaded on demand to keep context lean.
  • Safe by default — Dangerous commands are blocked and file access is scoped to the project directory. The agent runs fully autonomously, with no confirmation prompts.
  • Structured output — Colored, timestamped progress logs show exactly what the agent is doing.

Quick Start

# Install
git clone https://github.com/key4ng/lite-cc.git
cd lite-cc
pip install -e .

# Run
litecc run "list all Python files and describe what each one does"

# Run with the example plugin
litecc run "analyze cc/agent.py" --plugin-dir examples/code-analyst

Output:

14:32:05 [litecc]  Using model: gpt-5.2
14:32:05 [litecc]  Starting task...
14:32:06 [tool]    list_files: **/*.py
14:32:07 [tool]    read_file: cc/agent.py
14:32:08 [gpt-5.2] I'll describe each file...
14:32:09 [litecc]  Here are the Python files...

How It Works

litecc run "fix the failing tests"
        │
        ▼
  Load config, plugins, skills
        │
        ▼
  Build system prompt + tool definitions
        │
        ▼
  ┌─ Agent Loop ──────────────────────────┐
  │  1. Send messages + tools to LLM      │
  │  2. LLM returns tool calls            │
  │     → execute safely → append results │
  │  3. LLM returns text → done           │
  └───────────────────────────────────────┘

The loop runs until the model produces a final answer or hits the max iteration limit.
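The loop above can be sketched in a few lines of Python. This is an illustrative skeleton, not the actual implementation in `cc/agent.py`; `call_llm` and `execute_tool` are hypothetical stand-ins for the LiteLLM wrapper and the tool dispatcher.

```python
# Minimal sketch of the agent loop described above (names are illustrative).
def run_agent(call_llm, execute_tool, messages, tools, max_iterations=50):
    for _ in range(max_iterations):
        reply = call_llm(messages, tools)           # 1. send messages + tools
        if reply.get("tool_calls"):                 # 2. model requested tools
            messages.append(reply)
            for call in reply["tool_calls"]:        #    execute safely,
                result = execute_tool(call)         #    append results
                messages.append({"role": "tool", "content": result})
        else:                                       # 3. plain text -> done
            return reply["content"]
    return "Stopped: hit max iteration limit."
```

The key property is that tool results are appended to the conversation, so the model sees its own observations on the next turn.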

Usage

# Basic task
litecc run "fix the failing tests"

# Choose a model
litecc run "refactor this module" --model anthropic/claude-3-sonnet-20240229

# Load plugins
litecc run "triage the latest ticket" --plugin-dir ~/my-plugin
litecc run "check health" --plugin-dir ~/plugin-a --plugin-dir ~/plugin-b

# Different project directory
litecc run "explain the architecture" --project-dir ~/other-repo

# Verbose output (show tool results, full reasoning)
litecc run "explore the codebase" -v

# Limit iterations
litecc run "explore the codebase" --max-iterations 20

CLI Reference

Usage: litecc run [OPTIONS] PROMPT

Options:
  --plugin-dir TEXT      Plugin directory (repeatable)
  --model TEXT           LiteLLM model string
  --max-iterations INT   Max tool loop iterations (default: 50)
  --project-dir TEXT     Working directory (default: cwd)
  -v, --verbose          Show detailed tool output
  --help                 Show this message and exit
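
The architecture section notes that the entry point is a Click CLI; an interface matching the reference above could be declared roughly like this (a sketch — the actual `cc/cli.py` may differ in names and wiring):

```python
import click

# Sketch of the CLI surface documented above; the echo stands in for
# actually launching the agent.
@click.command()
@click.argument("prompt")
@click.option("--plugin-dir", "plugin_dirs", multiple=True, help="Plugin directory (repeatable)")
@click.option("--model", default=None, help="LiteLLM model string")
@click.option("--max-iterations", type=int, default=50, show_default=True, help="Max tool loop iterations")
@click.option("--project-dir", default=".", help="Working directory")
@click.option("-v", "--verbose", is_flag=True, help="Show detailed tool output")
def run(prompt, plugin_dirs, model, max_iterations, project_dir, verbose):
    """Run the agent on PROMPT."""
    click.echo(f"task={prompt!r} model={model} iterations={max_iterations}")
```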

Plugins & Skills

lite-cc uses a plugin format compatible with Claude Code. Plugins provide domain knowledge and reusable workflows.

Plugin Structure

my-plugin/
  .claude-plugin/
    plugin.json          # Manifest (required)
  CLAUDE.md              # Instructions injected into system prompt
  pipeline/
    deploy-check/
      SKILL.md           # Skill with YAML frontmatter
  commands/
    triage.md            # Command-style skill

Skill Format

---
name: deploy-check
description: Verify a deployment is healthy by checking pod status and logs.
---

# Deploy Check

## Steps

1. Check pod status:
   ```bash
   kubectl get pods -n <NAMESPACE> -o wide
   ```

2. Review recent events and summarize findings.
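
The `name` and `description` fields in the frontmatter are what gets indexed at startup. Extracting them takes only a few lines; this is a sketch for the simple `key: value` case shown above (real plugin code may use a full YAML parser):

```python
# Sketch: pull the frontmatter fields a skill index needs out of SKILL.md.
def parse_skill_frontmatter(text):
    lines = text.splitlines()
    if not lines or lines[0].strip() != "---":
        return {}                       # no frontmatter block
    meta = {}
    for line in lines[1:]:
        if line.strip() == "---":       # closing delimiter
            break
        if ":" in line:
            key, _, value = line.partition(":")
            meta[key.strip()] = value.strip()
    return meta
```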

How Plugin Loading Works

  1. On startup, --plugin-dir directories are scanned for .claude-plugin/plugin.json
  2. CLAUDE.md is injected into the system prompt
  3. Skills are indexed by name and description — the model sees the list but not the full content
  4. When needed, the model calls use_skill("deploy-check") to load the full instructions
  5. The skill content is injected into the conversation and the model follows the steps

This keeps context lean — only the skills actually needed are loaded.
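
The lazy-loading scheme can be sketched as a small index: the model is shown only names and descriptions, and `use_skill` swaps in the full body on demand. `SKILLS` here is a hypothetical in-memory index built at startup; the real plugin code may structure this differently.

```python
# Sketch of the lazy skill-loading scheme described above.
SKILLS = {
    "deploy-check": {
        "description": "Verify a deployment is healthy.",
        "body": "# Deploy Check\n\n## Steps\n1. Check pod status...",
    },
}

def skill_listing():
    """What the model sees up front: names and descriptions only."""
    return {name: s["description"] for name, s in SKILLS.items()}

def use_skill(name):
    """Called by the model on demand; returns the full instructions."""
    skill = SKILLS.get(name)
    return skill["body"] if skill else f"Unknown skill: {name}"
```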

Configuration

Config is resolved in order of precedence (highest wins):

Priority  Source                 Example
--------  ---------------------  ----------------------
1         CLI flags              --model openai/gpt-4o
2         Environment variables  CC_MODEL=openai/gpt-4o
3         Config file            ~/.cc/config.yaml
4         Defaults               oci/openai.gpt-5.2

Environment Variables

Variable               Default             Description
---------------------  ------------------  ---------------------------------------------
CC_MODEL               oci/openai.gpt-5.2  LiteLLM model identifier
CC_OCI_REGION          us-chicago-1        OCI region for inference
CC_OCI_COMPARTMENT     (none)              OCI compartment OCID (required for oci/ models)
CC_OCI_CONFIG_PROFILE  DEFAULT             OCI config profile
CC_MAX_ITERATIONS      50                  Max agent loop iterations
CC_TIMEOUT             120                 Per-command timeout (seconds)

YAML config example

Create ~/.cc/config.yaml:

model: oci/openai.gpt-5.2
oci_region: us-chicago-1
oci_compartment: ocid1.tenancy.oc1..aaaaaaaexample
max_iterations: 50
timeout: 120
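
The precedence order (CLI > environment > file > defaults) boils down to "return the first value found". A sketch of such a resolver, with illustrative names (the real `cc/config.py` may differ):

```python
# Sketch of layered config resolution: CLI flags win over env vars,
# which win over the config file, which wins over built-in defaults.
DEFAULTS = {"model": "oci/openai.gpt-5.2", "max_iterations": 50, "timeout": 120}

def resolve(key, cli_args, env, file_cfg):
    """Return the highest-precedence value set for `key`."""
    env_key = "CC_" + key.upper()
    for value in (cli_args.get(key), env.get(env_key), file_cfg.get(key)):
        if value is not None:
            return value
    return DEFAULTS.get(key)
```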

Supported Models

Any LiteLLM provider works out of the box:

Provider   Model Example                       Auth
---------  ----------------------------------  ---------------------------
OpenAI     openai/gpt-4o                       OPENAI_API_KEY
Anthropic  anthropic/claude-3-sonnet-20240229  ANTHROPIC_API_KEY
OCI GenAI  oci/openai.gpt-5.2                  ~/.oci/config session token
Gemini     gemini/gemini-pro                   GEMINI_API_KEY
Groq       groq/llama3-70b-8192                GROQ_API_KEY
Ollama     ollama/llama3                       Local server

Built-in Tools

Tool        Description
----------  ------------------------------------------------------------------
bash        Shell execution with safety checks, output truncation, and timeout
read_file   Read files with optional line range (offset, limit)
write_file  Create or overwrite files (auto-creates parent dirs)
list_files  Glob pattern search (e.g., **/*.py)
grep        Recursive regex search across files
use_skill   Load a skill's instructions into the conversation

All file tools are scoped to the project directory. Bash commands are checked against a deny list before execution.
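
Tools like these are typically exposed to the model as function-calling definitions. The shape below is the OpenAI-style tool format that LiteLLM accepts; the exact schemas litecc sends are internal details, so treat this as an illustrative example for `read_file`:

```python
# Sketch: an OpenAI-style tool definition for read_file, as it might be
# passed to the model via LiteLLM. Field values are illustrative.
READ_FILE_TOOL = {
    "type": "function",
    "function": {
        "name": "read_file",
        "description": "Read a file, optionally restricted to a line range.",
        "parameters": {
            "type": "object",
            "properties": {
                "path": {"type": "string", "description": "Path relative to the project dir"},
                "offset": {"type": "integer", "description": "First line to read"},
                "limit": {"type": "integer", "description": "Max lines to return"},
            },
            "required": ["path"],
        },
    },
}
```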

Safety

lite-cc enforces safety guardrails at the tool execution layer — no user prompts, just deny and report.

Blocked Commands

Category              Patterns
--------------------  ------------------------------
File deletion         rm, rmdir, unlink
Privilege escalation  sudo, su, doas
System control        shutdown, reboot, halt
Disk operations       mkfs, fdisk, dd
Process control       kill, killall, pkill
Destructive git       git push --force, git clean
Remote code exec      curl ... | sh, wget ... | bash
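
A deny list like this is usually just a set of regexes checked before execution. A sketch, with the pattern list abbreviated (the real list in `cc/safety.py` is more thorough):

```python
import re

# Sketch of a command deny-list check; patterns are a small subset of
# the categories listed above.
DENY_PATTERNS = [
    r"\brm\b", r"\bsudo\b", r"\bshutdown\b", r"\bmkfs\b",
    r"\bkill(all)?\b", r"git\s+push\s+--force",
    r"(curl|wget)\b.*\|\s*(sh|bash)\b",
]

def is_blocked(command):
    """Return True if the command matches any denied pattern."""
    return any(re.search(p, command) for p in DENY_PATTERNS)
```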

Path Restrictions

  • All file operations resolve inside the project directory
  • Path traversal (../../etc/passwd) is detected and blocked
  • Sensitive paths blocked: ~/.ssh, ~/.aws, /etc, /private
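
The traversal check comes down to resolving the candidate path and requiring it to stay under the project root. A minimal sketch (the real check in `cc/safety.py` additionally blocks the sensitive paths listed above):

```python
from pathlib import Path

# Sketch of the path-scoping check: resolve symlinks and `..` segments,
# then require the target to remain inside the project directory.
def is_path_allowed(project_dir, candidate):
    root = Path(project_dir).resolve()
    target = (root / candidate).resolve()
    return target == root or root in target.parents
```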

Output Limits

  • Command output is truncated at 2000 lines or 100 KB, whichever limit is hit first
  • 120s timeout (configurable via CC_TIMEOUT)

The safety layer is a guardrail, not a security boundary. It prevents common destructive operations in an autonomous loop.

Architecture

cc/
  cli.py              # Click CLI entry point
  config.py           # Layered config (CLI > env > yaml > defaults)
  agent.py            # Core tool loop with progress logging
  llm.py              # LiteLLM wrapper with OCI auth
  safety.py           # Command deny list + path checks
  output.py           # Colored terminal output
  tools/              # Built-in tool implementations
  plugins/            # Plugin discovery + skill indexing

Development

pip install -e .        # Install
pytest -v               # Run all 35 tests
pytest -k "safety" -v   # Run tests by pattern

License

MIT

Project details

Download files

Source distribution: lite_cc-0.1.0.tar.gz (23.2 kB)
Built distribution:  lite_cc-0.1.0-py3-none-any.whl (21.5 kB)

Both files were uploaded via Trusted Publishing (twine/6.1.0, CPython/3.13.7).
Attestation bundles were published by the release.yml workflow on key4ng/lite-cc;
values reflect the state when the release was signed and may no longer be current.

File hashes

lite_cc-0.1.0.tar.gz
  SHA256       fd8edbabccf47c20a614e9ffa7e9504848cf2ec362923a2426c467ade5987f28
  MD5          27f50b4f934b09d83382f68685a3e406
  BLAKE2b-256  8122c556e80a85a290fe18ae92d20a20bd044564beffe1ef0e25511655897d27

lite_cc-0.1.0-py3-none-any.whl
  SHA256       2fd6e62e619a6c7e36871896b045665dba7fb2a41140cda9f8c70f67a6f25b82
  MD5          d84a71e332e305bfdef53059cb169bcc
  BLAKE2b-256  85eaff15010fab629e53a36c506b4bff59254e9548c5e75685e325a1840e6947
