
DryDock


     ____             ____             _    
    |  _ \ _ __ _   _|  _ \  ___   ___| | __
    | | | | '__| | | | | | |/ _ \ / __| |/ /
    | |_| | |  | |_| | |_| | (_) | (__|   < 
    |____/|_|   \__, |____/ \___/ \___|_|\_\
                |___/                       

Local-first CLI coding agent. Chart your course. Execute with precision.

DryDock is a TUI coding assistant designed to work with local LLMs. It provides a conversational interface to your codebase — explore, modify, build, and test projects through natural language and a powerful set of tools.

[!IMPORTANT] DryDock is tested and optimized for Gemma 4 26B-A4B (26B MoE, 4B active parameters) served via vLLM. Other models and providers are supported (Mistral, OpenAI, Anthropic, Ollama) but are not as thoroughly tested. If you use a different model, expect to tune prompts and tool settings.

Tested Hardware + Model

Component      Spec
GPUs           2x NVIDIA RTX 4060 Ti 16GB
Model          Gemma-4-26B-A4B-it-AWQ-4bit
Serving        vLLM (Docker image vllm/vllm-openai:gemma4)
Performance    ~70 tok/s, 131K context, 0% timeouts
Active params  4B per token (MoE architecture — fast inference)

Install the model

# Download the model weights (requires HuggingFace access)
pip install huggingface-hub
huggingface-cli download casperhansen/gemma-4-26b-a4b-it-AWQ-4bit \
    --local-dir /path/to/models/Gemma-4-26B-A4B-it-AWQ-4bit

Start vLLM

docker run -d \
    --gpus all \
    --name gemma4 \
    -p 8000:8000 \
    -v /path/to/models:/models \
    --ipc=host \
    vllm/vllm-openai:gemma4 \
    --model /models/Gemma-4-26B-A4B-it-AWQ-4bit \
    --quantization compressed-tensors \
    --tensor-parallel-size 2 \
    --max-model-len 131072 \
    --max-num-seqs 2 \
    --gpu-memory-utilization 0.95 \
    --kv-cache-dtype fp8 \
    --served-model-name gemma4 \
    --trust-remote-code \
    --tool-call-parser gemma4 \
    --enable-auto-tool-choice \
    --attention-backend TRITON_ATTN

Key flags:

  • --tensor-parallel-size 2 — split across 2 GPUs
  • --kv-cache-dtype fp8 — reduce KV cache memory for longer contexts
  • --tool-call-parser gemma4 + --enable-auto-tool-choice — required for Gemma 4 tool calling
  • --max-num-seqs 2 — limit concurrent requests (prevents OOM on 16GB GPUs)

Verify the model is running:

curl http://localhost:8000/v1/models

Configure DryDock

# ~/.drydock/config.toml
[models.gemma4]
name = "gemma4"
provider = "generic-openai"
alias = "gemma4"
temperature = 0.2
thinking = "high"

[providers.generic-openai]
api_base = "http://localhost:8000/v1"
api_style = "openai"

Install

pip install drydock-cli

Or with uv:

uv tool install drydock-cli

Quick Start

cd your-project/
drydock

First run creates a config at ~/.drydock/config.toml and prompts for your provider setup. Then describe what you want in the input:

> Review the PRD and build the package

Features

  • TUI Interface: Full terminal UI with streaming output, tool approval, and session management.
  • Adaptive Thinking: Automatically adjusts reasoning depth per turn — full thinking for planning, fast mode for file writes.
  • Powerful Toolset: Read, write, and patch files. Execute shell commands. Search code with grep. Delegate to subagents.
  • Project-Aware: Scans project structure, loads AGENTS.md / DRYDOCK.md for context.
  • Subagent Delegation: Large tasks can be delegated to builder/planner/explorer subagents with isolated context.
  • Loop Detection: Advisory-only detection that nudges the model away from repetitive actions without blocking.
  • Conda/Pip Support: Auto-approves pip install, conda install, pytest, and other dev commands.
  • Bundled Skills: Ships with skills like create-presentation for PowerPoint generation.
  • MCP Support: Connect Model Context Protocol servers for extended capabilities.
  • Safety First: Tool executions require approval by default; pass --dangerously-skip-permissions to auto-approve everything.

Built-in Agents

  • default: Standard agent that requires approval for tool executions.
  • plan: Read-only agent for exploration and planning.
  • accept-edits: Auto-approves file edits only.
  • auto-approve: Auto-approves all tool executions.

drydock --agent plan

Gemma 4 Optimizations

DryDock includes several optimizations specifically tuned for Gemma 4:

  • Simplified prompt (gemma4.md): 20-line system prompt instead of 125 lines. Complex prompts cause Gemma 4 to plan instead of act.
  • Non-streaming mode: Streaming breaks Gemma 4 tool call JSON parsing. DryDock automatically disables streaming for Gemma 4.
  • Thinking token filtering: Gemma 4 leaks <|channel>thought<channel|> tokens into text output. DryDock strips these before storing in context.
  • Adaptive thinking: Full thinking for planning (turn 1) and error recovery. Thinking OFF for routine file writes — eliminates 30-120s hangs between files.
  • search_replace resilience: Auto-detects already-applied edits, infers missing file paths, fuzzy-matches whitespace differences.
  • Reduced tool set: Disables tools that confuse Gemma 4 (ask_user_question, task_create, etc.).
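The thinking-token filtering can be sketched as a regex pass over model output before it is stored in context. A minimal sketch, assuming the leaked marker is exactly the `<|channel>thought<channel|>` token described above; this is not DryDock's actual code.

```python
import re

# Marker token Gemma 4 is described as leaking into plain-text output
# (assumed literal format; adjust the pattern if the real token differs).
THOUGHT_TOKEN = re.compile(r"<\|channel>thought<channel\|>")


def strip_thinking_tokens(text: str) -> str:
    """Remove leaked thinking markers before the text is stored in context."""
    return THOUGHT_TOKEN.sub("", text).strip()


clean = strip_thinking_tokens("<|channel>thought<channel|>Here is the plan.")
print(clean)  # -> Here is the plan.
```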

Usage

Interactive Mode

drydock                        # Start interactive session
drydock "Fix the login bug"    # Start with a prompt
drydock --continue             # Resume last session
drydock --resume abc123        # Resume specific session

Keyboard shortcuts:

  • Ctrl+C — Cancel current operation (double-tap to quit)
  • Shift+Tab — Toggle auto-approve mode
  • Ctrl+O — Toggle tool output
  • Ctrl+G — Open external editor
  • @ — File path autocompletion
  • !command — Run shell command directly

Programmatic Mode

drydock --prompt "Analyze the codebase" --max-turns 5 --output json
drydock --dangerously-skip-permissions -p "Fix all lint errors"

Configuration

DryDock is configured via config.toml. It looks first in ./.drydock/config.toml, then ~/.drydock/config.toml.

API Key

drydock --setup                              # Interactive setup
export MISTRAL_API_KEY="your_key"            # Or set env var

Keys are saved to ~/.drydock/.env.

Consultant Model

Set a smarter model for the /consult command:

consultant_model = "gemini-2.5-pro"

The consultant provides read-only advice — it never calls tools. Use /consult <question> to ask it.

Custom Agents

Create agent configs in ~/.drydock/agents/:

# ~/.drydock/agents/redteam.toml
active_model = "devstral-2"
system_prompt_id = "redteam"
disabled_tools = ["search_replace", "write_file"]

Skills

DryDock discovers skills from:

  1. Custom paths in config.toml via skill_paths
  2. Project .drydock/skills/ or .agents/skills/
  3. Global ~/.drydock/skills/
  4. Bundled skills (shipped with the package)

MCP Servers

[[mcp_servers]]
name = "fetch_server"
transport = "stdio"
command = "uvx"
args = ["mcp-server-fetch"]

Testing

DryDock uses a shakedown harness (scripts/shakedown.py) that drives the real TUI via pexpect and judges on user-perceptible criteria — not tool-call counts.

# Single project test
python3 scripts/shakedown.py \
    --cwd /path/to/project \
    --prompt "review the PRD and build the package" \
    --pkg package_name

# Interactive back-and-forth test
python3 scripts/shakedown_interactive.py \
    --cwd /path/to/project \
    --pkg package_name

# Full regression suite (370 PRDs)
bash scripts/shakedown_suite.sh

Pass criteria: no write loops, no ignored interrupts, no search_replace cascades, package executes, session finishes within time budget.
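One of these criteria, "no write loops", can be checked mechanically from a session transcript. A minimal sketch under an assumed transcript format (a list of tool-call dicts); the harness's real schema and threshold may differ.

```python
from collections import Counter


def has_write_loop(transcript: list[dict], limit: int = 5) -> bool:
    """Flag a session that wrote the same file more than `limit` times."""
    writes = Counter(step["path"] for step in transcript
                     if step.get("tool") == "write_file")
    return any(count > limit for count in writes.values())


transcript = [{"tool": "write_file", "path": "pkg/__init__.py"}] * 6
print(has_write_loop(transcript))  # six writes to one file -> True
```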

Slash Commands

Type /help in the input for available commands. Create custom slash commands via the skills system.

Session Management

drydock --continue              # Continue last session
drydock --resume abc123         # Resume specific session
drydock --workdir /path/to/dir  # Set working directory

License

Copyright 2025 Mistral AI (original work)
Copyright 2026 DryDock contributors (modifications)

Licensed under the Apache License, Version 2.0. See LICENSE for details.

DryDock is a fork of mistralai/mistral-vibe (Apache 2.0). See NOTICE for attribution.


Download files

Download the file for your platform.

Source Distribution

drydock_cli-2.6.52.tar.gz (783.3 kB)

Built Distribution


drydock_cli-2.6.52-py3-none-any.whl (408.8 kB)

File details

Details for the file drydock_cli-2.6.52.tar.gz.

File metadata

  • Download URL: drydock_cli-2.6.52.tar.gz
  • Size: 783.3 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.12.13

File hashes

Hashes for drydock_cli-2.6.52.tar.gz
Algorithm Hash digest
SHA256 90ca102342d99472dc3ff92a47a87a8e3ad27ed023ef017407ddaf916387ca6d
MD5 1a1dfca99d92d5c519e8f16a9a80c643
BLAKE2b-256 7a3ddc686102a975c088825c33e77b5242aa35ca23cf0e2935d8d3e8cacce11d


File details

Details for the file drydock_cli-2.6.52-py3-none-any.whl.

File metadata

  • Download URL: drydock_cli-2.6.52-py3-none-any.whl
  • Size: 408.8 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.12.13

File hashes

Hashes for drydock_cli-2.6.52-py3-none-any.whl
Algorithm Hash digest
SHA256 bd5453f98a89d7147451a33873bc018d902610bd928cdfed89f71c703a0ba128
MD5 681e0c7af115bf7572976e003e5f1008
BLAKE2b-256 1aa982d09a2a1771fcf023bce58caa024a67d3043136c0326d57c55fc8994ff9

