# Mita Code
A local-first, terminal-native agentic coding assistant that runs LLMs entirely on your machine via Ollama. No API keys. No cloud. No telemetry.
## Features
- 100% Local — All inference runs on your hardware via Ollama. Your code never leaves your machine.
- Agentic Tool Loop — Read/write files, run shell commands, git operations — with confirmation for destructive actions.
- Hardware-Aware Model Recommendations — Detects your RAM, VRAM, and GPU to recommend models that will actually run well.
- MCP Plugin System — Compatible with the existing Model Context Protocol ecosystem (stdio and SSE transport).
- Skills — Reusable, parameterized prompt templates stored as Markdown files (e.g., `/commit`, `/review`).
- Layered Memory — `MITA.md` files at global, project, and directory scope are automatically injected into context.
- Layered Config — TOML configuration cascades from global to project level.
- Hooks — Lifecycle shell commands that fire on events like file writes or tool calls.
- Codebase Indexing — Local vector search (LanceDB + Tree-sitter) for RAG over your codebase.
- Unix Philosophy — Composable, pipeable, scriptable.
## Requirements
- Python 3.11+
- Ollama installed and running
## Installation

```bash
pipx install mita-code
```

Or for development:

```bash
git clone https://github.com/jtdub/mita-code.git
cd mita-code
poetry install
```
## Quick Start

```bash
# Start Ollama (if not already running)
ollama serve

# Pull a coding model
mita models pull qwen2.5-coder:7b

# Start an interactive session
mita chat

# Or ask a single question
mita ask "explain the auth module in this project"
```
## Usage

### Interactive Chat

```bash
mita chat                                # Start agentic chat session
mita chat --model deepseek-coder-v2:16b  # Use a specific model
mita chat --no-tools                     # Pure chat, no tool execution
```
### Single-Shot Prompts

```bash
mita ask "refactor this function to use async"
cat error.log | mita ask "what went wrong?"
```
### Model Management

```bash
mita models recommend                  # See what fits your hardware
mita models list                       # List installed models
mita models pull qwen2.5-coder:14b     # Pull a model
mita models default qwen2.5-coder:14b  # Set as default
```
### Memory

```bash
mita memory show                               # View all active memory
mita memory add "Always use pytest" --project  # Add project-level memory
mita memory edit                               # Edit nearest MITA.md
```
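A `MITA.md` file is plain Markdown; its contents are whatever guidance you want injected. A minimal project-level sketch (contents illustrative, not a required format):

```markdown
# Project Conventions

- Always use pytest for tests; never unittest.
- Target Python 3.11+; prefer pathlib over os.path.
- All public functions need type hints and docstrings.
```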
### Codebase Indexing

```bash
mita index build                         # Index the current project
mita index search "database connection"  # Search the index
```
### Skills

```bash
mita skills list  # List available skills

# In chat, use /skill_name to invoke:
# mita> /commit
# mita> /review
```
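Since skills are just Markdown prompt templates, a `/commit` skill might look something like the sketch below. The file location and any parameter syntax are assumptions; check the project docs for the real format.

```markdown
<!-- e.g. ~/.config/mita/skills/commit.md (hypothetical location) -->
Review the staged diff and write a Conventional Commits message.
Keep the subject line under 72 characters and explain the "why" in the body.
```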
### Plugins (MCP)

```bash
mita plugins add filesystem --command "npx @modelcontextprotocol/server-filesystem ."
mita plugins list  # List plugins and their tools
```
### Configuration

```bash
mita config show           # Show merged configuration
mita config edit --global  # Edit global config
mita config set model.default "qwen2.5-coder:14b"
```
### Diagnostics

```bash
mita doctor  # Check Ollama, models, config health
```
## Configuration

Global config lives at `~/.config/mita/config.toml`. Project-level overrides go in `.mita/settings.toml`.

```toml
[model]
default = "qwen2.5-coder:7b"
temperature = 0.1

[tools]
auto_approve = ["file_read", "glob", "grep"]
confirm_destructive = true

[index]
enabled = true
top_k = 10
```
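Because configuration cascades, a project-level `.mita/settings.toml` only needs the keys it overrides. A sketch, assuming the project file mirrors the global schema:

```toml
# .mita/settings.toml — merged on top of the global config
[model]
default = "deepseek-coder-v2:16b"

[index]
top_k = 5
```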
See PLANNING.md for the full configuration schema.
## Memory System

Mita uses layered `MITA.md` files that are automatically discovered and injected into context:

| Scope | Location | Purpose |
|---|---|---|
| Global | `~/.config/mita/MITA.md` | Preferences across all projects |
| Project | `<project_root>/MITA.md` | Project-specific conventions |
| Directory | `<subdir>/MITA.md` | Directory-specific context |
Higher-specificity files take priority. Each file is capped at 200 lines.
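For example, when working under `~/projects/app/api/`, all three layers below would be discovered and injected, with the directory-level file taking precedence (paths illustrative):

```
~/.config/mita/MITA.md        # global: personal preferences
~/projects/app/MITA.md        # project: repo conventions
~/projects/app/api/MITA.md    # directory: context specific to api/
```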
## Tech Stack
| Component | Library |
|---|---|
| CLI | Typer |
| Terminal UI | Rich |
| LLM Runtime | Ollama |
| LLM Client | LiteLLM |
| Structured Output | Instructor |
| Vector Store | LanceDB |
| Code Parsing | Tree-sitter |
| Config | TOML (stdlib tomllib) |
| Plugins | MCP |
## Contributing
See PLANNING.md for the full project plan, architecture, and build phases.
Full documentation is available at mita-code.readthedocs.io.
## License
Apache 2.0 — see LICENSE.