ollama-prompt
Ollama CLI prompt tool for local LLM code analysis
Quick Start • Documentation • Use Cases • Contributing
What is ollama-prompt?
A lightweight Python CLI that transforms Ollama into a powerful analysis tool with:
- Session persistence - Multi-turn conversations with full context
- Structured JSON output - Token counts, timing, and metadata
- File references - Inline local files with @file syntax
- Multi-agent orchestration - Perfect for subprocess workflows
Perfect for: Terminal AI assistants (Claude, Codex, Gemini CLI), subprocess orchestration, and cost-aware workflows
Primary Use Case: AI Agent Subprocess Integration
Built for terminal-based AI assistants: Claude, Codex, Gemini CLI, and other interactive AI tools.
When terminal AI agents need deep analysis but must preserve their context window, they delegate to ollama-prompt as a subprocess:
How Claude Uses This:
- Context Preservation - Claude delegates heavy analysis without consuming its 200K token budget
- Structured Parsing - JSON output with token counts, timing, and session IDs
- File Reference Chaining - @file syntax lets Claude reference multiple files in one call
- Session Continuity - Multi-turn analysis without manual context management
Example Claude Code Workflow:
# Claude delegates codebase analysis to ollama-prompt
ollama-prompt --prompt "Analyze @./src/auth.py for security issues" \
--model deepseek-v3.1:671b-cloud \
> analysis.json
# Claude parses JSON response and continues with its own reasoning
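The same delegation works from any program that can spawn a subprocess. Below is a minimal Python sketch; it assumes only the session_id field documented in the Quick Start, so check a real response before relying on other keys in the parsed output:

```python
import json
import subprocess

# Spawn ollama-prompt and capture its structured JSON from stdout.
result = subprocess.run(
    [
        "ollama-prompt",
        "--prompt", "Analyze @./src/auth.py for security issues",
        "--model", "deepseek-v3.1:671b-cloud",
    ],
    capture_output=True,
    text=True,
    check=True,
)

analysis = json.loads(result.stdout)
session_id = analysis["session_id"]  # reuse via --session-id for follow-ups
```

The calling agent keeps only this parsed JSON in its own context; the analyzed file contents never enter its token budget.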
Who Uses This:
- Primary: Terminal AI assistants (Claude, Codex, Gemini CLI, Cursor)
- Secondary: Python scripts orchestrating multi-agent workflows
- Advanced: Custom AGI systems with local Ollama backends
Learn More: Subprocess Best Practices | Architectural Comparison
Features
- Session Management - Persistent conversations across CLI invocations
- Rich Metadata - Full JSON output with token counts, timing, and cost tracking
- File References - Reference local files with @./path/to/file.py syntax
- Directory Operations - List, tree view, and search with @./dir/ syntax (full read access within repo root)
- Secure File Access - TOCTOU-safe operations with path validation via llm-fs-tools
- Subprocess-Friendly - Designed for agent orchestration and automation
- Cloud & Local Models - Works with both Ollama cloud models and local instances
- Cross-Platform - Windows, macOS, Linux with Python 3.10+
Quick Start
Prerequisites: Ollama CLI installed (server starts automatically)
# 1. Install
pip install ollama-prompt
# 2. First question (creates session automatically)
ollama-prompt --prompt "What is 2+2?"
# 3. Follow-up with context
ollama-prompt --session-id <id-from-output> --prompt "What about 3+3?"
Session created automatically! See session_id in output.
Next steps: 5-Minute Tutorial | Full CLI Reference
Installation
PyPI (Recommended)
pip install ollama-prompt
Development Install
git clone https://github.com/dansasser/ollama-prompt.git
cd ollama-prompt
pip install -e .
Prerequisites
- Python 3.10 or higher
- Ollama installed (the server starts automatically when needed)
- For cloud models: ollama signin (one-time authentication)
Verify installation:
ollama-prompt --help
ollama list # Check available models
Full setup guide: Prerequisites Documentation
Usage
Basic Example
ollama-prompt --prompt "Explain Python decorators" \
--model deepseek-v3.1:671b-cloud
Multi-Turn Conversation
# First question
ollama-prompt --prompt "Who wrote Hamlet?" > out.json
# Follow-up (remembers context)
SESSION_ID=$(jq -r '.session_id' out.json)
ollama-prompt --session-id $SESSION_ID --prompt "When was he born?"
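The same multi-turn flow works from Python rather than jq. A small sketch built on the session_id field shown above:

```python
import json
import subprocess

def ask(prompt: str, session_id: str | None = None) -> dict:
    """Run one ollama-prompt call and return its parsed JSON output."""
    cmd = ["ollama-prompt", "--prompt", prompt]
    if session_id:
        cmd += ["--session-id", session_id]
    out = subprocess.run(cmd, capture_output=True, text=True, check=True)
    return json.loads(out.stdout)

first = ask("Who wrote Hamlet?")
# Reusing the session lets the model resolve "he" from the earlier turn.
second = ask("When was he born?", session_id=first["session_id"])
```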
File Analysis
ollama-prompt --prompt "Review @./src/auth.py for security issues"
Directory Operations
Reference entire directories with the @./dir/ syntax:
# List directory contents
ollama-prompt --prompt "What's in @./src/?"
# Show directory tree
ollama-prompt --prompt "Show the structure: @./src/:tree"
# Search for pattern in files
ollama-prompt --prompt "Find TODO comments: @./src/:search:TODO"
⚠️ Security Note: ollama-prompt has read access to all files and directories within the current working directory (repository root). File operations are TOCTOU-safe and validate paths to prevent traversal attacks, but the tool can read any accessible file. Only run it in trusted directories.
Directory Syntax:
| Syntax | Description | Example |
|---|---|---|
| @./dir/ | List directory contents | @./src/ |
| @./dir/:list | Explicit list operation | @./src/:list |
| @./dir/:tree | Directory tree (depth=3) | @./src/:tree |
| @./dir/:search:PATTERN | Search for pattern | @./src/:search:TODO |
Stateless Mode
ollama-prompt --prompt "Quick question" --no-session
More examples: Use Cases Guide with 12 real-world scenarios
Documentation
Complete Documentation - Full guide navigation and reference
Use Cases
Software Development:
- Multi-file code review with shared context
- Iterative debugging sessions
- Architecture analysis across modules
Multi-Agent Systems:
- Subprocess-based agent orchestration
- Context-aware analysis pipelines
- Cost tracking for LLM operations
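As an illustration of the orchestration pattern above, here is a sketch that fans out independent file reviews as parallel subprocesses. The file list and prompt wording are illustrative, not part of ollama-prompt:

```python
import json
import subprocess
from concurrent.futures import ThreadPoolExecutor

def analyze(path: str) -> dict:
    """One stateless ollama-prompt call; returns the parsed JSON output."""
    out = subprocess.run(
        ["ollama-prompt", "--no-session",
         "--prompt", f"Review @{path} for security issues"],
        capture_output=True, text=True, check=True,
    )
    return json.loads(out.stdout)

# Fan out independent analyses in parallel; each call is its own process.
files = ["./src/auth.py", "./src/db.py", "./src/api.py"]
with ThreadPoolExecutor(max_workers=3) as pool:
    results = list(pool.map(analyze, files))
```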
Data Analysis:
- Sequential data exploration with memory
- Research workflows with source tracking
- Report generation with conversation history
See all 12 scenarios: Use Cases Guide
Why ollama-prompt?
ollama-prompt is a specialized tool. It's not a general-purpose CLI, but an automation-first subprocess designed to be called by other AI agents (like Claude or Gemini) to preserve their primary context window.
This table clarifies its niche compared to other common tools:
Comparison of Ollama Interaction Methods
| Feature | 1. Direct Ollama API | 2. llm (Simon Willison's Tool) | 3. ollama-prompt (This Tool) |
|---|---|---|---|
| Primary Use | Building custom applications (backend). | Human-facing CLI: a "workbench" for a user to interact with models. | Automation-facing CLI: a "subprocess" for other programs to call. |
| Interface | Raw HTTP / code library. | Interactive, user-friendly CLI. | Single-line CLI command designed for scripts. |
| Key "Win" | Total flexibility. | Universality: supports Ollama, OpenAI, Anthropic, etc. | Automation: built to be called by other AI agents (like Claude/Gemini). |
| Output Format | Raw JSON response. | Plain text (default), with a flag for JSON (--json). | Structured JSON (default), with rich metadata (tokens, time). |
| Session Memory | Manual: you must track and resend the entire message history. | Automatic: logs all prompts/responses to a local SQLite DB. | Automatic (via flag): uses --session-id to persist context. |
| File Handling | Manual: requires code to read the file and inject its content. | Manual (via piping): requires cat file.py \| llm -s "..." | Built-in: @./file.py for files, @./dir/ for directories (full read access). |
Built for:
- Terminal AI assistants (Claude, Codex, Gemini CLI) - Delegate analysis via subprocess
- Context preservation - Save your AI's token budget for reasoning
- Multi-agent systems - Orchestrate parallel analysis tasks
- Cost-aware workflows - Track token usage explicitly
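For the cost-aware case, a minimal sketch. The exact metadata key names in the JSON output are an assumption here (shown as prompt_tokens / completion_tokens); inspect a real response to confirm what your version emits:

```python
import json
import subprocess

total_tokens = 0
for prompt in ["Summarize @./README.md", "List risks in @./src/auth.py"]:
    out = subprocess.run(
        ["ollama-prompt", "--no-session", "--prompt", prompt],
        capture_output=True, text=True, check=True,
    )
    meta = json.loads(out.stdout)
    # ASSUMPTION: these token-count field names are illustrative only.
    total_tokens += meta.get("prompt_tokens", 0) + meta.get("completion_tokens", 0)

print(f"Total tokens this batch: {total_tokens}")
```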
Architecture: Subprocess Best Practices | Architectural Comparison
Troubleshooting
- If you get ModuleNotFoundError: ollama, ensure you ran pip install ollama in the correct Python environment.
- Ensure the Ollama CLI is installed (ollama --version should work). The server starts automatically when needed.
- For the largest usable context, check your model's maximum context window (max token support).
- Unexpected session_id in output? Sessions are auto-created by default in v1.2.0+. This is normal behavior. Use --no-session for stateless operation.
- Session context not persisting? Ensure you're using the same --session-id value across invocations. Use --list-sessions to see available sessions.
Contributing
We welcome contributions! Here's how to get started:
Development Setup:
git clone https://github.com/dansasser/ollama-prompt.git
cd ollama-prompt
pip install -e .
Running Tests:
pytest
Contribution Guidelines:
- Fork the repo and create a branch
- Write tests for new features
- Follow existing code style
- Submit PR with clear description
Areas We Need Help:
- Documentation improvements
- New use case examples
- Bug reports and fixes
- Feature suggestions
Questions? Open an issue or discussion.
Community & Support
- Bug Reports: GitHub Issues
- Discussions: GitHub Discussions
- Documentation: docs/README.md
- Troubleshooting: Reference Guide
License
MIT License - see LICENSE file for details.
Third-Party Licenses:
- Uses Ollama (separate licensing)
Credits
Author: Daniel T. Sasser II
- GitHub: github.com/dansasser
- Blog: dansasser.me
Acknowledgments:
- Inspired by the need for structured, cost-aware LLM workflows
- Built for the AI agent orchestration community