
ollama-agentic

A beautiful, agentic terminal interface for Ollama — run local LLMs with auto tool-calling, long-term memory, iterative code debugging, and more.


Install

pip install ollama-agentic
ollama-cli

Ollama is installed and started automatically if not already present.
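
The detect-and-start behaviour can be pictured with a minimal sketch. This is not the package's actual implementation; it only assumes Ollama's documented default port, 11434:

```python
import shutil
import subprocess
import time
import urllib.request

OLLAMA_URL = "http://127.0.0.1:11434"  # Ollama's default listen address

def ollama_installed() -> bool:
    """True if the `ollama` binary is on PATH."""
    return shutil.which("ollama") is not None

def server_running(url: str = OLLAMA_URL) -> bool:
    """True if an Ollama server answers on the given URL."""
    try:
        with urllib.request.urlopen(url, timeout=2):
            return True
    except OSError:  # connection refused, timeout, DNS failure, ...
        return False

def ensure_server(url: str = OLLAMA_URL) -> bool:
    """Start `ollama serve` in the background if nothing is listening."""
    if server_running(url):
        return True
    if not ollama_installed():
        return False  # the real CLI would download and install Ollama here
    subprocess.Popen(["ollama", "serve"],
                     stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
    time.sleep(1)  # give the server a moment to bind
    return server_running(url)
```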


Features

  • Auto mode — model autonomously calls tools to complete tasks (/auto)
  • 🔁 Iterative debug loop — /run file.py auto-fixes errors until code passes
  • 📋 Plan executor — /plan <goal> breaks goals into typed steps and executes them
  • 🧠 Long-term memory — /remember stores facts that persist across sessions
  • 📦 Auto-installs Ollama — detects if Ollama is missing and installs it for you
  • 🚀 Auto-starts Ollama — spins up ollama serve automatically if not running
  • ⬇️ Arrow-key model picker — /install lets you browse and download 25+ models
  • 🔧 Agent tools — /shell, /file, /fetch, /ls inject real context into chats
  • 💾 Conversation saving — /save and /load persist chats as JSON
  • 🎭 Personas — save and load system prompt presets
  • 🆚 Compare mode — run the same prompt through two models side by side

Usage

ollama-cli                       # start chatting
ollama-cli --model qwen2.5:7b    # start with a specific model
ollama-cli --auto                # start in autonomous agent mode
ollama-cli --compare             # compare two models side by side

Commands

Chat & Navigation

Command   Description
/cls      Clear screen (keep context)
/clear    Clear conversation and screen
Ctrl+L    Clear screen
/retry    Regenerate last response
/tokens   Toggle token count display

Models

Command    Description
/model     Switch active model (arrow-key picker)
/current   Show currently active model
/install   Browse & install models from catalogue
/models    List all installed models
/compare   Compare two models side by side

Agentic

Command          Description
/auto            Toggle autonomous tool-calling mode
/plan <goal>     Break a goal into steps and execute
/run <file.py>   Run code, auto-fix errors in a loop
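
The /run loop reduces to: execute the file, capture stderr, hand it to the model for a fix, and retry. A minimal sketch, where `propose_fix` is a hypothetical stand-in for the model call (the package's real loop may differ):

```python
import subprocess
import sys
from pathlib import Path

def debug_loop(path: Path, propose_fix, max_attempts: int = 5) -> int:
    """Run `path` with the current interpreter; while it fails, ask
    `propose_fix(source, stderr)` for replacement source and retry.
    Returns the number of attempts taken."""
    for attempt in range(1, max_attempts + 1):
        result = subprocess.run(
            [sys.executable, str(path)], capture_output=True, text=True
        )
        if result.returncode == 0:
            return attempt  # code passes
        # feed the traceback back and overwrite the file with the fix
        path.write_text(propose_fix(path.read_text(), result.stderr))
    raise RuntimeError(f"still failing after {max_attempts} attempts")
```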

Memory

Command            Description
/remember <fact>   Store a fact in long-term memory
/memories          List all stored memories
/forget <id>       Delete a memory by ID
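
Persistence of this kind only needs a JSON file keyed by id. A sketch of the idea; the on-disk layout here is an assumption, not the package's actual format:

```python
import json
from pathlib import Path

class MemoryStore:
    """Facts stored as {id: fact} in a JSON file (illustrative layout)."""

    def __init__(self, path: Path):
        self.path = Path(path)
        self.facts = (
            json.loads(self.path.read_text()) if self.path.exists() else {}
        )

    def remember(self, fact: str) -> int:
        """Store a fact and return its new id."""
        new_id = max(map(int, self.facts), default=0) + 1
        self.facts[str(new_id)] = fact
        self._save()
        return new_id

    def memories(self) -> list:
        """All (id, fact) pairs, ordered by id."""
        return sorted((int(k), v) for k, v in self.facts.items())

    def forget(self, fact_id: int) -> bool:
        """Delete by id; True if something was removed."""
        removed = self.facts.pop(str(fact_id), None) is not None
        self._save()
        return removed

    def _save(self):
        self.path.write_text(json.dumps(self.facts, indent=2))
```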

Context Injection

Command        Description
/file <path>   Load a file into context
/shell <cmd>   Run a shell command, inject output
/fetch <url>   Fetch a webpage into context
/ls <path>     Inject a directory listing
/context       View or clear active injections
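
All four injectors share one move: gather some output, wrap it with a label, and append it to the message list before the next model call. A sketch under that assumption; the message format below is illustrative, not the package's:

```python
import subprocess
from pathlib import Path

def inject(messages: list, label: str, content: str) -> list:
    """Return a new message list with `content` attached as labelled context."""
    block = f"[context: {label}]\n{content}"
    return messages + [{"role": "system", "content": block}]

def inject_file(messages: list, path: str) -> list:
    """/file-style injection: load a file's text into context."""
    return inject(messages, f"file {path}", Path(path).read_text())

def inject_shell(messages: list, cmd: str) -> list:
    """/shell-style injection: run a command and inject its output."""
    out = subprocess.run(cmd, shell=True, capture_output=True, text=True)
    return inject(messages, f"shell {cmd}", out.stdout + out.stderr)
```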

Conversations & Personas

Command             Description
/save <n>           Save conversation
/load <n>           Load conversation
/list               List saved conversations
/system <prompt>    Set a system prompt
/persona <n>        Load a saved persona
/personas           List saved personas
/save-persona <n>   Save current system prompt as persona

Agent Mode

Toggle with /auto or launch with --auto. In auto mode the model can call tools, read results, and loop until the task is done — no manual /file or /shell needed.

⚡ you › look at main.py and find any bugs
⚡ you › write a web scraper for hacker news and run it
⚡ you › set up a basic Flask app in this folder
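
In outline, auto mode is a loop: the model either answers or requests a tool; the tool result is appended to the conversation and the model is called again. A generic sketch in which `model` is a stand-in for the LLM call; the dict shapes here are illustrative, not Ollama's actual tool-calling wire format:

```python
def agent_loop(user_msg: str, model, tools: dict, max_steps: int = 8) -> str:
    """Run the tool-calling loop. `model(messages)` returns either
    {"tool_call": None, "content": ...} (final answer) or
    {"tool_call": {"name": ..., "args": {...}}} (tool request)."""
    messages = [{"role": "user", "content": user_msg}]
    for _ in range(max_steps):
        reply = model(messages)
        call = reply.get("tool_call")
        if call is None:
            return reply["content"]  # model is done
        result = tools[call["name"]](**call["args"])  # run the requested tool
        messages.append(
            {"role": "tool", "name": call["name"], "content": str(result)}
        )
    raise RuntimeError("step limit reached without a final answer")
```

The step limit is the safety valve: a model that keeps requesting tools cannot loop forever.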

Config & Data

All configuration and data are stored in your home directory:

Path                        Description
~/.ollama_cli_config.json   Settings (model, auto mode, etc.)
~/.ollama_cli_history       Input history
~/.ollama_cli_memory.json   Long-term memories
~/.ollama_cli_saves/        Saved conversations
~/.ollama_cli_personas/     Saved personas
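
For example, the config file is a small JSON document. The keys below are illustrative guesses, not the package's documented schema:

```json
{
  "model": "qwen2.5:7b",
  "auto_mode": false,
  "show_tokens": false
}
```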

Requirements

  • Python 3.10+
  • macOS, Linux, or Windows
  • Ollama (handled automatically on first run)

Roadmap

  • MCP server — expose tools to Claude Code, Cursor, and other agents
  • Repo-aware context — auto-index codebase on launch from a project folder
  • Git tools — /diff, /commit, /log
  • API key integrations — Claude, OpenAI, Gemini, Groq as model backends
  • Symbol search across codebase

Contributing

PRs and issues welcome at github.com/Akhil123454321/ollama-cli. Keep changes focused and include tests where appropriate.

License

MIT — see LICENSE

Download files


Source distribution

ollama_agentic-0.1.8.tar.gz (18.1 kB)

Built distribution

ollama_agentic-0.1.8-py3-none-any.whl (18.3 kB)

File details

Details for the file ollama_agentic-0.1.8.tar.gz.

  • Size: 18.1 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.12.12

File hashes

Algorithm     Hash digest
SHA256        2bcc4e9ac6414cc3aaa99aa10b652d96bf52e2505a0f5c2e816bc361013d534b
MD5           82117572ca219cf3e80187ebb33ae360
BLAKE2b-256   f0aa2907695ebdb7bfed846a2173048f091ac0a62f8af0d584ab02ea5d3044f7

File details

Details for the file ollama_agentic-0.1.8-py3-none-any.whl.

  • Size: 18.3 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.12.12

File hashes

Algorithm     Hash digest
SHA256        4f1e306ea7c3b0866e0daa72a96288d234d3905ec64945e63d4ee567e0dc61bc
MD5           1fee340db7be810a1ab97834ecc5ade5
BLAKE2b-256   1bf73a196a4ea4a1c598333dd628770381f9245ef5994a4126a9c15f617621ae
