
ollama-agentic

A beautiful, agentic terminal interface for Ollama — run local LLMs with auto tool-calling, long-term memory, iterative code debugging, and more.


Install

pip install ollama-agentic
ollama-cli

If Ollama is not already installed, it is downloaded and started automatically on first run.


Features

  • Auto mode — model autonomously calls tools to complete tasks (/auto)
  • 🔁 Iterative debug loop — /run file.py auto-fixes errors until code passes
  • 📋 Plan executor — /plan <goal> breaks goals into typed steps and executes them
  • 🧠 Long-term memory — /remember stores facts that persist across sessions
  • 📦 Auto-installs Ollama — detects if Ollama is missing and installs it for you
  • 🚀 Auto-starts Ollama — spins up ollama serve automatically if not running
  • ⬇️ Arrow-key model picker — /install lets you browse and download 25+ models
  • 🔧 Agent tools — /shell, /file, /fetch, /ls inject real context into chats
  • 💾 Conversation saving — /save and /load persist chats as JSON
  • 🎭 Personas — save and load system prompt presets
  • 🆚 Compare mode — run the same prompt through two models side by side

Usage

ollama-cli                       # start chatting
ollama-cli --model qwen2.5:7b    # start with a specific model
ollama-cli --auto                # start in autonomous agent mode
ollama-cli --compare             # compare two models side by side

Commands

Chat & Navigation

| Command | Description |
|---------|-------------|
| /cls | Clear screen (keep context) |
| /clear | Clear conversation and screen |
| Ctrl+L | Clear screen |
| /retry | Regenerate last response |
| /tokens | Toggle token count display |

Models

| Command | Description |
|---------|-------------|
| /model | Switch active model (arrow-key picker) |
| /current | Show currently active model |
| /install | Browse & install models from catalogue |
| /models | List all installed models |
| /compare | Compare two models side by side |

Agentic

| Command | Description |
|---------|-------------|
| /auto | Toggle autonomous tool-calling mode |
| /plan <goal> | Break a goal into steps and execute |
| /run <file.py> | Run code, auto-fix errors in a loop |
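In outline, the /run command is a run-inspect-fix loop. The sketch below is an illustrative reconstruction, not the package's actual implementation; the `fix` callback stands in for the model-driven patch step:

```python
import subprocess
import sys

def debug_loop(script, fix, max_attempts=3):
    """Run `script` repeatedly, handing each traceback to `fix`,
    until it exits cleanly or attempts run out (sketch of /run)."""
    for attempt in range(1, max_attempts + 1):
        proc = subprocess.run([sys.executable, script],
                              capture_output=True, text=True)
        if proc.returncode == 0:
            return attempt          # number of runs it took to pass
        fix(script, proc.stderr)    # the model would rewrite the file here
    return None                     # gave up after max_attempts
```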

Memory

| Command | Description |
|---------|-------------|
| /remember <fact> | Store a fact in long-term memory |
| /memories | List all stored memories |
| /forget <id> | Delete a memory by ID |
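As a rough sketch of what a /remember-style store could look like — the actual on-disk format of ~/.ollama_cli_memory.json is not documented here, so the id/fact shape below is an assumption:

```python
import json
from pathlib import Path

def remember(path, fact):
    """Append a fact under a fresh id (JSON shape is assumed)."""
    data = json.loads(path.read_text()) if path.exists() else []
    new_id = max((m["id"] for m in data), default=0) + 1
    data.append({"id": new_id, "fact": fact})
    path.write_text(json.dumps(data, indent=2))
    return new_id

def forget(path, mem_id):
    """Drop the memory with the given id, as /forget <id> would."""
    data = [m for m in json.loads(path.read_text()) if m["id"] != mem_id]
    path.write_text(json.dumps(data, indent=2))
```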

Context Injection

| Command | Description |
|---------|-------------|
| /file <path> | Load a file into context |
| /shell <cmd> | Run a shell command, inject output |
| /fetch <url> | Fetch a webpage into context |
| /ls <path> | Inject a directory listing |
| /context | View or clear active injections |
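Conceptually, each of these commands appends text to the chat history. A minimal sketch of a /file-style injection — the message shape is an assumption, not the package's real internals:

```python
def inject_file(history, path):
    """Append a file's contents to the chat history as context (sketch)."""
    with open(path) as f:
        text = f.read()
    history.append({"role": "system",
                    "content": f"Contents of {path}:\n{text}"})
    return history
```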

Conversations & Personas

| Command | Description |
|---------|-------------|
| /save <n> | Save conversation |
| /load <n> | Load conversation |
| /list | List saved conversations |
| /system <prompt> | Set a system prompt |
| /persona <n> | Load a saved persona |
| /personas | List saved personas |
| /save-persona <n> | Save current system prompt as persona |

Agent Mode

Toggle with /auto or launch with --auto. In auto mode the model can call tools, read results, and loop until the task is done — no manual /file or /shell needed.

⚡ you › look at main.py and find any bugs
⚡ you › write a web scraper for hacker news and run it
⚡ you › set up a basic Flask app in this folder
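In outline, auto mode is a loop of the following shape. This sketch stubs out the model and uses a single hypothetical shell tool, so it mirrors the idea rather than the package's real code:

```python
import subprocess

def run_shell(cmd):
    """Hypothetical tool: run a command and return its stdout."""
    return subprocess.run(cmd, shell=True,
                          capture_output=True, text=True).stdout

TOOLS = {"shell": run_shell}

def agent_loop(model, task, max_steps=5):
    """Ask the model for actions, execute tools, feed results back,
    and stop when it produces a final answer (sketch of /auto)."""
    history = [{"role": "user", "content": task}]
    for _ in range(max_steps):
        reply = model(history)
        if reply.get("tool"):                    # model requested a tool call
            result = TOOLS[reply["tool"]](reply["args"])
            history.append({"role": "tool", "content": result})
        else:                                    # model produced a final answer
            return reply["content"]
    return None                                  # step budget exhausted
```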

Config & Data

All configuration and data are stored in your home directory:

| Path | Description |
|------|-------------|
| ~/.ollama_cli_config.json | Settings (model, auto mode, etc) |
| ~/.ollama_cli_history | Input history |
| ~/.ollama_cli_memory.json | Long-term memories |
| ~/.ollama_cli_saves/ | Saved conversations |
| ~/.ollama_cli_personas/ | Saved personas |

Requirements

  • Python 3.10+
  • macOS, Linux, or Windows
  • Ollama (handled automatically on first run)

Roadmap

  • MCP server — expose tools to Claude Code, Cursor, and other agents
  • Repo-aware context — auto-index codebase on launch from a project folder
  • Git tools — /diff, /commit, /log
  • API key integrations — Claude, OpenAI, Gemini, Groq as model backends
  • Symbol search across codebase

Contributing

PRs and issues welcome at github.com/Akhil123454321/ollama-cli. Keep changes focused and include tests where appropriate.

License

MIT — see LICENSE

Download files

Download the file for your platform.

Source Distribution

ollama_agentic-0.1.6.tar.gz (18.1 kB)

Built Distribution


ollama_agentic-0.1.6-py3-none-any.whl (18.3 kB)

File details

Details for the file ollama_agentic-0.1.6.tar.gz.

File metadata

  • Download URL: ollama_agentic-0.1.6.tar.gz
  • Upload date:
  • Size: 18.1 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.12.8

File hashes

Hashes for ollama_agentic-0.1.6.tar.gz

| Algorithm | Hash digest |
|-----------|-------------|
| SHA256 | 7b3dd16910ae18302f200b31ff81823547d191f548f390d12a40d02f1efb08ab |
| MD5 | 4529b17c60e37281bacd0fdab3e36362 |
| BLAKE2b-256 | e769914743897be6ad3f702bdf2f0b90bb2a95ebe356bbef7c94a4affef669e9 |

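To check a download against the digests above, standard hashlib usage suffices (the file path is illustrative):

```python
import hashlib

def sha256_of(path):
    """Stream a file through SHA-256 and return the hex digest."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

# Compare against the published digest, e.g.:
# sha256_of("ollama_agentic-0.1.6.tar.gz") == "7b3dd16910ae18302f200b31ff81823547d191f548f390d12a40d02f1efb08ab"
```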

File details

Details for the file ollama_agentic-0.1.6-py3-none-any.whl.

File metadata

  • Download URL: ollama_agentic-0.1.6-py3-none-any.whl
  • Upload date:
  • Size: 18.3 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.12.8

File hashes

Hashes for ollama_agentic-0.1.6-py3-none-any.whl

| Algorithm | Hash digest |
|-----------|-------------|
| SHA256 | 0f2383cea64014bb15bb0b541467b3972b8243b599aa7abea0b925887b0edf44 |
| MD5 | e2705e7332d9aae7eed4c3275567f8b0 |
| BLAKE2b-256 | b16328bc861e2957b0eba39497c7a84b42bce0929c58759ba63ee08992da23db |

