A beautiful, agentic CLI for Ollama — run local LLMs with auto tool-calling, memory, and more

Project description

ollama-agentic

A beautiful, agentic terminal interface for Ollama — run local LLMs with auto tool-calling, long-term memory, iterative code debugging, and more.

Install

pip install ollama-agentic
ollama-cli

If Ollama is not already present, it is installed and started automatically on first run.


Features

  • Auto mode — model autonomously calls tools to complete tasks (/auto)
  • 🔁 Iterative debug loop — /run file.py auto-fixes errors until the code passes
  • 📋 Plan executor — /plan <goal> breaks a goal into typed steps and executes them
  • 🧠 Long-term memory — /remember stores facts that persist across sessions
  • 📦 Auto-installs Ollama — detects if Ollama is missing and installs it for you
  • 🚀 Auto-starts Ollama — spins up ollama serve automatically if it is not running
  • ⬇️ Arrow-key model picker — /install lets you browse and download 25+ models
  • 🔧 Agent tools — /shell, /file, /fetch, /ls inject real context into chats
  • 💾 Conversation saving — /save and /load persist chats as JSON
  • 🎭 Personas — save and load system prompt presets
  • 🆚 Compare mode — run the same prompt through two models side by side

Usage

ollama-cli                       # start chatting
ollama-cli --model qwen2.5:7b    # start with a specific model
ollama-cli --auto                # start in autonomous agent mode
ollama-cli --compare             # compare two models side by side

Commands

Chat & Navigation

Command   Description
/cls      Clear screen (keep context)
/clear    Clear conversation and screen
Ctrl+L    Clear screen
/retry    Regenerate last response
/tokens   Toggle token count display

Models

Command    Description
/model     Switch active model (arrow-key picker)
/current   Show currently active model
/install   Browse & install models from catalogue
/models    List all installed models
/compare   Compare two models side by side

Agentic

Command          Description
/auto            Toggle autonomous tool-calling mode
/plan <goal>     Break a goal into steps and execute
/run <file.py>   Run code, auto-fix errors in a loop
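
The /run debug loop amounts to: execute the file, and while it exits non-zero, hand the traceback back to the model for a fix. A minimal sketch — `ask_model` is a hypothetical stand-in for the real Ollama call:

```python
import subprocess
import sys

def run_until_passing(path: str, ask_model, max_rounds: int = 5) -> bool:
    """Run `path`; on failure, ask the model to rewrite it and retry.

    `ask_model(source, traceback)` is a hypothetical callback returning
    fixed source code -- in ollama-agentic this would be a chat completion.
    """
    for _ in range(max_rounds):
        result = subprocess.run([sys.executable, path],
                                capture_output=True, text=True)
        if result.returncode == 0:
            return True                            # code passed
        source = open(path).read()
        fixed = ask_model(source, result.stderr)   # model proposes a fix
        with open(path, "w") as f:
            f.write(fixed)
    return False
```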

Memory

Command            Description
/remember <fact>   Store a fact in long-term memory
/memories          List all stored memories
/forget <id>       Delete a memory by ID
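
These commands map naturally onto a small JSON file of id-to-fact pairs. The layout below is an assumption for illustration; the actual schema of ~/.ollama_cli_memory.json may differ:

```python
import json
from pathlib import Path

MEMORY_PATH = Path.home() / ".ollama_cli_memory.json"   # documented location

def load_memories(path: Path = MEMORY_PATH) -> dict[str, str]:
    return json.loads(path.read_text()) if path.exists() else {}

def remember(fact: str, path: Path = MEMORY_PATH) -> str:
    """Store a fact and return its id (roughly what /remember does)."""
    memories = load_memories(path)
    new_id = str(len(memories) + 1)   # simplistic id scheme, for the sketch
    memories[new_id] = fact
    path.write_text(json.dumps(memories, indent=2))
    return new_id

def forget(mem_id: str, path: Path = MEMORY_PATH) -> None:
    """Delete a memory by id (roughly what /forget <id> does)."""
    memories = load_memories(path)
    memories.pop(mem_id, None)
    path.write_text(json.dumps(memories, indent=2))
```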

Context Injection

Command        Description
/file <path>   Load a file into context
/shell <cmd>   Run a shell command, inject output
/fetch <url>   Fetch a webpage into context
/ls <path>     Inject a directory listing
/context       View or clear active injections

Conversations & Personas

Command             Description
/save <n>           Save conversation
/load <n>           Load conversation
/list               List saved conversations
/system <prompt>    Set a system prompt
/persona <n>        Load a saved persona
/personas           List saved personas
/save-persona <n>   Save current system prompt as persona

Agent Mode

Toggle with /auto or launch with --auto. In auto mode the model can call tools, read results, and loop until the task is done — no manual /file or /shell needed.

⚡ you › look at main.py and find any bugs
⚡ you › write a web scraper for hacker news and run it
⚡ you › set up a basic Flask app in this folder
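
Conceptually, auto mode is a loop: send the conversation to the model, execute any tool call it emits, append the result, and repeat until the model answers in plain text. A stripped-down sketch — `chat`, the message shape, and the tool table are stand-ins, not the package's real API:

```python
# Hypothetical sketch of an auto-mode loop; `chat` stands in for an
# Ollama chat call and TOOLS for the real /shell, /file, etc. handlers.
import subprocess

TOOLS = {
    "shell": lambda cmd: subprocess.run(
        cmd, shell=True, capture_output=True, text=True).stdout,
    "read_file": lambda path: open(path).read(),
}

def agent_loop(chat, task: str, max_steps: int = 10) -> str:
    messages = [{"role": "user", "content": task}]
    for _ in range(max_steps):
        reply = chat(messages)                     # model turn
        call = reply.get("tool_call")
        if call is None:
            return reply["content"]                # plain answer: done
        output = TOOLS[call["name"]](call["arg"])  # run the requested tool
        messages.append({"role": "tool", "content": output})
    return "step limit reached"
```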

Config & Data

All configuration and data are stored in your home directory:

Path                         Description
~/.ollama_cli_config.json    Settings (model, auto mode, etc.)
~/.ollama_cli_history        Input history
~/.ollama_cli_memory.json    Long-term memories
~/.ollama_cli_saves/         Saved conversations
~/.ollama_cli_personas/      Saved personas
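
Because everything lives in a handful of fixed paths, resetting to a clean state is a matter of deleting them. A hypothetical helper, using only the paths documented above:

```python
# Illustrative reset helper -- not part of the package itself.
import shutil
from pathlib import Path

DATA_PATHS = [
    ".ollama_cli_config.json",
    ".ollama_cli_history",
    ".ollama_cli_memory.json",
    ".ollama_cli_saves",
    ".ollama_cli_personas",
]

def reset_cli(home: Path = Path.home()) -> list[str]:
    """Delete ollama-cli data files/dirs under `home`; return what was removed."""
    removed = []
    for name in DATA_PATHS:
        target = home / name
        if target.is_dir():
            shutil.rmtree(target)
            removed.append(name)
        elif target.exists():
            target.unlink()
            removed.append(name)
    return removed
```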

Requirements

  • Python 3.10+
  • macOS, Linux, or Windows
  • Ollama (handled automatically on first run)

Roadmap

  • MCP server — expose tools to Claude Code, Cursor, and other agents
  • Repo-aware context — auto-index the codebase when launched from a project folder
  • Git tools — /diff, /commit, /log
  • API key integrations — Claude, OpenAI, Gemini, Groq as model backends
  • Symbol search across codebase

Contributing

PRs and issues welcome at github.com/Akhil123454321/ollama-cli. Keep changes focused and include tests where appropriate.

License

MIT — see LICENSE


Download files

Download the file for your platform.

Source Distribution

ollama_agentic-0.1.3.tar.gz (18.0 kB)


Built Distribution


ollama_agentic-0.1.3-py3-none-any.whl (18.3 kB)


File details

Details for the file ollama_agentic-0.1.3.tar.gz.

File metadata

  • Download URL: ollama_agentic-0.1.3.tar.gz
  • Size: 18.0 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.12.12

File hashes

Hashes for ollama_agentic-0.1.3.tar.gz
Algorithm     Hash digest
SHA256        4fd1057cc68c5040f22e0c690a5134a6ae7a30f8f4a1b2aaba2349edd3497937
MD5           6713d5b0d268cec2b3786408eb4b4575
BLAKE2b-256   907d1fe98a7e455402a9d10664d8a18321f5f1919484b8b4cdcc887307823a0c

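
To verify a downloaded artifact against the published SHA256 digest, stream it through the standard-library hashlib:

```python
import hashlib

def sha256_hex(path: str, chunk_size: int = 1 << 16) -> str:
    """Stream a file through SHA-256 and return its hex digest."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Published digest for ollama_agentic-0.1.3.tar.gz (from the table above):
EXPECTED = "4fd1057cc68c5040f22e0c690a5134a6ae7a30f8f4a1b2aaba2349edd3497937"
# After downloading: sha256_hex("ollama_agentic-0.1.3.tar.gz") == EXPECTED
```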

File details

Details for the file ollama_agentic-0.1.3-py3-none-any.whl.

File metadata

  • Download URL: ollama_agentic-0.1.3-py3-none-any.whl
  • Size: 18.3 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.12.12

File hashes

Hashes for ollama_agentic-0.1.3-py3-none-any.whl
Algorithm     Hash digest
SHA256        81fb5c7ba204ad6d7f7f5a2d54057c1a9e70b96dbb8a8232d0f959da330556e3
MD5           46a407e6fd4c61b99998f1d79a052100
BLAKE2b-256   309f1bc7b6b2e27328072e5ffd566fd6fdcd6a89a811cf58b69b05b0c118d156

