
ollama-agentic

A beautiful, agentic terminal interface for Ollama — run local LLMs with auto tool-calling, long-term memory, iterative code debugging, and more.


Install

pip install ollama-agentic
ollama-cli

Ollama is installed and started automatically if not already present.
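Under the hood this check is simple; a minimal sketch of the detect-and-start logic, assuming the standard `ollama` binary name and `ollama serve` command (the package's actual implementation may differ):

```python
import shutil
import subprocess

def ollama_installed() -> bool:
    """Return True if the `ollama` binary is on PATH."""
    return shutil.which("ollama") is not None

def ensure_ollama_running() -> None:
    """Start `ollama serve` in the background if the binary is present."""
    if not ollama_installed():
        raise RuntimeError("ollama not found on PATH; install it first")
    # Launch the server detached; a fuller implementation would first
    # poll http://localhost:11434 to see whether a server already answers.
    subprocess.Popen(
        ["ollama", "serve"],
        stdout=subprocess.DEVNULL,
        stderr=subprocess.DEVNULL,
    )
```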


Features

  • Auto mode — model autonomously calls tools to complete tasks (/auto)
  • 🔁 Iterative debug loop — /run file.py auto-fixes errors until code passes
  • 📋 Plan executor — /plan <goal> breaks goals into typed steps and executes them
  • 🧠 Long-term memory — /remember stores facts that persist across sessions
  • 📦 Auto-installs Ollama — detects if Ollama is missing and installs it for you
  • 🚀 Auto-starts Ollama — spins up ollama serve automatically if not running
  • ⬇️ Arrow-key model picker — /install lets you browse and download 25+ models
  • 🔧 Agent tools — /shell, /file, /fetch, /ls inject real context into chats
  • 💾 Conversation saving — /save and /load persist chats as JSON
  • 🎭 Personas — save and load system prompt presets
  • 🆚 Compare mode — run the same prompt through two models side by side

Usage

ollama-cli                       # start chatting
ollama-cli --model qwen2.5:7b    # start with a specific model
ollama-cli --auto                # start in autonomous agent mode
ollama-cli --compare             # compare two models side by side
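Compare mode prints the two responses in parallel columns. A rough sketch of that layout step (the `side_by_side` helper is illustrative, not part of the package's API):

```python
import textwrap

def side_by_side(left: str, right: str, width: int = 38) -> str:
    """Wrap two model responses and lay them out as parallel columns."""
    lcol = textwrap.wrap(left, width) or [""]
    rcol = textwrap.wrap(right, width) or [""]
    rows = []
    for i in range(max(len(lcol), len(rcol))):
        l = lcol[i] if i < len(lcol) else ""
        r = rcol[i] if i < len(rcol) else ""
        rows.append(f"{l:<{width}} | {r}")
    return "\n".join(rows)

print(side_by_side("Answer from model A.", "A somewhat longer answer from model B."))
```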

Commands

Chat & Navigation

Command   Description
/cls      Clear screen (keep context)
/clear    Clear conversation and screen
Ctrl+L    Clear screen
/retry    Regenerate last response
/tokens   Toggle token count display

Models

Command    Description
/model     Switch active model (arrow-key picker)
/current   Show currently active model
/install   Browse & install models from catalogue
/models    List all installed models
/compare   Compare two models side by side

Agentic

Command          Description
/auto            Toggle autonomous tool-calling mode
/plan <goal>     Break a goal into steps and execute
/run <file.py>   Run code, auto-fix errors in a loop
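The /run loop reduces to: execute the file, capture the error output, ask the model for a fixed version, write it back, and repeat. A simplified sketch with the model call abstracted as `fix_fn` (names are illustrative, not the package's internals):

```python
import subprocess
import sys

def run_file(path: str, timeout: int = 30) -> tuple[bool, str]:
    """Run a Python file; return (passed, combined stdout+stderr)."""
    proc = subprocess.run(
        [sys.executable, path],
        capture_output=True, text=True, timeout=timeout,
    )
    return proc.returncode == 0, proc.stdout + proc.stderr

def debug_loop(path: str, fix_fn, max_rounds: int = 5) -> bool:
    """Re-run `path`, feeding errors to `fix_fn` until the file passes."""
    for _ in range(max_rounds):
        ok, output = run_file(path)
        if ok:
            return True
        # fix_fn(old_source, error_text) -> new_source, e.g. a model call
        with open(path) as f:
            src = f.read()
        with open(path, "w") as f:
            f.write(fix_fn(src, output))
    return False
```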

Memory

Command            Description
/remember <fact>   Store a fact in long-term memory
/memories          List all stored memories
/forget <id>       Delete a memory by ID
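Long-term memory amounts to a small JSON store on disk (~/.ollama_cli_memory.json, per the Config & Data section). A minimal sketch of remember/forget, with illustrative helper names:

```python
import json
from pathlib import Path

MEMORY_PATH = Path.home() / ".ollama_cli_memory.json"

def _load(path: Path) -> dict:
    return json.loads(path.read_text()) if path.exists() else {}

def remember(fact: str, path: Path = MEMORY_PATH) -> str:
    """Store a fact under a fresh numeric ID and return that ID."""
    memories = _load(path)
    new_id = str(max(map(int, memories), default=0) + 1)
    memories[new_id] = fact
    path.write_text(json.dumps(memories, indent=2))
    return new_id

def forget(mem_id: str, path: Path = MEMORY_PATH) -> bool:
    """Delete a memory by ID; return whether it existed."""
    memories = _load(path)
    existed = memories.pop(mem_id, None) is not None
    path.write_text(json.dumps(memories, indent=2))
    return existed
```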

Context Injection

Command        Description
/file <path>   Load a file into context
/shell <cmd>   Run a shell command, inject output
/fetch <url>   Fetch a webpage into context
/ls <path>     Inject a directory listing
/context       View or clear active injections
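Each injection command boils down to grabbing some text, labelling it, and prepending it to the next message. A hedged sketch for /file and /shell (function names are illustrative):

```python
import subprocess
from pathlib import Path

def inject_file(path: str) -> str:
    """Format a file's contents as a labelled context block."""
    return f"[file: {path}]\n{Path(path).read_text()}"

def inject_shell(cmd: str) -> str:
    """Run a shell command and format its output as a context block."""
    proc = subprocess.run(cmd, shell=True, capture_output=True, text=True)
    return f"[shell: {cmd}]\n{proc.stdout}{proc.stderr}"

def with_context(blocks: list[str], user_msg: str) -> str:
    """Prepend the active injections to the user's next message."""
    return "\n\n".join(blocks + [user_msg])
```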

Conversations & Personas

Command             Description
/save <n>           Save conversation
/load <n>           Load conversation
/list               List saved conversations
/system <prompt>    Set a system prompt
/persona <n>        Load a saved persona
/personas           List saved personas
/save-persona <n>   Save current system prompt as persona

Agent Mode

Toggle with /auto or launch with --auto. In auto mode the model can call tools, read results, and loop until the task is done — no manual /file or /shell needed.

⚡ you › look at main.py and find any bugs
⚡ you › write a web scraper for hacker news and run it
⚡ you › set up a basic Flask app in this folder
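An agent loop like this is a simple cycle: send the conversation plus tool schemas, run whatever tool calls come back, append the results, and repeat until the model answers without requesting a tool. A minimal sketch with the model call abstracted as `chat_fn` (illustrative, not the package's internals):

```python
def agent_loop(chat_fn, tools: dict, user_msg: str, max_steps: int = 10) -> str:
    """chat_fn(messages) -> {"content": str, "tool_calls": [{"name", "args"}]};
    tools maps tool names to plain Python callables."""
    messages = [{"role": "user", "content": user_msg}]
    for _ in range(max_steps):
        reply = chat_fn(messages)
        calls = reply.get("tool_calls") or []
        if not calls:
            return reply["content"]  # no tool requested: the task is done
        messages.append({"role": "assistant", "content": reply.get("content", "")})
        for call in calls:
            # Execute the requested tool and feed its result back to the model
            result = tools[call["name"]](**call["args"])
            messages.append({"role": "tool", "content": str(result)})
    return "(stopped after max_steps)"
```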

Config & Data

All config and data are stored in your home directory:

Path                        Description
~/.ollama_cli_config.json   Settings (model, auto mode, etc.)
~/.ollama_cli_history       Input history
~/.ollama_cli_memory.json   Long-term memories
~/.ollama_cli_saves/        Saved conversations
~/.ollama_cli_personas/     Saved personas

Requirements

  • Python 3.10+
  • macOS, Linux, or Windows
  • Ollama (handled automatically on first run)

Roadmap

  • MCP server — expose tools to Claude Code, Cursor, and other agents
  • Repo-aware context — auto-index codebase on launch from a project folder
  • Git tools — /diff, /commit, /log
  • API key integrations — Claude, OpenAI, Gemini, Groq as model backends
  • Symbol search across codebase

Contributing

PRs and issues welcome at github.com/Akhil123454321/ollama-cli. Keep changes focused and include tests where appropriate.

License

MIT — see LICENSE

Download files

Download the file for your platform.

Source Distribution

ollama_agentic-0.1.1.tar.gz (3.9 kB)

Built Distribution

ollama_agentic-0.1.1-py3-none-any.whl (4.0 kB)

File details

Details for the file ollama_agentic-0.1.1.tar.gz.

File metadata

  • Download URL: ollama_agentic-0.1.1.tar.gz
  • Size: 3.9 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.12.12

File hashes

Hashes for ollama_agentic-0.1.1.tar.gz

Algorithm     Hash digest
SHA256        07e830807e2681c9009aba8cd694f3ed91fe042e62a5d911aa17f9dc3bc3805b
MD5           5d3a366de98de61f34c464be03a12e48
BLAKE2b-256   caf6c4296d0e9d4f6a427b5de840381ac7b90b058121c90c2a08f3b9ffecd89d


File details

Details for the file ollama_agentic-0.1.1-py3-none-any.whl.

File metadata

  • Download URL: ollama_agentic-0.1.1-py3-none-any.whl
  • Size: 4.0 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.12.12

File hashes

Hashes for ollama_agentic-0.1.1-py3-none-any.whl

Algorithm     Hash digest
SHA256        fcf69f8c17389934f012da4bdf7d1d8975207b750696d1426bf13370b32e20b2
MD5           b4a7603b0b6243e90991faae2d5daccc
BLAKE2b-256   a03c679c6d11f170b05de313caa5dd22d91aa8a2fb9d8a1be46bb0dee033a61c

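The published digests can be checked locally before installing; a quick sketch using Python's hashlib (pip can also enforce this via hash-pinned requirements files):

```python
import hashlib

def sha256_of(path: str) -> str:
    """Stream a file through SHA-256 and return the hex digest."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

# Compare the result against the SHA256 value listed above for the
# downloaded sdist or wheel before installing it.
```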
