
Ollama Chat CLI

License: MIT | Python 3.7+

A feature-rich command-line interface for Ollama with chat history, inference settings, and a beautiful UI.

Features

  • Beautiful Claude Code-style interface
  • Chat history with session management
  • Configurable inference settings (temperature, top_p, context window, etc.)
  • Real-time context tracking and token usage
  • Easy model switching and management
  • Search through chat history
  • Support for local and remote Ollama servers
  • Secure configuration with .env file

Requirements

  • Python 3.7+
  • Ollama installed and running
  • pip (Python package manager)

Quick Start

1. Clone the Repository

git clone https://github.com/yourusername/ollama-chat-cli.git
cd ollama-chat-cli

2. Install Dependencies

pip install -r requirements.txt

Or install it as an editable package:

pip install -e .

3. Configure Your Connection

Copy the example environment file:

cp .env.example .env

Edit .env with your preferred connection method (see below).

4. Run the CLI

python ollama_chat/cli.py

Or if installed as a package:

ollama-chat
# or the short alias:
oc

Configuration Methods

Method 1: Local Connection (Default)

If Ollama is running on the same machine:

OLLAMA_HOST=http://localhost:11434
OLLAMA_MODEL=llama2

Setup:

  1. Install Ollama from ollama.ai
  2. Run: ollama serve
  3. Pull a model: ollama pull llama2
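
Before launching the CLI, you can confirm the server is reachable by querying Ollama's /api/tags endpoint, which lists installed models. A stdlib-only Python sketch (the host mirrors the .env above):

import json
import urllib.request

OLLAMA_HOST = "http://localhost:11434"

# /api/tags returns the models available on the server
with urllib.request.urlopen(f"{OLLAMA_HOST}/api/tags", timeout=5) as resp:
    models = json.load(resp)["models"]

for m in models:
    print(m["name"])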

Method 2: Hostname Connection (.local / mDNS)

For connecting to another computer on your local network using its hostname:

OLLAMA_HOST=http://my-computer.local:11434
OLLAMA_MODEL=gemma3:12b

Setup:

  1. Find your computer's hostname:

    • Windows: hostname in CMD
    • Mac: System Preferences → Sharing → Computer Name
    • Linux: hostname in terminal
  2. Ensure mDNS/Bonjour is working (built into macOS; Windows needs Bonjour, Linux needs Avahi/nss-mdns)
  3. Test the connection: ping my-computer.local (a Python check is sketched below)
  4. Configure your firewall to allow port 11434

Common hostnames:

  • desktop.local
  • macbook.local
  • xav-pcx.local
  • server.local
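
The mDNS check can also be run from Python; if this fails, the CLI will not reach the host either. A sketch, assuming the OS resolver handles mDNS (Bonjour on macOS/Windows, Avahi/nss-mdns on Linux):

import socket

hostname = "my-computer.local"  # replace with your server's hostname

try:
    print(f"{hostname} resolves to {socket.gethostbyname(hostname)}")
except socket.gaierror as e:
    print(f"Could not resolve {hostname} ({e}) -- fall back to an IP address (Method 3)")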

Method 3: Static IP Address

If your computer has a fixed IP on your network:

OLLAMA_HOST=http://192.168.1.100:11434
OLLAMA_MODEL=mistral

Setup:

  1. Set a static IP on your Ollama server machine

  2. Find your IP address:

    • Windows: ipconfig → IPv4 Address
    • Mac: ifconfig en0 | grep inet
    • Linux: ip addr show
  3. Configure firewall to allow port 11434

  4. Test: Open browser to http://192.168.1.100:11434 (should show "Ollama is running")


Method 4: Dynamic IP (Current Session)

For temporary connections when IP changes:

OLLAMA_HOST=http://192.168.1.XXX:11434
OLLAMA_MODEL=llama2

Setup:

  1. Find current IP (see Method 3)
  2. Update .env each time IP changes
  3. Consider using a hostname (Method 2) instead for a permanent setup

Method 5: Remote Server

For connecting to Ollama on a remote server:

OLLAMA_HOST=https://ollama.myserver.com
OLLAMA_MODEL=llama3.3

Setup:

  1. Set up Ollama on remote server
  2. Configure reverse proxy (nginx/caddy) with SSL
  3. Open firewall port (11434 or custom)
  4. Test connection: curl https://ollama.myserver.com/api/tags
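
A certificate that fails validation is a common stumbling block here, so it is worth testing with verification left on. A stdlib Python sketch against the example host above (ollama.myserver.com is a placeholder):

import json
import urllib.error
import urllib.request

OLLAMA_HOST = "https://ollama.myserver.com"  # placeholder from the example above

try:
    # urlopen verifies TLS certificates by default, so CERTIFICATE_VERIFY_FAILED
    # here points at the reverse proxy's SSL setup rather than at Ollama itself
    with urllib.request.urlopen(f"{OLLAMA_HOST}/api/tags", timeout=10) as resp:
        names = [m["name"] for m in json.load(resp)["models"]]
    print("Reachable. Models:", ", ".join(names) or "(none)")
except urllib.error.URLError as e:
    print("Connection failed:", e.reason)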

Method 6: Docker

If running Ollama in Docker:

OLLAMA_HOST=http://localhost:11434
OLLAMA_MODEL=llama2

Setup:

docker run -d -p 11434:11434 --name ollama ollama/ollama
docker exec -it ollama ollama pull llama2

Method 7: WSL (Windows Subsystem for Linux)

To reach Ollama on the Windows host from inside WSL:

OLLAMA_HOST=http://host.docker.internal:11434
OLLAMA_MODEL=llama2

Note that host.docker.internal typically resolves only when Docker Desktop is installed. If it does not resolve in your distro, find the Windows host IP from WSL instead:

ip route | grep default | awk '{print $3}'
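
The same gateway lookup can be scripted; a small sketch (assumes WSL2's usual networking, where the default route points at the Windows host):

import subprocess

# On WSL2, the default gateway is normally the Windows host
route = subprocess.run(["ip", "route"], capture_output=True, text=True, check=True)
gateway = next(
    (line.split()[2] for line in route.stdout.splitlines() if line.startswith("default")),
    None,
)
print(f"OLLAMA_HOST=http://{gateway}:11434" if gateway else "No default route found")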

Available Commands

Command      Description
/help        Show all available commands
/models      List available models
/switch      Switch to a different model
/pull        Download a new model
/delete      Delete a model
/host        Change Ollama host URL
/config      Show current configuration
/settings    Configure inference settings (temperature, etc.)
/modelinfo   Show detailed model information
/history     View chat history
/search      Search chat history
/clear       Clear conversation context
/new         Start a new chat session
/multi       Enter multi-line input mode
/exit        Exit the chat

Inference Settings

Configure AI behavior with /settings:

  • Temperature (0.0-2.0): Controls randomness and creativity

    • 0.1-0.3 for coding/math
    • 0.6-0.8 for general chat
    • 1.0-1.5 for creative writing
  • Top P (0.0-1.0): Nucleus sampling threshold (default: 0.9)

  • Top K (1-100): Limits sampling to the K most likely tokens (default: 40)

  • Context Window (128-32768): Tokens of conversation memory (default: 2048)

  • Max Output (1-4096): Maximum response length in tokens (default: 512)
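
In Ollama's REST API these settings travel in the options object of a request: temperature, top_p, top_k, num_ctx, and num_predict. A minimal non-streaming /api/generate call carrying them, independent of this CLI:

import json
import urllib.request

payload = {
    "model": "llama2",
    "prompt": "Explain nucleus sampling in one sentence.",
    "stream": False,  # return a single JSON object instead of a stream
    "options": {
        "temperature": 0.7,  # creativity
        "top_p": 0.9,        # nucleus sampling
        "top_k": 40,         # token choice limit
        "num_ctx": 2048,     # context window
        "num_predict": 512,  # max output tokens
    },
}

req = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req, timeout=120) as resp:
    print(json.load(resp)["response"])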

Troubleshooting

"Could not connect to Ollama"

  1. Verify Ollama is running:

    curl http://localhost:11434
    # Should return: "Ollama is running"
    
  2. Check firewall settings:

    • Windows: Allow port 11434 in Windows Firewall
    • Mac: System Preferences → Security → Firewall Options
    • Linux: sudo ufw allow 11434
  3. For hostname issues:

    # Test if hostname resolves
    ping your-hostname.local
    
    # If it fails, use IP address instead
    
  4. For Docker:

    docker ps  # Verify container is running
    docker logs ollama  # Check logs
    

"Model not found"

# List available models
ollama list

# Pull the model you want
ollama pull llama2
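
If the ollama binary isn't available on the client machine, the same check and pull work over the REST API. A sketch using /api/tags and /api/pull (current API docs name the field model, older servers used name; large models take a while to download):

import json
import urllib.request

HOST = "http://localhost:11434"
wanted = "llama2"

# List what the server already has
with urllib.request.urlopen(f"{HOST}/api/tags", timeout=5) as resp:
    installed = {m["name"] for m in json.load(resp)["models"]}

if any(name.split(":")[0] == wanted.split(":")[0] for name in installed):
    print(f"{wanted} is already installed")
else:
    # stream=False makes /api/pull block until the download finishes
    req = urllib.request.Request(
        f"{HOST}/api/pull",
        data=json.dumps({"model": wanted, "stream": False}).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=3600) as resp:
        print(json.load(resp)["status"])  # "success" on completion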

"Permission denied"

# Make sure the config file is readable and writable
chmod 644 ~/.ollama_chat_config.json

File Structure

ollama-chat-cli/
├── ollama_chat/
│   ├── __init__.py
│   ├── __main__.py
│   ├── cli.py           # Main CLI application
│   ├── ui.py            # UI components and banner
│   ├── api.py           # Ollama API client
│   ├── config.py        # Configuration management
│   ├── commands.py      # Command handlers
│   └── history.py       # Chat history manager
├── .env.example         # Example configuration
├── .gitignore          
├── setup.py            
├── requirements.txt    
└── README.md

Security Notes

  • .env file is gitignored and won't be committed
  • Config files are stored in your home directory
  • Never commit your actual .env file to Git
  • For remote connections, use HTTPS with proper SSL certificates
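
Scripts that share this configuration usually read the same .env via python-dotenv rather than hard-coding values. A short sketch (whether this CLI itself uses python-dotenv internally is not specified here):

import os

from dotenv import load_dotenv  # pip install python-dotenv

load_dotenv()  # reads .env from the current directory; never commit that file

host = os.getenv("OLLAMA_HOST", "http://localhost:11434")
model = os.getenv("OLLAMA_MODEL", "llama2")
print(f"Connecting to {host} with model {model}")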

License

MIT License - feel free to use and modify!

Contributing

Contributions are welcome! Please feel free to submit a Pull Request.

Support

If you encounter any issues, please open an issue on GitHub.
