# Ollama Chat CLI

A feature-rich command-line interface for Ollama with chat history, inference settings, and a beautiful UI.

## Features
- Beautiful Claude Code-style interface
- Chat history with session management
- Configurable inference settings (temperature, top_p, context window, etc.)
- Real-time context tracking and token usage
- Easy model switching and management
- Search through chat history
- Support for local and remote Ollama servers
- Secure configuration via a `.env` file
## Requirements
- Python 3.7+
- Ollama installed and running
- pip (Python package manager)
## Quick Start

### 1. Clone the Repository

```bash
git clone https://github.com/yourusername/ollama-chat-cli.git
cd ollama-chat-cli
```

### 2. Install Dependencies

```bash
pip install -r requirements.txt
```

Or if using the package:

```bash
pip install -e .
```

### 3. Configure Your Connection

Copy the example environment file:

```bash
cp .env.example .env
```

Edit `.env` with your preferred connection method (see below).
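The CLI picks these variables up at startup. A minimal sketch of that loading step, assuming the `python-dotenv` package (the helper below is illustrative, not the project's actual loader):

```python
import os

from dotenv import load_dotenv  # pip install python-dotenv

def load_connection_settings():
    """Read Ollama connection settings from .env, with local defaults."""
    load_dotenv()  # picks up .env from the current directory
    host = os.getenv("OLLAMA_HOST", "http://localhost:11434")
    model = os.getenv("OLLAMA_MODEL", "llama2")
    return host, model

host, model = load_connection_settings()
print(f"Connecting to {host} with model {model}")
```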
### 4. Run the CLI

```bash
python ollama_chat/cli.py
```

Or if installed as a package:

```bash
ollama-chat
# or the short alias:
oc
```
## Configuration Methods

### Method 1: Local Connection (Default)

If Ollama is running on the same machine:

```
OLLAMA_HOST=http://localhost:11434
OLLAMA_MODEL=llama2
```
Setup:

1. Install Ollama from [ollama.ai](https://ollama.ai)
2. Start the server: `ollama serve`
3. Pull a model: `ollama pull llama2`
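Before launching the CLI you can sanity-check the local server: the root endpoint of a running Ollama server responds with the plain-text "Ollama is running". A quick illustrative check using the `requests` library:

```python
import requests

def ollama_is_up(host: str = "http://localhost:11434") -> bool:
    """Return True if an Ollama server answers at the given host."""
    try:
        resp = requests.get(host, timeout=3)
        return resp.ok and "Ollama is running" in resp.text
    except requests.RequestException:
        return False

print(ollama_is_up())  # True once `ollama serve` is running
```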
### Method 2: Hostname Connection (.local / mDNS)

For connecting to another computer on your local network using its hostname:

```
OLLAMA_HOST=http://my-computer.local:11434
OLLAMA_MODEL=gemma3:12b
```
Setup:

1. Find your computer's hostname:
   - Windows: run `hostname` in CMD
   - Mac: System Preferences → Sharing → Computer Name
   - Linux: run `hostname` in a terminal
2. Ensure mDNS/Bonjour is working:
   - Windows: install Bonjour Print Services
   - Mac/Linux: built-in
3. Test the connection: `ping my-computer.local`
4. Configure your firewall to allow port 11434

Common hostnames: `desktop.local`, `macbook.local`, `xav-pcx.local`, `server.local`
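If `ping` is unavailable, you can also test mDNS resolution from Python. A small standard-library sketch (`my-computer.local` is a placeholder, and `.local` resolution depends on your system resolver/Bonjour setup):

```python
import socket
from typing import Optional

def resolve_mdns(hostname: str = "my-computer.local") -> Optional[str]:
    """Resolve a .local hostname via the system resolver; None on failure."""
    try:
        return socket.gethostbyname(hostname)
    except socket.gaierror:
        return None

ip = resolve_mdns()
print(ip or "Hostname did not resolve; try a static IP (Method 3).")
```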
### Method 3: Static IP Address

If your computer has a fixed IP on your network:

```
OLLAMA_HOST=http://192.168.1.100:11434
OLLAMA_MODEL=mistral
```
Setup:

1. Set a static IP on your Ollama server machine
2. Find your IP address:
   - Windows: `ipconfig` → IPv4 Address
   - Mac: `ifconfig en0 | grep inet`
   - Linux: `ip addr show`
3. Configure your firewall to allow port 11434
4. Test: open a browser to `http://192.168.1.100:11434` (it should show "Ollama is running")
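Beyond the browser test, you can confirm the port itself is reachable from the client machine. An illustrative TCP probe using only the standard library:

```python
import socket

def port_open(host: str = "192.168.1.100", port: int = 11434) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=3):
            return True
    except OSError:
        return False

print(port_open())  # False usually means a firewall rule or a wrong IP
```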
### Method 4: Dynamic IP (Current Session)

For temporary connections when the IP changes:

```
OLLAMA_HOST=http://192.168.1.XXX:11434
OLLAMA_MODEL=llama2
```

Setup:

1. Find the current IP (see Method 3)
2. Update `.env` each time the IP changes
3. Consider using a hostname (Method 2) instead for a permanent setup
### Method 5: Remote Server

For connecting to Ollama on a remote server:

```
OLLAMA_HOST=https://ollama.myserver.com
OLLAMA_MODEL=llama3.3
```

Setup:

1. Set up Ollama on the remote server
2. Configure a reverse proxy (nginx/caddy) with SSL
3. Open the firewall port (11434 or custom)
4. Test the connection: `curl https://ollama.myserver.com/api/tags`
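The same check works from Python: `/api/tags` is Ollama's model-listing endpoint and returns JSON with a `models` array, so it doubles as a connectivity and model-availability test (sketch using `requests`; the URL is the placeholder from above):

```python
import requests

def list_remote_models(host: str = "https://ollama.myserver.com"):
    """Return the model names served by a remote Ollama instance."""
    resp = requests.get(f"{host}/api/tags", timeout=5)
    resp.raise_for_status()
    return [m["name"] for m in resp.json().get("models", [])]

print(list_remote_models())  # e.g. ['llama3.3:latest']
```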
### Method 6: Docker

If running Ollama in Docker:

```
OLLAMA_HOST=http://localhost:11434
OLLAMA_MODEL=llama2
```

Setup:

```bash
docker run -d -p 11434:11434 --name ollama ollama/ollama
docker exec -it ollama ollama pull llama2
```
### Method 7: WSL (Windows Subsystem for Linux)

Accessing the Windows host from WSL:

```
OLLAMA_HOST=http://host.docker.internal:11434
OLLAMA_MODEL=llama2
```

If `host.docker.internal` doesn't resolve (it typically requires Docker Desktop), find the Windows host IP from WSL instead:

```bash
ip route | grep default | awk '{print $3}'
```
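To avoid re-running that command by hand, here is a sketch that reads the default gateway (the Windows host under WSL2's default NAT networking, an assumption that won't hold in mirrored-networking mode) and builds the host URL:

```python
import subprocess

def windows_host_url(port: int = 11434) -> str:
    """Build the Ollama URL for the Windows host as seen from WSL2 (NAT mode)."""
    route = subprocess.check_output(["ip", "route"], text=True)
    for line in route.splitlines():
        if line.startswith("default"):
            gateway = line.split()[2]  # "default via <gateway> dev eth0 ..."
            return f"http://{gateway}:{port}"
    raise RuntimeError("No default route found")

print(windows_host_url())  # e.g. http://172.20.0.1:11434
```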
## Available Commands

| Command | Description |
|---|---|
| `/help` | Show all available commands |
| `/models` | List available models |
| `/switch` | Switch to a different model |
| `/pull` | Download a new model |
| `/delete` | Delete a model |
| `/host` | Change the Ollama host URL |
| `/config` | Show the current configuration |
| `/settings` | Configure inference settings (temperature, etc.) |
| `/modelinfo` | Show detailed model information |
| `/history` | View chat history |
| `/search` | Search chat history |
| `/clear` | Clear the conversation context |
| `/new` | Start a new chat session |
| `/multi` | Enter multi-line input mode |
| `/exit` | Exit the chat |
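For a sense of how slash commands like these are typically wired up, here is a hypothetical dispatcher sketch; it is not the project's actual `commands.py`, just one common registry pattern:

```python
# Hypothetical dispatcher sketch -- not the project's actual commands.py.
from typing import Callable, Dict

COMMANDS: Dict[str, Callable[[str], None]] = {}

def command(name: str):
    """Register a handler for a slash command like /help or /models."""
    def register(fn: Callable[[str], None]):
        COMMANDS[name] = fn
        return fn
    return register

@command("/help")
def show_help(args: str) -> None:
    print("Available commands:", ", ".join(sorted(COMMANDS)))

def dispatch(line: str) -> bool:
    """Run a slash command if the input starts with '/'; True if handled."""
    if not line.startswith("/"):
        return False
    name, _, args = line.partition(" ")
    handler = COMMANDS.get(name)
    if handler is None:
        print(f"Unknown command: {name} (try /help)")
    else:
        handler(args)
    return True

dispatch("/help")  # -> Available commands: /help
```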
## Inference Settings

Configure model behavior with `/settings`:

- **Temperature** (0.0-2.0): Controls creativity
  - 0.1-0.3 for coding/math
  - 0.6-0.8 for general chat
  - 1.0-1.5 for creative writing
- **Top P** (0.0-1.0): Nucleus sampling (default: 0.9)
- **Top K** (1-100): Limits token choices (default: 40)
- **Context Window** (128-32768): Conversation memory in tokens (default: 2048)
- **Max Output** (1-4096): Maximum response length in tokens (default: 512)
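These settings correspond to the `options` object of Ollama's REST API, where the context window is `num_ctx` and max output is `num_predict`. A minimal non-streaming request showing the mapping (using `requests` directly rather than this CLI):

```python
import requests

payload = {
    "model": "llama2",
    "messages": [{"role": "user", "content": "Explain nucleus sampling in one line."}],
    "stream": False,
    "options": {
        "temperature": 0.7,  # creativity
        "top_p": 0.9,        # nucleus sampling
        "top_k": 40,         # token choice limit
        "num_ctx": 2048,     # context window
        "num_predict": 512,  # max output tokens
    },
}

resp = requests.post("http://localhost:11434/api/chat", json=payload, timeout=60)
resp.raise_for_status()
print(resp.json()["message"]["content"])
```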
## Troubleshooting

### "Could not connect to Ollama"

1. Verify Ollama is running:

   ```bash
   curl http://localhost:11434  # Should return: "Ollama is running"
   ```

2. Check firewall settings:
   - Windows: allow port 11434 in Windows Firewall
   - Mac: System Preferences → Security → Firewall Options
   - Linux: `sudo ufw allow 11434`

3. For hostname issues:

   ```bash
   # Test if the hostname resolves
   ping your-hostname.local
   # If it fails, use the IP address instead
   ```

4. For Docker:

   ```bash
   docker ps           # Verify the container is running
   docker logs ollama  # Check the logs
   ```
"Model not found"
# List available models
ollama list
# Pull the model you want
ollama pull llama2
"Permission denied"
# Make sure config directory is writable
chmod 755 ~/.ollama_chat_config.json
## File Structure

```
ollama-chat-cli/
├── ollama_chat/
│   ├── __init__.py
│   ├── __main__.py
│   ├── cli.py          # Main CLI application
│   ├── ui.py           # UI components and banner
│   ├── api.py          # Ollama API client
│   ├── config.py       # Configuration management
│   ├── commands.py     # Command handlers
│   └── history.py      # Chat history manager
├── .env.example        # Example configuration
├── .gitignore
├── setup.py
├── requirements.txt
└── README.md
```
## Security Notes

- The `.env` file is gitignored and won't be committed
- Config files are stored in your home directory
- Never commit your actual `.env` file to Git
- For remote connections, use HTTPS with proper SSL certificates
## License

MIT License - feel free to use and modify!

## Contributing

Contributions are welcome! Please feel free to submit a Pull Request.

## Support

If you encounter any issues, please open an issue on GitHub.