🐚 llmshell-cli
A powerful Python CLI tool that converts natural language into Linux/Unix shell commands using LLMs.
✨ Features
- 🤖 Multiple LLM Backends: GPT4All (local, default), OpenAI, Ollama, or custom APIs
- 🔒 Privacy-First: Uses GPT4All locally by default - no data leaves your machine
- 🎯 Smart Command Generation: Converts natural language to accurate shell commands
- ✅ Safe Execution: Confirmation prompts before running commands
- 🎨 Beautiful Output: Colored terminal output using Rich
- ⚙️ Flexible Configuration: YAML-based config at ~/.llmshell/config.yaml
- 🔧 Easy Setup: Auto-downloads models, handles fallbacks gracefully
📦 Installation
```shell
pip install llmshell-cli
```
Development Installation
```shell
git clone https://github.com/imgnr/llmshell-cli.git
cd llmshell-cli
pip install -e ".[dev]"
```
🚀 Quick Start
Generate a Command
```shell
llmshell run "list all docker containers"
# Output: docker ps -a
```
Get Command with Explanation
```shell
llmshell run "find large files" --explain
```
Dry Run (Don't Execute)
```shell
llmshell run "remove all logs" --dry-run
```
Auto-Execute (Skip Confirmation)
```shell
llmshell run "show disk usage" --execute
# Note: Dangerous commands will still require confirmation
```
📖 CLI Commands
llmshell run
Generate and optionally execute shell commands:
```shell
llmshell run "your natural language request"
llmshell run "list python files" --dry-run
llmshell run "check memory usage" --explain
llmshell run "restart nginx" --execute
```
Options:
- `--dry-run` / `-d`: Show the command without executing it
- `--explain` / `-x`: Include an explanation with the command
- `--execute` / `-e`: Skip the confirmation prompt (except for dangerous commands)
- `--backend` / `-b`: Override the default backend (`gpt4all`, `openai`, `ollama`, `custom`)
Safety Note: Dangerous commands (like rm -rf /, mkfs, etc.) will always require confirmation, even with --execute.
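To illustrate how such a safety gate can work, here is a hypothetical sketch of a pattern-based check for destructive commands. The patterns and function name are illustrative assumptions, not llmshell's actual implementation, which may use different or additional heuristics:

```python
import re

# Illustrative (not llmshell's actual list): conservative patterns
# for commands that should always require confirmation.
DANGEROUS_PATTERNS = [
    r"\brm\s+(-[a-zA-Z]*f[a-zA-Z]*\s+)?/\S*",  # rm -rf on absolute paths
    r"\bmkfs(\.\w+)?\b",                        # filesystem formatting
    r"\bdd\s+.*\bof=/dev/",                     # raw writes to block devices
]

def is_dangerous(command: str) -> bool:
    """Return True if the command matches any destructive pattern."""
    return any(re.search(p, command) for p in DANGEROUS_PATTERNS)
```

A deny-list like this errs on the side of prompting too often; an `--execute` flow would still call it and fall back to a confirmation prompt on a match.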
llmshell config
Manage configuration:
```shell
# Show current configuration
llmshell config show

# Set a configuration value
llmshell config set llm_backend openai
llmshell config set backends.openai.api_key sk-xxxxx

# List available backends
llmshell config backends
```
llmshell model
Manage GPT4All models:
```shell
# Show available models to download
llmshell model show-available

# Install/download the default model
llmshell model install

# Install a specific model
llmshell model install --name Meta-Llama-3-8B-Instruct.Q4_0.gguf

# List installed models
llmshell model list
```
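For a sense of what `llmshell model list` has to do, here is a minimal sketch that scans a directory for downloaded `.gguf` files. The directory path and function name are assumptions for illustration; the tool's real model location may differ:

```python
from pathlib import Path

def list_installed_models(model_dir: str = "~/.cache/gpt4all") -> list:
    """Return sorted names of downloaded .gguf model files.

    The default directory is an assumption; llmshell auto-detects
    the actual model path (see model_path in the config).
    """
    root = Path(model_dir).expanduser()
    if not root.is_dir():
        return []
    return sorted(p.name for p in root.glob("*.gguf"))
```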
llmshell doctor
Diagnose setup and check backend availability:
```shell
llmshell doctor
```
Output shows:
- Configuration file status
- Available backends
- Model installation status
- API connectivity
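The checks above can be sketched as a small diagnostic function. This is an illustrative approximation, not llmshell's own code; the check names and the Ollama endpoint default are assumptions:

```python
from pathlib import Path
from urllib.request import urlopen
from urllib.error import URLError

def diagnose(config_path: str,
             ollama_url: str = "http://localhost:11434/api/tags") -> dict:
    """Run doctor-style checks and return a name -> pass/fail mapping."""
    checks = {}
    # Configuration file status
    checks["config_file"] = Path(config_path).expanduser().is_file()
    # API connectivity (here: is the local Ollama server reachable?)
    try:
        with urlopen(ollama_url, timeout=2) as resp:
            checks["ollama_reachable"] = resp.status == 200
    except (URLError, OSError):
        checks["ollama_reachable"] = False
    return checks
```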
⚙️ Configuration
Configuration is stored at ~/.llmshell/config.yaml:
```yaml
llm_backend: gpt4all

backends:
  gpt4all:
    model: mistral-7b-instruct-v0.2.Q4_0.gguf
    model_path: null  # Auto-detected
  openai:
    api_key: sk-your-api-key-here
    model: gpt-4-turbo
    base_url: null  # Optional custom endpoint
  ollama:
    model: llama3
    api_url: http://localhost:11434
  custom:
    api_url: https://your-llm-endpoint/v1/chat/completions
    headers:
      Authorization: Bearer YOUR_TOKEN

execution:
  auto_execute: false
  confirmation_required: true

output:
  colored: true
  verbose: false
```
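Because users typically set only a few keys, a loader for this layout would overlay user values onto built-in defaults. A minimal sketch of that merge (YAML parsing omitted; the defaults shown mirror the config above, and the function name is illustrative):

```python
DEFAULTS = {
    "llm_backend": "gpt4all",
    "execution": {"auto_execute": False, "confirmation_required": True},
    "output": {"colored": True, "verbose": False},
}

def deep_merge(base: dict, override: dict) -> dict:
    """Recursively overlay `override` onto `base`, mutating neither."""
    merged = dict(base)
    for key, value in override.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            merged[key] = deep_merge(merged[key], value)
        else:
            merged[key] = value
    return merged
```

A recursive merge means `llmshell config set output.verbose true` changes one leaf without clobbering the sibling `output.colored` default.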
🔧 Backend Setup
GPT4All (Default - Local)
No setup required — llmshell downloads the default model automatically on first run. To manage models yourself:
```shell
# Show available models
llmshell model show-available

# Install a model (default: Meta Llama 3)
llmshell model install

# Or install a specific model
llmshell model install --name Phi-3-mini-4k-instruct.Q4_0.gguf
```
This downloads the model locally (~2-5GB depending on the model).
OpenAI
- Get API key from OpenAI
- Configure:

```shell
llmshell config set backends.openai.api_key sk-xxxxx
llmshell config set llm_backend openai
```
Ollama
- Install Ollama
- Pull a model: `ollama pull llama3`
- Configure: `llmshell config set llm_backend ollama`
Custom API
For any OpenAI-compatible API:
```shell
llmshell config set llm_backend custom
llmshell config set backends.custom.api_url https://your-endpoint
llmshell config set backends.custom.headers.Authorization "Bearer TOKEN"
```
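For reference, an OpenAI-compatible backend receives a standard chat-completions POST. The sketch below builds such a request with the stdlib; the system prompt and function name are illustrative assumptions, not llmshell's actual wording:

```python
import json
from urllib.request import Request

def build_chat_request(api_url: str, token: str, prompt: str,
                       model: str = "gpt-4-turbo") -> Request:
    """Construct the POST an OpenAI-compatible endpoint expects."""
    payload = {
        "model": model,
        "messages": [
            # Hypothetical system prompt; the real one may differ.
            {"role": "system", "content": "Reply with a single shell command only."},
            {"role": "user", "content": prompt},
        ],
    }
    return Request(
        api_url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Authorization": f"Bearer {token}",
                 "Content-Type": "application/json"},
        method="POST",
    )
```

Any server that accepts this shape (vLLM, LM Studio, a proxy, etc.) should work as a `custom` backend.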
💡 Usage Examples
```shell
# Docker commands
llmshell run "stop all running containers"
llmshell run "remove unused images"

# File operations
llmshell run "find files modified in last 24 hours"
llmshell run "compress all logs to archive"

# System monitoring
llmshell run "show top 10 memory-consuming processes"
llmshell run "check disk space on all mounts"

# Git operations
llmshell run "show commits from last week"
llmshell run "list branches sorted by recent activity"

# Network operations
llmshell run "check if port 8080 is open"
llmshell run "show active network connections"
```
🐍 Python API
You can also use llmshell programmatically:
```python
from gpt_shell.config import Config
from gpt_shell.llm_manager import LLMManager

# Initialize
config = Config()
manager = LLMManager(config)

# Generate command
command = manager.generate_command("list all docker containers")
print(f"Generated: {command}")

# With explanation
result = manager.generate_command("find large files", explain=True)
print(result)
```
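Building on that API, a confirm-and-run wrapper might look like the sketch below. It takes any generator callable (e.g. `manager.generate_command`) so the pattern is shown without depending on a configured backend; the helper name is an illustration, not part of llmshell's API:

```python
import subprocess

def confirm_and_run(generate, request, ask=input):
    """Generate a shell command and execute it only on explicit consent.

    `generate` is any callable mapping a natural-language request to a
    command string; `ask` is injectable so the prompt can be scripted.
    Returns the CompletedProcess, or None if the user declined.
    """
    command = generate(request)
    print(f"Generated: {command}")
    if ask("Run this command? [y/N] ").strip().lower() != "y":
        return None
    # shell=True because the generated string is a full shell command line
    return subprocess.run(command, shell=True, capture_output=True, text=True)
```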
🐳 Docker Support
Run llmshell in a Docker container for isolated environments.
Quick Start
```shell
# Build the image
docker build -t llmshell:latest .

# Run a command
docker run --rm llmshell:latest run "list files"

# With persistent config
docker run -it --rm \
  -v llmshell-data:/root/.llmshell \
  llmshell:latest model install
```
Using Docker Compose
```shell
# Interactive mode
docker-compose run --rm llmshell

# Inside container
llmshell run "show disk usage"
```
For detailed Docker instructions, see DOCKER.md
🧪 Testing
```shell
# Run all tests
pytest

# Run with coverage
pytest --cov=gpt_shell --cov-report=html

# Run specific test file
pytest tests/test_config.py
```
🛠️ Development
Setup
```shell
# Clone and install
git clone https://github.com/imgnr/llmshell-cli.git
cd llmshell-cli
pip install -e ".[dev]"

# Run tests
pytest

# Format code
black src tests

# Type checking
mypy src

# Linting
flake8 src tests
```
🤝 Contributing
Contributions are welcome! Please:
- Fork the repository
- Create a feature branch
- Add tests for new features
- Ensure all tests pass
- Submit a pull request
📋 Requirements
- Python 3.8+
- ~4GB disk space for GPT4All model (optional)
- Internet connection (for the OpenAI/custom backends and for downloading models)
🔒 Privacy
- GPT4All: All processing happens locally, no data sent anywhere
- OpenAI/Custom APIs: Commands are sent to external services
- Ollama: Runs locally, no data sent to external servers
🐛 Troubleshooting
GPT4All model not found
```shell
llmshell model install
```
OpenAI API errors
```shell
llmshell config set backends.openai.api_key sk-xxxxx
llmshell doctor
```
Ollama not connecting
```shell
# Check if Ollama is running
curl http://localhost:11434/api/tags

# Start Ollama
ollama serve
```
Configuration issues
```shell
# Reset to defaults
rm ~/.llmshell/config.yaml
llmshell config show
```
📝 License
MIT License - see LICENSE file for details.
🙏 Acknowledgments
- GPT4All - Local LLM runtime
- Typer - CLI framework
- Rich - Terminal formatting
- OpenAI - API integration
- Ollama - Local LLM platform
📚 More Examples
System Administration
```shell
llmshell run "create a backup of /etc directory"
llmshell run "find processes using more than 1GB RAM"
llmshell run "schedule a cron job for midnight"
```
Development
```shell
llmshell run "count lines of code in this project"
llmshell run "find all TODO comments in python files"
llmshell run "generate requirements.txt from imports"
```
Data Processing
```shell
llmshell run "extract column 2 from CSV file"
llmshell run "convert all PNG images to JPG"
llmshell run "merge all text files into one"
```
Made with ❤️ for developers who prefer typing naturally
File details
Details for the file llmshell_cli-0.0.2.tar.gz.
File metadata
- Download URL: llmshell_cli-0.0.2.tar.gz
- Upload date:
- Size: 23.8 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.9.6
File hashes

| Algorithm | Hash digest |
|---|---|
| SHA256 | `fafcbe8f6ee31798c803a4319c2023881f41a764c9782acd4a22243fb9733953` |
| MD5 | `ca23075f044326707880a6aafb2f9c60` |
| BLAKE2b-256 | `004c42d7e1f2ee87eae73200090a45561412b90bc779292bd6d32f827ae7e432` |
File details
Details for the file llmshell_cli-0.0.2-py3-none-any.whl.
File metadata
- Download URL: llmshell_cli-0.0.2-py3-none-any.whl
- Upload date:
- Size: 20.2 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.9.6
File hashes

| Algorithm | Hash digest |
|---|---|
| SHA256 | `40088f1a6f3a055830dd7ab869381709733e81555f18de7a65508d506aa49514` |
| MD5 | `5f0e45bef046cb41e3af70819f3ea387` |
| BLAKE2b-256 | `3e3d27b8e276aad38e4fba9fa55aa955844cf6dc1da1486ab1595fabe68ed205` |