AI Shell Command Generator
An AI-powered shell command generator that creates accurate shell commands from natural language descriptions. Supports both cloud-based AI (Anthropic Claude) and local AI models (Ollama), with intelligent risk assessment and OS-aware command generation.
✨ Features
🤖 Dual AI Provider Support
- Anthropic Claude 3.5 Haiku - Fast, cloud-based AI for reliable command generation
- Ollama Integration - Use local models like OpenAI's gpt-oss, Qwen, DeepSeek, and more
- Interactive Model Selection - Discover and choose from available Ollama models
- Automatic model detection and fallback
🖥️ OS-Aware Command Generation
- Auto-detects macOS, Linux, and Windows
- Generates BSD vs GNU compatible commands
- Prevents platform-specific errors (e.g., avoids the GNU-only -printf flag on macOS)
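As a rough sketch of how OS-aware generation can work (the function name and hint strings below are illustrative, not the tool's actual API), Python's standard platform module is enough to pick the right userland:

```python
import platform

def os_hint() -> str:
    """Return a platform note to prepend to the AI prompt (illustrative)."""
    system = platform.system()  # 'Darwin', 'Linux', or 'Windows'
    if system == "Darwin":
        return "Target macOS: BSD userland; avoid GNU-only flags like find -printf."
    if system == "Linux":
        return "Target Linux: GNU coreutils are available."
    if system == "Windows":
        return "Target Windows: prefer cmd/PowerShell built-ins."
    return "Target a generic POSIX shell."
```

Feeding a hint like this into the prompt is one simple way to steer the model toward BSD- or GNU-compatible output.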
⚠️ AI-Powered Risk Assessment
- Analyzes every generated command for potential risks
- Color-coded warnings for dangerous operations
- Identifies data deletion, permission changes, system modifications, and more
- Can be disabled with the --no-risk-check flag
💻 Flexible Usage Modes
- Interactive Mode - Guided command generation with safety prompts
- Non-Interactive Mode - Perfect for scripting and automation
- Auto-Copy - Automatically copy commands to clipboard
🛡️ Safety Features
- All warnings displayed in red for high visibility
- Risk levels: HIGH, MEDIUM, LOW
- Detailed explanations of potential dangers
- Optional risk assessment bypass for trusted automation
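A minimal sketch of the red warning formatting described above, using standard ANSI escape codes (the function name is assumed for illustration):

```python
# ANSI escape codes: red foreground and reset.
RED = "\033[31m"
RESET = "\033[0m"

def format_warning(level: str, reason: str) -> str:
    """Render a risk warning in red; level is HIGH, MEDIUM, or LOW."""
    return f"{RED}WARNING: {level} risk - {reason}{RESET}"
```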
🚀 Installation
From PyPI (Recommended)
pip install ai-shell-command-generator
From Source
git clone https://github.com/codingthefuturewithai/ai-shell-command-generator.git
cd ai-shell-command-generator
python -m venv venv
source venv/bin/activate # On Windows: venv\Scripts\activate
pip install -r requirements.txt
pip install -e .
Requirements: Python 3.10 or higher
Development Setup
# Install all dependencies including test tools
pip install -r requirements.txt
# Run tests
python -m pytest tests/ -v
# Run specific test file
python -m pytest tests/unit/test_main.py -v
⚙️ Setup
For Anthropic Claude (Cloud AI)
- Get your API key from Anthropic Console
- Create a .env file:
echo "ANTHROPIC_API_KEY=your_api_key_here" > .env
For Ollama (Local AI)
- Install Ollama: https://ollama.ai/
- Pull a model (optional, defaults to gpt-oss:latest):
ollama pull gpt-oss:latest
ollama pull qwen2.5-coder:7b
ollama pull deepseek-r1:8b
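Model discovery can be sketched by parsing the JSON that Ollama's REST API returns from GET /api/tags (response shape per Ollama's API; the parser below is an illustrative stand-in for the tool's own detection logic):

```python
import json

def model_names(tags_json: str) -> list[str]:
    """Extract installed model names from an /api/tags JSON payload."""
    data = json.loads(tags_json)
    return [m["name"] for m in data.get("models", [])]
```

In practice the payload would come from an HTTP GET to http://localhost:11434/api/tags.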
📖 Usage
Interactive Mode
ai-shell
# or
aisc
In interactive mode, you'll be prompted to:
- Select AI Provider - Choose between Anthropic Claude or Ollama
- Select Shell Environment - Choose cmd, powershell, or bash
- Select Ollama Model (if using Ollama) - Choose from available models
- Enter Commands - Describe what you want to do in natural language
Non-Interactive Mode
# Basic usage
ai-shell -p anthropic -s bash -q "find all Python files modified today"
# With Ollama
ai-shell -p ollama -s bash -q "list running processes using more than 100MB RAM"
# Auto-copy to clipboard
ai-shell -p anthropic -s bash -q "backup my documents" --copy
# Use specific Ollama model
ai-shell -p ollama -m qwen2.5-coder:7b -s bash -q "analyze disk usage"
# Disable risk assessment (for automation)
ai-shell -p ollama -s bash -q "clean temp files" --no-risk-check
Command Line Options
Options:
-p, --provider [anthropic|ollama] AI provider to use
-s, --shell [cmd|powershell|bash] Shell environment
-q, --query TEXT Command query (non-interactive mode)
-m, --model TEXT Specific Ollama model (default: gpt-oss:latest)
--no-risk-check Disable risk assessment
-c, --copy Automatically copy command to clipboard
--help Show help message
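For illustration, the option table above can be mirrored with argparse (the real CLI is built on Click; this sketch only shows the option shapes and defaults):

```python
import argparse

def build_parser() -> argparse.ArgumentParser:
    p = argparse.ArgumentParser(prog="ai-shell")
    p.add_argument("-p", "--provider", choices=["anthropic", "ollama"])
    p.add_argument("-s", "--shell", choices=["cmd", "powershell", "bash"])
    p.add_argument("-q", "--query", help="Command query (non-interactive mode)")
    p.add_argument("-m", "--model", default="gpt-oss:latest")
    p.add_argument("--no-risk-check", action="store_true")
    p.add_argument("-c", "--copy", action="store_true")
    return p
```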
🎯 Examples
Safe Commands
$ ai-shell -p anthropic -s bash -q "list files in current directory"
ls
$ ai-shell -p ollama -s bash -q "show disk usage" --copy
df -h
# Command copied to clipboard!
Risky Commands (with warnings)
$ ai-shell -p anthropic -s bash -q "delete all .log files"
# WARNING: HIGH risk - Recursively deletes all log files without confirmation
find . -type f -name "*.log" -delete
$ ai-shell -p ollama -s bash -q "change permissions to 777"
# WARNING: HIGH risk - Grants full permissions to all users, security vulnerability
chmod 777
Complex Queries
# Find large files with grouping
ai-shell -p anthropic -s bash -q "find files larger than 50MB in ~/projects, group by extension, exclude node_modules"
# Process monitoring
ai-shell -p ollama -s bash -q "show processes using more than 100MB memory, sorted by usage"
# Text processing
ai-shell -p anthropic -s bash -q "search all JavaScript files for console.log statements, show line numbers"
🖼️ Screenshots
Interactive Mode with Risk Assessment
Interactive Ollama Model Selection
Interactive mode with Ollama model discovery and selection - users can choose from all available models
Non-Interactive Mode
Ollama Integration with Auto-Copy
🔧 Advanced Configuration
Environment Variables
# .env file
ANTHROPIC_API_KEY=your_anthropic_api_key
OLLAMA_HOST=localhost:11434 # Optional: custom Ollama host
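A minimal .env loader sketch (the packaged tool most likely uses a library such as python-dotenv; this hand-rolled version just shows the idea):

```python
import os

def load_env(path: str = ".env") -> None:
    """Load KEY=VALUE lines into the environment, skipping comments."""
    with open(path) as f:
        for line in f:
            line = line.strip()
            if line and not line.startswith("#") and "=" in line:
                key, _, value = line.partition("=")
                # setdefault keeps any value already exported in the shell.
                os.environ.setdefault(key.strip(), value.strip())
```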
Available Ollama Models
The tool automatically detects available models, but you can specify any:
- gpt-oss:latest - OpenAI's open-source model
- qwen2.5-coder:7b - Qwen coding model
- deepseek-r1:8b - DeepSeek reasoning model
- mistral-nemo:12b - Mistral model
- Any other Ollama model
🏗️ Architecture
Command Generation Flow
- Query Processing - Natural language query analysis
- OS Detection - Platform-specific command generation
- AI Generation - Provider-specific command creation
- Risk Assessment - Safety analysis of generated command
- User Interaction - Display and clipboard options
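The five steps above can be sketched as a small pipeline; generate and assess_risk are injected stand-ins for the provider calls (hypothetical names, not the tool's actual internals):

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Result:
    command: str
    risk_level: str
    risk_reason: str

def run_pipeline(
    query: str,
    os_name: str,
    generate: Callable[[str], str],
    assess_risk: Callable[[str], tuple[str, str]],
    check_risk: bool = True,
) -> Result:
    prompt = f"[{os_name}] {query}"          # steps 1-2: query + OS context
    command = generate(prompt)               # step 3: provider-specific generation
    level, reason = assess_risk(command) if check_risk else ("NONE", "")  # step 4
    return Result(command, level, reason)    # step 5: caller displays / copies
```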
Risk Assessment Categories
- Data Deletion - rm, dd, destructive operations
- Permission Changes - chmod, chown, security implications
- System Modifications - Network changes, system files
- Recursive Operations - Potential for widespread changes
- Network Exposure - Security vulnerabilities
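The categories above could be approximated with a keyword heuristic like the one below; note this is purely illustrative, since the tool itself asks the AI model for the assessment rather than pattern-matching:

```python
import re

# Illustrative patterns keyed to a few of the categories above.
RISK_PATTERNS = {
    r"\b(rm|dd)\b": "Data Deletion",
    r"\b(chmod|chown)\b": "Permission Changes",
    r"-r\b|-R\b|--recursive": "Recursive Operations",
}

def flag_risks(command: str) -> list[str]:
    """Return the risk categories whose patterns match the command."""
    return [cat for pat, cat in RISK_PATTERNS.items() if re.search(pat, command)]
```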
🧪 Testing
# Run unit tests
python -m pytest tests/unit/test_main.py -v
# Test specific functionality
python main.py -p anthropic -s bash -q "test command" --no-risk-check
🤝 Contributing
- Fork the repository
- Create a feature branch: git checkout -b feature-name
- Make your changes and add tests
- Commit your changes: git commit -am 'Add feature'
- Push to the branch: git push origin feature-name
- Submit a pull request
📝 License
This project is licensed under the MIT License - see the LICENSE file for details.
🙏 Acknowledgments
- Anthropic for Claude AI
- Ollama for local AI model support
- Click for CLI framework
- OpenAI for open-source models
📚 Documentation
For more detailed documentation, examples, and troubleshooting, visit our GitHub repository.
Made with ❤️ for developers who want to work smarter, not harder.