AI command-line agent for terminal automation and task execution
VEXIS-CLI-2.0
Advanced 5-Phase AI-powered terminal automation system
Transform natural language into precise terminal commands with intelligent multi-phase processing
Quick Start • Documentation • Features • Configuration • Contributing
Why VEXIS-CLI-2.0?
VEXIS-CLI-2.0 represents a major step forward in command-line automation, featuring a 5-Phase Pipeline Architecture designed for accurate and reliable natural-language-to-command translation.
"Create a backup of my documents folder" → Intelligent backup with verification
"Find all Python files with syntax errors" → Multi-stage code analysis with reporting
"Set up a development environment for React" → Complete environment setup with validation
Revolutionary Features
5-Phase Pipeline Architecture
- Phase 1: Command Suggestion - AI analyzes intent and suggests approach
- Phase 2: Command Extraction - Precise command isolation and validation
- Phase 3: Command Execution - Safe terminal execution with monitoring
- Phase 4: Log Evaluation - Intelligent error analysis and retry logic
- Phase 5: Summary Generation - Comprehensive result reporting
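The five phases run in sequence, with Phase 4's retry logic able to loop back for another attempt. A minimal sketch of that control flow (function names here are illustrative, not the actual VEXIS internals):

```python
# Hypothetical sketch of the 5-phase control flow; not the real VEXIS engine.
def run_pipeline(task, suggest, extract, execute, evaluate, summarize, max_iterations=3):
    history = []
    for attempt in range(1, max_iterations + 1):
        plan = suggest(task, history)       # Phase 1: analyze intent, suggest approach
        command = extract(plan)             # Phase 2: isolate and validate the command
        log = execute(command)              # Phase 3: run it and capture output
        ok, diagnosis = evaluate(log)       # Phase 4: decide success or retry
        history.append((command, log, diagnosis))
        if ok:
            break
    return summarize(task, history)         # Phase 5: report what happened
```

Each phase is a pluggable callable, so the retry loop stays independent of any particular AI provider.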
Advanced Execution Engine
- Multi-iteration error recovery with self-correction
- Real-time progress tracking and status updates
- Intelligent fallback mechanisms across providers
- Comprehensive safety validation and rollback capabilities
Universal AI Provider Ecosystem
- 16+ AI providers with unified interface abstraction
- Automatic provider selection based on task requirements
- Seamless fallback and load balancing across providers
- Vision API support for image-based tasks
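Seamless fallback across providers boils down to trying each configured backend in order until one answers. A small sketch of the idea (provider names and the helper are hypothetical, not the VEXIS API):

```python
# Hypothetical sketch of provider fallback; names are illustrative only.
def ask_with_fallback(prompt, providers):
    """Try each (name, call) pair in order; return the first successful answer."""
    errors = []
    for name, call in providers:
        try:
            return name, call(prompt)
        except Exception as exc:  # a real system would catch narrower error types
            errors.append((name, exc))
    raise RuntimeError(f"all providers failed: {errors}")
```

A real implementation would add per-provider timeouts and load balancing on top of this ordering.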
Enterprise-Grade Architecture
- Zero-defect configuration management with validation
- Comprehensive logging with structured output
- Platform abstraction for cross-platform compatibility
- Advanced error handling with detailed diagnostics
Modern User Experience
- Rich terminal interface with syntax highlighting
- Interactive provider selection with performance metrics
- Real-time execution monitoring and feedback
- Comprehensive documentation and examples
AI Provider Ecosystem
Local & Privacy-First
Ollama - Complete local AI integration
Recommended 2026 models: llama-4-scout-17b, deepseek-r1, qwen2.5:7b
2026 Cloud Powerhouses
| Provider | 2026 Models | Speed | Specialty |
|---|---|---|---|
| Groq | Llama 3.3 70B, GPT-OSS 120B | ⚡⚡⚡⚡⚡ | Ultra-fast inference |
| Google | Gemini 3.1 Pro, Gemini 3 Flash | ⚡⚡⚡⚡ | Enterprise reliability |
| OpenAI | GPT-5.4, GPT-5.4-pro, GPT-5-mini | ⚡⚡⚡⚡ | Advanced reasoning |
| Anthropic | Claude Opus 4.6, Claude Sonnet 4.5 | ⚡⚡⚡⚡ | Analytical excellence |
| xAI | Grok 4.20, Grok 4.20-beta | ⚡⚡⚡⚡ | Real-time knowledge |
| Meta | Llama 4 Scout 17B | ⚡⚡⚡ | Open-source leadership |
| Mistral | Latest multilingual models | ⚡⚡⚡ | Global applications |
| Microsoft | GPT-5.4-pro via Azure | ⚡⚡⚡ | Enterprise integration |
| AWS | Claude Opus 4.6 via Bedrock | ⚡⚡⚡ | Scalable infrastructure |
| Cohere | Latest business models | ⚡⚡⚡ | Enterprise workflows |
| DeepSeek | DeepSeek R1, DeepSeek V4 | ⚡⚡⚡ | Technical reasoning |
| Together | Llama 4 hosting | ⚡⚡⚡ | Custom model deployment |
| MiniMax | Latest generation models | ⚡⚡⚡ | Creative tasks |
| Zhipu | GLM latest models | ⚡⚡⚡ | Chinese language |
2026 Recommendations: Groq (speed), Google Gemini 3.1 (reliability), OpenAI GPT-5.4 (capability), Ollama Llama 4 (privacy)
Installation
Zero-Configuration Quick Start
```shell
git clone https://github.com/vexis-project/VEXIS-CLI-2.0.git
cd VEXIS-CLI-2.0
python3 run.py "list files"  # Auto-setup and run!
```
System Requirements
- Python 3.8+ (wide compatibility)
- 4GB+ RAM for local models (8GB+ recommended for Llama 4)
- API keys for cloud providers
- Optional: Ollama for local AI (`curl -fsSL https://ollama.ai/install.sh | sh`)
- Tested on: Ubuntu and macOS
Note: Bugs may occur with certain models or providers. If you encounter issues, please try selecting a different model or provider. We will fix the issue as soon as the cause is identified.
First Run Experience
VEXIS-CLI-2.0 features an enhanced provider selection interface with real-time performance metrics.
Usage Examples
5-Phase Pipeline in Action
```shell
# Simple operations with intelligent validation
python3 run.py "create a comprehensive README for my project"
python3 run.py "find and organize files larger than 10MB by date"
python3 run.py "set up Python development environment with testing"

# Complex multi-step tasks
python3 run.py "analyze all Python files for security vulnerabilities"
python3 run.py "deploy this React application to production with monitoring"
python3 run.py "optimize system performance and generate detailed report"

# Advanced automation
python3 run.py "create automated backup system with encryption and verification"
python3 run.py "monitor system resources and alert on anomalies for 24 hours"
```
Advanced Configuration
```shell
# Use specific provider with 5-phase pipeline
python3 run.py "complex task" --provider groq --model llama-3.3-70b-versatile

# Enable debug mode with detailed phase logging
python3 run.py "debug task" --debug --phase-logging

# Skip interactive prompts (uses configured preferences)
python3 run.py "quick automation" --no-prompt --auto-confirm

# Vision-enabled tasks
python3 run.py "analyze this screenshot and suggest improvements" --image screenshot.png
```
Configuration
Advanced Configuration System
VEXIS-CLI-2.0 uses a hierarchical configuration system with validation:
```yaml
# API Configuration
api:
  preferred_provider: "groq"
  local_endpoint: "http://localhost:11434"
  local_model: "llama-4-scout-17b"
  timeout: 120
  max_retries: 3
  auto_fallback: true

# 5-Phase Engine Configuration
engine:
  command_timeout: 30
  task_timeout: 300
  max_iterations: 10
  enable_phase_logging: false
  auto_recovery: true

# User Preferences
user:
  name: "Your Name"
  preferred_style: "detailed"  # "concise", "detailed", "friendly"
  auto_confirm: false
  show_progress: true

# Logging Configuration
logging:
  level: "INFO"
  file: "vexis.log"
  json_format: false
  console: true
  max_file_size: 10485760  # 10MB
```
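A hierarchical configuration with validation typically merges user settings over defaults and rejects unknown keys. A minimal sketch of that pattern, using field names mirroring the config above (the loader itself is hypothetical, not the VEXIS implementation):

```python
# Hypothetical sketch: merge user settings over defaults, then validate.
DEFAULTS = {
    "api": {"preferred_provider": "groq", "timeout": 120, "max_retries": 3},
    "engine": {"command_timeout": 30, "task_timeout": 300, "max_iterations": 10},
}

def load_config(user_cfg):
    # Copy defaults so repeated loads never mutate the shared template.
    cfg = {section: dict(values) for section, values in DEFAULTS.items()}
    for section, values in user_cfg.items():
        if section not in cfg:
            raise KeyError(f"unknown config section: {section}")
        cfg[section].update(values)
    if cfg["api"]["timeout"] <= 0:
        raise ValueError("api.timeout must be positive")
    return cfg
```

Rejecting unknown sections up front is what turns a silent typo in the YAML into an immediate, diagnosable error.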
2026 Model Recommendations
- Local: `llama4-scout-17b` (balanced), `deepseek-r1` (reasoning), `qwen2.5:7b` (speed)
- Cloud: `gpt-5.4-pro` (professional), `gemini-3.1-pro` (enterprise), `claude-opus-4.6` (analytical)
Advanced Architecture
5-Phase Pipeline Engine
```mermaid
graph TB
    A[Natural Language Input] --> B[Phase 1: Command Suggestion]
    B --> C[AI Intent Analysis]
    B --> D[Strategy Generation]
    B --> E[Risk Assessment]
    C --> F[Phase 2: Command Extraction]
    D --> F
    E --> F
    F --> G[Command Isolation]
    F --> H[Validation & Sanitization]
    G --> I[Phase 3: Command Execution]
    H --> I
    I --> J[Safe Terminal Execution]
    I --> K[Real-time Monitoring]
    J --> L[Phase 4: Log Evaluation]
    K --> L
    L --> M[Error Analysis]
    L --> N[Retry Decision Logic]
    M --> O[Phase 5: Summary Generation]
    N --> O
    O --> P[Comprehensive Reporting]
    O --> Q[Success Verification]
    P --> R[Task Complete]
    Q --> R
```
Core Components
- FivePhaseEngine - Advanced 5-phase pipeline orchestration
- ModelRunner - Unified 16+ provider abstraction with fallback
- CommandParser - Enhanced NLP with context awareness
- TaskVerifier - Multi-layer validation and security
- TaskRobustnessManager - Advanced error recovery and retry logic
- TerminalHistory - Comprehensive execution tracking
Development & Contributing
Contributing to VEXIS-CLI-2.0
We welcome contributions to our advanced AI automation platform:
- Bug Reports: Use our detailed issue templates for precise reporting
- Feature Requests: Propose enhancements to the 5-phase pipeline
- Pull Requests: Follow our strict code quality standards
- Documentation: Help maintain our comprehensive docs
- Testing: Contribute to our extensive test coverage
Advanced Testing Suite
```shell
# Run comprehensive test suite
python3 -m pytest tests/ --cov=src

# Test 5-phase pipeline components
python3 test_five_phase_engine.py

# Test provider integrations
python3 test_cloud_models.py --all-providers

# System validation
python3 check_environment.py --full-validation
python3 system_check.py --performance-test
```
Development Tools
```shell
# Dependency management
python3 manage_sdks.py --install-all

# Model validation
python3 check_models.py --validate-2026-models

# Performance benchmarking
python3 test_improved_prompts.py --benchmark
```
Comprehensive Documentation
| Document | Focus | Link |
|---|---|---|
| Architecture Guide | 5-Phase pipeline deep dive | docs/ARCHITECTURE.md |
| Configuration Reference | Complete configuration options | docs/CONFIGURATION.md |
| API Reference | Provider integration guide | docs/API_REFERENCE.md |
| Deployment Guide | Production deployment | docs/DEPLOYMENT.md |
| Development Guide | Contributing and development | docs/DEVELOPMENT.md |
| Troubleshooting | Common issues and solutions | docs/TROUBLESHOOTING.md |
| Ollama Integration | Local AI setup and optimization | docs/OLLAMA_INTEGRATION.md |
| Error Handling | Advanced error management | docs/ERROR_HANDLING.md |
Community & Enterprise Support
Get Help
- Comprehensive Documentation
- Advanced Issue Tracker
- Community Discussions
- Enterprise Support
Show Your Support
- Star the repository - Help others discover VEXIS-CLI-2.0
- Fork and contribute - Build on our 5-phase architecture
- Share your use cases - Inspire the community with innovative applications
Experience the Future of Terminal Automation
VEXIS-CLI-2.0: Where advanced AI meets precise command execution
Get Started Now • Star on GitHub • Explore Documentation • Enterprise Support
Built with ❤️ by the VEXIS Project
Pushing the boundaries of AI-powered automation
Download files
Download the file for your platform.
Source Distribution
Built Distribution
File details
Details for the file vexis_cli-2.0.1.tar.gz.
File metadata
- Download URL: vexis_cli-2.0.1.tar.gz
- Upload date:
- Size: 103.1 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.14.3
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | 1e15baaaf50ed5edbc919e8baee6483cc2c4d05169912cee0a3ba03689fe7717 |
| MD5 | febba8553b78805506b093a0a789d71b |
| BLAKE2b-256 | 0c19f59b75635df9a47bdfea06556349e3a449b8bd47b89bd5d250fe17869223 |
File details
Details for the file vexis_cli-2.0.1-py3-none-any.whl.
File metadata
- Download URL: vexis_cli-2.0.1-py3-none-any.whl
- Upload date:
- Size: 116.4 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.14.3
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | e111c9804a0a31cbe3bea643bccddf70037aa8034082e364fe7ce9280f5723c1 |
| MD5 | 4a8f4aa8fcdc1bb9e43395ae44100ce6 |
| BLAKE2b-256 | beeeb9902d6ef56703bba7f6b2454a66c586566049f59fd125f0b329966868c5 |