
AI command-line agent for terminal automation and task execution

Project description

VEXIS-CLI-1

AI-Powered Command Line Interface for Intelligent Automation

VEXIS-CLI-1 is an AI agent, derived from VEXIS-1.1, that performs tasks by executing terminal commands. It uses large language models to interpret natural-language instructions and carry them out as terminal operations, enabling automated workflow management and system administration.

Quick Start | Documentation | Models | Configuration | Contributing

Key Features

AI-Powered Intelligence

  • Natural Language Processing: Execute commands using plain English descriptions
  • Context-Aware Execution: Understands your workflow and adapts to your needs
  • Multi-Model Support: Compatible with 150+ AI models from 20 major providers
  • Smart Verification: Automatic task completion validation with confidence scoring
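
Verification is confidence-scored: after a task runs, the agent checks whether it actually completed and how sure it is of that judgment. The sketch below is a hypothetical illustration of the idea; the VerificationResult type and the 0.8 threshold are assumptions, not VEXIS-CLI-1's actual API.

# Hypothetical sketch of confidence-scored verification; not the real
# VEXIS-CLI-1 API. A task is retried unless it passes with high confidence.
from dataclasses import dataclass

@dataclass
class VerificationResult:
    passed: bool       # did the task appear to complete?
    confidence: float  # score in [0.0, 1.0] reported by the verifier
    reason: str        # short explanation, useful for logging

def should_retry(result: VerificationResult, threshold: float = 0.8) -> bool:
    return not (result.passed and result.confidence >= threshold)

# A low-confidence pass still triggers a retry:
print(should_retry(VerificationResult(True, 0.6, "output file exists but is empty")))  # True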

Advanced Automation

  • Two-Phase Engine: Planning and execution phases for reliable task completion (see the sketch after this list)
  • Cross-Platform Compatibility: Works seamlessly on macOS, Linux, and Windows
  • GUI Automation: Integrate terminal commands with graphical interface interactions
  • Screenshot Integration: Visual context capture for enhanced understanding
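
A two-phase engine first asks the model for a plan, then executes that plan step by step. The sketch below shows the pattern in minimal form; ask_model, the prompt shape, and the stop-on-failure rule are illustrative assumptions, not the actual VEXIS-CLI-1 engine.

# Minimal plan-then-execute sketch; illustrative only.
import subprocess

def ask_model(prompt: str) -> str:
    raise NotImplementedError("wire this to your LLM backend")

def run_task(goal: str) -> None:
    # Phase 1 (planning): ask the model for an ordered list of shell steps.
    plan = ask_model(f"Break this goal into shell commands, one per line:\n{goal}")
    steps = [line.strip() for line in plan.splitlines() if line.strip()]

    # Phase 2 (execution): run each step, stopping at the first failure.
    for step in steps:
        result = subprocess.run(step, shell=True, capture_output=True, text=True)
        if result.returncode != 0:
            print(f"Step failed: {step}\n{result.stderr}")
            break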

Developer Experience

  • Rich Terminal UI: Beautiful, informative output with progress indicators
  • Flexible Configuration: YAML-based settings with environment variable overrides (see the sketch after this list)
  • Extensible Architecture: Plugin-ready design for custom integrations
  • Comprehensive Logging: Structured logging for debugging and monitoring
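
As a sketch of the YAML-plus-environment pattern (the config.yaml keys and the VEXIS_ prefix below are assumptions; check the project documentation for the real schema):

# Illustrative config loader: YAML settings, overridable per key via
# environment variables (e.g. VEXIS_MODEL=mistral:7b overrides "model").
# The key names and prefix are assumptions. Requires: pip install pyyaml
import os
import yaml

def load_config(path: str = "config.yaml") -> dict:
    with open(path) as f:
        config = yaml.safe_load(f) or {}
    for key in list(config):
        override = os.environ.get(f"VEXIS_{key.upper()}")
        if override is not None:
            config[key] = override
    return config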

Quick Start

Prerequisites

  • Python 3.8 or higher
  • Ollama installed and running (for local AI models)
  • Git (for cloning the repository)
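
You can sanity-check the first two prerequisites with a short script like the one below (11434 is Ollama's default local port; a failed Ollama check is fine if you only plan to use cloud models):

# Quick prerequisite check: Python version and a reachable Ollama server.
import sys
import urllib.request

assert sys.version_info >= (3, 8), "VEXIS-CLI-1 requires Python 3.8+"

try:
    urllib.request.urlopen("http://127.0.0.1:11434", timeout=2)  # default Ollama port
    print("Ollama server is reachable")
except OSError:
    print("Ollama is not reachable; local models will be unavailable")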

Installation

  1. Clone the repository

    git clone https://github.com/AInohogosya-team/VEXIS-CLI-1.git
    cd VEXIS-CLI-1
    
  2. Install dependencies

    pip install -r requirements.txt
    
  3. Set up Ollama (optional, for local models)

    # Install Ollama
    curl -fsSL https://ollama.com/install.sh | sh
    
    # Pull a recommended model
    ollama pull llama3.2:latest
    
  4. Run VEXIS-CLI

    python run.py
    

Your First Command

# Start the interactive interface
python run.py

# Or use direct commands
vexis-cli "List all Python files in the current directory"

Supported AI Models

VEXIS-CLI-1 supports 150+ models from 20 major providers through Ollama:

Core Providers

  • Meta: Llama 3.1/3.2 series (8B, 70B, 1B, 3B variants)
  • Google: Gemma 2/3 series (1B-27B parameters, multimodal capabilities)
  • DeepSeek: R1/V3/Coder series (8B-671B, reasoning and coding specialists)
  • Microsoft: Phi-3/4 series (3.8B-14B, efficient small models)
  • Mistral: Mistral/Large/Ministral series (7B-675B, European open-source leader)

Advanced Providers

  • Alibaba (Qwen): Qwen 2.5/3 series (0.5B-235B, multilingual with 128K+ context)
  • IBM: Granite/Code series (350M-34B, enterprise-grade models)
  • BigCode: StarCoder 2 series (3B-15B, specialized for code generation)
  • Cohere: Command R series (7B-35B, retrieval-augmented generation)
  • 01.AI: Yi series (1.5B-34B, bilingual models)

Specialized Models

  • Vision-Language: LLaVA, Moondream, Qwen3-VL (7B-235B)
  • Coding: DeepSeek Coder, Qwen Coder, Granite Code, StarCoder 2
  • Agentic: Hermes 3, Reflection, Devstral Small 2 (3B-405B)
  • Cloud-Only: GPT-OSS, Gemini 3, GLM-5, MiniMax, Kimi (20B-744B)

Cloud & Local Models

  • Local Models: Run entirely on your machine with Ollama
  • Cloud Models: Access high-performance models via API
  • Hybrid Mode: Seamlessly switch between local and cloud models
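
A local-first, cloud-fallback policy can be sketched with the ollama Python client (pip install ollama). The model tags come from the lists below; the fallback rule itself is an illustrative assumption, not VEXIS-CLI-1's actual switching logic.

# Illustrative hybrid-mode sketch: try a local model, fall back to a
# cloud model if the local one is unavailable. Not the real VEXIS-CLI-1 logic.
import ollama

LOCAL_MODEL = "llama3.2:latest"     # runs on your machine
CLOUD_MODEL = "gpt-oss:120b-cloud"  # served remotely via Ollama

def chat(prompt: str) -> str:
    messages = [{"role": "user", "content": prompt}]
    try:
        response = ollama.chat(model=LOCAL_MODEL, messages=messages)
    except Exception:
        response = ollama.chat(model=CLOUD_MODEL, messages=messages)
    return response["message"]["content"]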

Complete Model List

Popular Local Models:

  • llama3.2:latest (3B) - Balanced performance with 128K context
  • qwen2.5:7b - Multilingual capabilities with 128K context
  • mistral:7b - Fast and efficient with 32K context
  • deepseek-r1:8b - Advanced reasoning with 128K context
  • gemma2:9b - High-performing with 8K context
  • phi3:mini - Efficient small model with 4K context

High-Performance Cloud Models:

  • deepseek-v3:671b - State-of-the-art MoE with 160K context
  • qwen3:235b - Advanced MoE with 256K context
  • mistral-large-3:675b-cloud - Multimodal enterprise model
  • gpt-oss:120b-cloud - Frontier performance
  • gemini-3-flash-preview:cloud - Built for speed

Usage Examples

For detailed usage examples, see our Detailed Guide.

License

This project is licensed under the MIT License - see the LICENSE file for details.


Made with ❤️ by the AInohogosya-team

Download files

Download the file for your platform.

Source Distribution

vexis_cli-1.0.7.tar.gz (101.8 kB)

Built Distribution

vexis_cli-1.0.7-py3-none-any.whl (121.0 kB)

File details

Details for the file vexis_cli-1.0.7.tar.gz.

File metadata

  • Download URL: vexis_cli-1.0.7.tar.gz
  • Upload date:
  • Size: 101.8 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.9.6

File hashes

Hashes for vexis_cli-1.0.7.tar.gz:

  • SHA256: e84df59cca7361bf7f407d68ffda11b13d64c98f5611d39290103b9b94efb14e
  • MD5: 66bc38523ed1a97f535507b1d83c4ce6
  • BLAKE2b-256: 3c2c3c6e007dc3c6479d61298d6b049f56319428b3ae97085fe019bdc9080215

File details

Details for the file vexis_cli-1.0.7-py3-none-any.whl.

File metadata

  • Download URL: vexis_cli-1.0.7-py3-none-any.whl
  • Upload date:
  • Size: 121.0 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.9.6

File hashes

Hashes for vexis_cli-1.0.7-py3-none-any.whl:

  • SHA256: 5453e885fae1e865ac3823980a313896d5bb86eeccd42c288bb9b5578eeb164b
  • MD5: a40316644a40b5c1a96802af9f55b66b
  • BLAKE2b-256: c573a4c58c83086dcbf5eca0d783da9ca7d27a6576e7907f888ca509e73ae8e7
