
An Artificial Intelligence Automation Platform that manages AI instructions across multiple providers, features adaptive memory, and offers a versatile plugin system with many commands, including web browsing. Supports many AI providers and models, with support growing every day.


AGiXT


AGiXT is a dynamic Artificial Intelligence Automation Platform engineered to orchestrate efficient AI instruction management and task execution across a multitude of providers. Our solution infuses adaptive memory handling with a broad spectrum of commands to enhance AI's understanding and responsiveness, leading to improved task completion. The platform's smart features, like Smart Instruct and Smart Chat, seamlessly integrate web search, planning strategies, and conversation continuity, transforming the interaction between users and AI. By leveraging a powerful plugin system that includes web browsing and command execution, AGiXT stands as a versatile bridge between AI models and users. With an expanding roster of AI providers, code evaluation capabilities, comprehensive chain management, and platform interoperability, AGiXT is consistently evolving to drive a multitude of applications, affirming its place at the forefront of AI technology.

Embracing the spirit of extremity in every facet of life, we introduce AGiXT. This advanced AI Automation Platform is our bold step towards the realization of Artificial General Intelligence (AGI). Seamlessly orchestrating instruction management and executing complex tasks across diverse AI providers, AGiXT combines adaptive memory, smart features, and a versatile plugin system to maximize AI potential. With our unwavering commitment to innovation, we're dedicated to pushing the boundaries of AI and bringing AGI closer to reality.

⚠️ Disclaimers

Monitor Your Usage

Please note that using some AI providers (such as OpenAI's GPT-4 API) can be expensive! Monitor your usage carefully to avoid incurring unexpected costs. We're NOT responsible for your usage under any circumstances.

Key Features 🗝️

  • Context and Token Management: Adaptive handling of long-term and short-term memory for optimized AI performance, allowing the software to process information more efficiently and accurately.
  • Smart Instruct: An advanced feature enabling AI to comprehend, plan, and execute tasks effectively. The system leverages web search, planning strategies, and executes instructions while ensuring output accuracy.
  • Interactive Chat & Smart Chat: User-friendly chat interface for dynamic conversational tasks. The Smart Chat feature integrates AI with web research to deliver accurate and contextually relevant responses.
  • Task Execution & Smart Task Management: Efficient management and execution of complex tasks broken down into sub-tasks. The Smart Task feature employs AI-driven agents to dynamically handle tasks, optimizing efficiency and avoiding redundancy.
  • Chain Management: Sophisticated handling of chains or a series of linked commands, enabling the automation of complex workflows and processes.
  • Web Browsing & Command Execution: Advanced capabilities to browse the web and execute commands for a more interactive AI experience, opening a wide range of possibilities for AI assistance.
  • Multi-Provider Compatibility: Seamless integration with leading AI providers such as OpenAI (as well as any that use OpenAI style endpoints), ezLocalai, Hugging Face, GPT4Free, Google Gemini, and more.
  • Versatile Plugin System & Code Evaluation: Extensible command support for various AI models along with robust support for code evaluation, providing assistance in programming tasks.
  • Docker Deployment: Simplified setup and maintenance through Docker deployment.
  • Audio-to-Text & Text-to-Speech Options: Integration with Hugging Face for seamless audio-to-text transcription, and multiple TTS choices, featuring Brian TTS, Mac OS TTS, and ElevenLabs.
  • Platform Interoperability & AI Agent Management: Streamlined creation, renaming, deletion, and updating of AI agent settings along with easy interaction with popular platforms like Twitter, GitHub, Google, DALL-E, and more.
  • Custom Prompts & Command Control: Granular control over agent abilities through enabling or disabling specific commands, and easy creation, editing, and deletion of custom prompts to standardize user inputs.
  • RESTful API: FastAPI-powered RESTful API for seamless integration with external applications and services.
  • Expanding AI Support: Continually updated to include new AI providers and services, ensuring the software stays at the forefront of AI technology.
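The chain concept above, a series of linked commands where each step feeds the next, can be illustrated with a toy sketch. This is illustrative only and does not reflect AGiXT's actual chain schema; the steps here are arbitrary string transforms.

```python
# Illustrative sketch only: a "chain" modeled as an ordered list of callables,
# where each step consumes the previous step's output. Not AGiXT's real schema.
def run_chain(steps, user_input):
    data = user_input
    for step in steps:
        data = step(data)  # each step is a callable taking the prior output
    return data

# A toy chain: strip whitespace, lowercase, then slugify.
chain = [str.strip, str.lower, lambda s: s.replace(" ", "-")]
print(run_chain(chain, "  Hello World  "))  # hello-world
```

Real chains add dependency handling and per-step agent settings (see Execute Chain in the Workflow diagram below), but the core idea is this ordered hand-off of outputs.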

AGiXT's features cover a wide range of services and are used for many different tasks. Refer to Processes and Frameworks for more details about these services and the framework.
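As a minimal sketch of talking to the FastAPI-powered REST API mentioned above, the snippet below builds (but does not send) an authenticated request using only the standard library. The endpoint path `/api/agent` and the Bearer header shape are assumptions for illustration; consult the AGiXT API documentation for the actual routes.

```python
# Sketch only: the "/api/agent" path and Bearer auth scheme are assumptions,
# not confirmed AGiXT routes. Standard library only.
from urllib.request import Request

AGIXT_URI = "http://localhost:7437"  # default URI from the Quick Start section

def build_list_agents_request(api_key, uri=AGIXT_URI):
    # Prepare (but do not send) an authenticated GET request.
    return Request(
        f"{uri}/api/agent",
        headers={"Authorization": f"Bearer {api_key}"},
    )

req = build_list_agents_request("your-agixt-api-key")
print(req.full_url)  # http://localhost:7437/api/agent
```

In practice you would send the prepared request with `urllib.request.urlopen` (or use the official AGiXT SDKs listed under Other Repositories) against a running AGiXT instance.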

Quick Start Guide

Operating System Prerequisites

Install the following prerequisites based on the operating system you use.

Windows and Mac Prerequisites

Linux Prerequisites

Installation

If you're using Linux, you may need to prefix the python command with sudo depending on your system configuration.

git clone https://github.com/Josh-XT/AGiXT
cd AGiXT
python start.py

The script will check for Docker and Docker Compose installation:

  • On Linux, it will attempt to install them if missing (requires sudo privileges).
  • On macOS and Windows, it will provide instructions to download and install Docker Desktop.

Usage

Run the script with Python:

python start.py

To run AGiXT with ezLocalai, use the --with-ezlocalai flag:

python start.py --with-ezlocalai true

You can also use command-line arguments to set specific environment variables to run in different ways. For example, to use the development branch and enable auto-updates, run:

python start.py --agixt-branch dev --agixt-auto-update true --with-ezlocalai true

Command-line Options

The script supports setting any of the environment variables via command-line arguments. Here's a detailed list of available options:

  1. --agixt-api-key: Set the AGiXT API key (automatically generated if not provided)
  2. --agixt-uri: Set the AGiXT URI (default: http://localhost:7437)
  3. --agixt-agent: Set the default AGiXT agent (default: AGiXT)
  4. --agixt-branch: Choose between stable and dev branches
  5. --agixt-file-upload-enabled: Enable or disable file uploads (default: true)
  6. --agixt-voice-input-enabled: Enable or disable voice input (default: true)
  7. --agixt-footer-message: Set the footer message (default: Powered by AGiXT)
  8. --agixt-require-api-key: Require API key for access (default: false)
  9. --agixt-rlhf: Enable or disable reinforcement learning from human feedback (default: true)
  10. --agixt-show-selection: Set which selectors to show in the UI (default: conversation,agent)
  11. --agixt-show-agent-bar: Show or hide the agent bar in the UI (default: true)
  12. --agixt-show-app-bar: Show or hide the app bar in the UI (default: true)
  13. --agixt-conversation-mode: Set the conversation mode (default: select)
  14. --allowed-domains: Set allowed domains for API access (default: *)
  15. --app-description: Set the application description
  16. --app-name: Set the application name (default: AGiXT Chat)
  17. --app-uri: Set the application URI (default: http://localhost:3437)
  18. --streamlit-app-uri: Set the Streamlit app URI (default: http://localhost:8501)
  19. --auth-web: Set the authentication web URI (default: http://localhost:3437/user)
  20. --auth-provider: Set the authentication provider (options: none, magicalauth)
  21. --create-agent-on-register: On user registration, create an agent named after your AGIXT_AGENT environment variable (if it differs from AGiXT), using settings from default_agent.json if defined (default: true)
  22. --create-agixt-agent: Create an agent called AGiXT and train it on the AGiXT documentation upon user registration (default: true)
  23. --disabled-providers: Set disabled providers (comma-separated list)
  24. --disabled-extensions: Set disabled extensions (comma-separated list)
  25. --working-directory: Set the working directory (default: ./WORKSPACE)
  26. --github-client-id: Set GitHub client ID for authentication
  27. --github-client-secret: Set GitHub client secret for authentication
  28. --google-client-id: Set Google client ID for authentication
  29. --google-client-secret: Set Google client secret for authentication
  30. --microsoft-client-id: Set Microsoft client ID for authentication
  31. --microsoft-client-secret: Set Microsoft client secret for authentication
  32. --tz: Set the timezone (default: system timezone)
  33. --interactive-mode: Set the interactive mode (default: chat)
  34. --theme-name: Set the UI theme (options: default, christmas, conspiracy, doom, easter, halloween, valentines)
  35. --allow-email-sign-in: Allow email sign-in (default: true)
  36. --database-type: Set the database type (options: sqlite, postgres)
  37. --database-name: Set the database name (default: models/agixt)
  38. --log-level: Set the logging level (default: INFO)
  39. --log-format: Set the log format (default: %(asctime)s | %(levelname)s | %(message)s)
  40. --uvicorn-workers: Set the number of Uvicorn workers (default: 10)
  41. --agixt-auto-update: Enable or disable auto-updates (default: true)
  42. --with-streamlit: Enable or disable the Streamlit UI (default: true)

Options specific to ezLocalai:

  1. --with-ezlocalai: Start AGiXT with ezLocalai integration.
  2. --ezlocalai-uri: Set the ezLocalai URI (default: http://{local_ip}:8091)
  3. --default-model: Set the default language model for ezLocalai (default: QuantFactory/dolphin-2.9.2-qwen2-7b-GGUF)
  4. --vision-model: Set the vision model for ezLocalai (default: deepseek-ai/deepseek-vl-1.3b-chat)
  5. --llm-max-tokens: Set the maximum number of tokens for language models (default: 32768)
  6. --whisper-model: Set the Whisper model for speech recognition (default: base.en)
  7. --gpu-layers: Set the number of GPU layers to use (automatically determined based on available VRAM, but can be modified; default: -1 for all layers)

For a full list of options with their current values, run:

python start.py --help
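Since the script "supports setting any of the environment variables via command-line arguments," the flag names follow the common argparse convention of mapping each --kebab-case flag to an UPPER_SNAKE_CASE environment variable. The sketch below illustrates that convention; it is an assumption about the mapping, not AGiXT's actual start.py implementation.

```python
# Hypothetical sketch of how command-line flags could map to environment
# variables (e.g. --agixt-auto-update -> AGIXT_AUTO_UPDATE). Illustration of
# the convention only, not the real start.py.
import argparse

def flags_to_env(argv):
    parser = argparse.ArgumentParser()
    parser.add_argument("--agixt-branch", default="stable")
    parser.add_argument("--agixt-auto-update", default="true")
    args = parser.parse_args(argv)
    # argparse stores --agixt-branch as args.agixt_branch; upper-casing the
    # attribute name yields the matching environment variable name.
    return {name.upper(): value for name, value in vars(args).items()}

print(flags_to_env(["--agixt-branch", "dev"]))
# {'AGIXT_BRANCH': 'dev', 'AGIXT_AUTO_UPDATE': 'true'}
```

Under this convention, `python start.py --agixt-branch dev` and `AGIXT_BRANCH=dev python start.py` would configure the same setting.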

Docker Deployment

After setting up the environment variables and ensuring Docker and Docker Compose are installed, the script will:

  1. Stop any running AGiXT Docker containers
  2. Pull the latest Docker images (if auto-update is enabled)
  3. Start the AGiXT services using Docker Compose

Troubleshooting

  • If the script fails to run on Linux, run it with sudo.
  • If you encounter any issues with Docker installation:
    • On Linux, ensure you have sudo privileges and that your system is up to date.
    • On macOS and Windows, follow the instructions to install Docker Desktop manually if the script cannot install it automatically.
  • Check the Docker logs for any error messages if the containers fail to start.
  • Verify that all required ports are available and not in use by other services.
  • If the python command is not recognized, try using python3 instead.

Security Considerations

  • The AGIXT_API_KEY is automatically generated if not provided. Keep this key secure and do not share it publicly.
  • When using authentication providers (GitHub, Google, Microsoft), ensure that the client IDs and secrets are kept confidential.
  • Be cautious when enabling file uploads and voice input, as these features may introduce potential security risks if not properly managed.
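One simple way to follow the first point above: read the key from the environment at runtime rather than hardcoding it. AGIXT_API_KEY is the variable name used by the start script per the options list; the helper function itself is just an illustrative sketch.

```python
# Read the AGiXT API key from the environment so it never lands in source
# control; fail early and loudly if it is missing.
import os

def get_api_key():
    key = os.environ.get("AGIXT_API_KEY")
    if not key:
        raise RuntimeError("AGIXT_API_KEY is not set")
    return key
```

The same pattern applies to the GitHub, Google, and Microsoft client secrets mentioned above.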

Configuration

Each AGiXT Agent has its own settings for interfacing with AI providers, and other configuration options. These settings can be set and modified through the web interface.

Documentation

Need more information? Check out the documentation for more details to get a better understanding of the concepts and features of AGiXT.

Other Repositories

Check out the other AGiXT repositories at https://github.com/orgs/AGiXT/repositories - these include the AGiXT Streamlit Web UI, AGiXT Python SDK, AGiXT TypeScript SDK, AGiXT Dart SDK, AGiXT C# SDK, and more!

History

Star History Chart

Workflow

graph TD
    Start[Start] --> IA[Initialize Agent]
    IA --> IM[Initialize Memories]
    IM --> A[User Input]
    A --> B[Multi-modal Input Handler]
    B --> B1{Input Type?}
    B1 -->|Text| C[Process Text Input]
    B1 -->|Voice| STT[Speech-to-Text Conversion]
    B1 -->|Image| VIS[Vision Processing]
    B1 -->|File Upload| F[Handle file uploads]
    STT --> C
    VIS --> C
    F --> C
    C --> S[Log user input]
    C --> T[Log agent activities]
    C --> E[Override Agent settings if applicable]
    E --> G[Handle URLs and Websearch if applicable]
    G --> H[Data Analysis if applicable]
    H --> K{Agent Mode?}
    K -->|Command| EC[Execute Command]
    K -->|Chain| EX[Execute Chain]
    K -->|Prompt| RI[Run Inference]
    
    EC --> O[Prepare response]
    EX --> O
    RI --> O
    
    O --> Q[Format response]
    Q --> R[Text Response]
    R --> P[Calculate tokens]
    P --> U[Log final response]
    Q --> TTS[Text-to-Speech Conversion]
    TTS --> VAudio[Voice Audio Response]
    Q --> IMG_GEN[Image Generation]
    IMG_GEN --> GImg[Generated Image]
    
    subgraph HF[Handle File Uploads]
        F1[Download files to workspace]
        F2[Learn from files]
        F3[Update Memories]
        F1 --> F2 --> F3
    end
    
    subgraph HU[Handle URLs in User Input]
        G1[Learn from websites]
        G2[Handle GitHub Repositories if applicable]
        G3[Update Memories]
        G1 --> G2 --> G3
    end
    
    subgraph AC[Data Analysis]
        H1[Identify CSV content in agent workspace or user input]
        H2[Determine files or content to analyze]
        H3[Generate and verify Python code for analysis]
        H4[Execute Python code]
        H5{Execution successful?}
        H6[Update memories with results from data analysis]
        H7[Attempt code fix]
        H1 --> H2 --> H3 --> H4 --> H5
        H5 -->|Yes| H6
        H5 -->|No| H7
        H7 --> H4
    end
    
    subgraph IA[Agent Initialization]
        I1[Load agent config]
        I2[Initialize providers]
        I3[Load available commands]
        I4[Initialize Conversation]
        I5[Initialize agent workspace]
        I1 --> I2 --> I3 --> I4 --> I5
    end
    
    subgraph IM[Initialize Memories]
        J1[Initialize vector database]
        J2[Initialize embedding provider]
        J3[Initialize relevant memory collections]
        J1 --> J2 --> J3
    end
    
    subgraph EC[Execute Command]
        L1[Inject user settings]
        L2[Inject agent extensions settings]
        L3[Run command]
        L1 --> L2 --> L3
    end
    
    subgraph EX[Execute Chain]
        M1[Load chain data]
        M2[Inject user settings]
        M3[Inject agent extension settings]
        M4[Execute chain steps]
        M5[Handle dependencies]
        M6[Update chain responses]
        M1 --> M2 --> M3 --> M4 --> M5 --> M6
    end
    
    subgraph RI[Run Inference]
        N1[Get prompt template]
        N2[Format prompt]
        N3[Inject relevant memories]
        N4[Inject conversation history]
        N5[Inject recent activities]
        N6[Call inference method to LLM provider]
        N1 --> N2 --> N3 --> N4 --> N5 --> N6
    end

    subgraph WS[Websearch]
        W1[Initiate web search]
        W2[Perform search query]
        W3[Scrape websites]
        W4[Recursive browsing]
        W5[Summarize content]
        W6[Update agent memories]
        W1 --> W2 --> W3 --> W4 --> W5 --> W6
    end

    subgraph PR[Providers]
        P1[LLM Provider]
        P2[TTS Provider]
        P3[STT Provider]
        P4[Vision Provider]
        P5[Image Generation Provider]
        P6[Embedding Provider]
    end

    subgraph CL[Conversation Logging]
        S[Log user input]
        T[Log agent activities]
    end

    F --> HF
    G --> HU
    G --> WS
    H --> AC
    TTS --> P2
    STT --> P3
    VIS --> P4
    IMG_GEN --> P5
    J2 --> P6
    N6 --> P1

    F --> T
    G --> T
    H --> T
    L3 --> T
    M4 --> T
    N6 --> T

    style U fill:#0000FF,stroke:#333,stroke-width:4px


Download files

Source Distribution

agixt-1.6.18.tar.gz (84.0 MB)

Built Distribution

agixt-1.6.18-py3-none-any.whl (94.0 kB)
