
Agent-LLM (Large Language Model)


Please use the outreach email for media, sponsorship, or to contact us for other miscellaneous purposes.

Do not email us with troubleshooting requests, feature requests, or bug reports; please direct those to GitHub Issues or Discord.

Agent-LLM is an Artificial Intelligence Automation Platform designed to power efficient AI instruction management across multiple providers. Our agents are equipped with adaptive memory, and this versatile solution offers a powerful plugin system that supports a wide range of commands, including web browsing. With growing support for numerous AI providers and models, Agent-LLM is constantly evolving to empower diverse applications.


⚠️ Run this in Docker or a Virtual Machine!

You're welcome to disregard this message, but if you do and the AI decides that the best course of action for its task is to build a command that formats your entire computer, that is on you. Understand that the agent is given full, unrestricted terminal access by design, and that we have no intention of building any safeguards. This project intends to stay lightweight and versatile for the best possible research outcomes.

See also [SECURITY.md](1-Getting started/SECURITY.MD)

⚠️ Monitor Your Usage!

Please note that using some AI providers (such as OpenAI's GPT-4 API) can be expensive! Monitor your usage carefully to avoid incurring unexpected costs. We're NOT responsible for your usage under any circumstance.

⚠️ Under Development!

This project is under active development and may still have issues. We appreciate your understanding and patience. If you encounter any problems, please first check the open issues. If your issue is not listed, kindly create a new issue detailing the error or problem you experienced. Thank you for your support!

⚠️ Necessities For Use

Agent-LLM brings you great power, but you will need the necessary knowledge and hardware to use it. You cannot simply dive in head first and expect success; research into and understanding of the technologies involved is required.

Knowledge Required

You will need at minimum intermediate level knowledge in the following areas:

  • Docker
  • Python
  • Large Language Models

Unfortunately, we cannot support Docker issues or issues running local models. There is a learning curve to this technology, and we are focused on development, not support.

We cannot teach you how to use Docker or Python; refer to their documentation or ask an AI to help you. Please do not open issues for a lack of knowledge in these areas; they will be closed with a request to refer to the documentation.

Hardware Required

Good luck getting a straight answer! Due to the nature of Agent-LLM, you can run it on anything from a mobile phone to an enterprise-grade AI server. If you're running your agents with OpenAI as the provider, you can run it on just about anything with an API key, enough storage, and an internet connection.

The hardware you need will depend on the AI models you want to run and the number of agents you want to run at the same time. We recommend starting with a single agent and a single AI model and then scaling up from there.

Please do not open issues for lack of hardware. This includes errors from hitting token limits on local models, running out of memory, and issues directly related to any local providers such as Oobabooga, llama.cpp, etc. The providers have been tested and confirmed working; if they are not working on your hardware, the problem is most likely your hardware.

Operating Systems

The development environment used when building Agent-LLM is Ubuntu 22.04. As far as we're aware, it should run on any Linux-based OS, macOS, and Windows, as long as the hardware requirements are met.

We cannot support Windows-related issues. Windows actively works against developers with firewalls and similar obstacles, which is why we do not use it for development (or anything else). There are people in our Discord server actively using Agent-LLM on Windows, macOS, and Linux.

If you have issues with Windows, please ask in Discord, but do not tag the developers; we don't use it.


Key Features 🗝️

  • Adaptive Memory Management: Efficient long-term and short-term memory handling for improved AI performance.
  • Versatile Plugin System: Extensible command support for various AI models, ensuring flexibility and adaptability.
  • Multi-Provider Compatibility: Seamless integration with leading AI providers, including OpenAI GPT series, Hugging Face Huggingchat, GPT4All, GPT4Free, Oobabooga Text Generation Web UI, Kobold, llama.cpp, FastChat, Google Bard, Bing, and more. Run any model with Agent-LLM!
  • Web Browsing & Command Execution: Advanced capabilities to browse the web and execute commands for a more interactive AI experience.
  • Code Evaluation: Robust support for code evaluation, providing assistance in programming tasks.
  • Docker Deployment: Effortless deployment using Docker, simplifying setup and maintenance.
  • Audio-to-Text Conversion: Integration with Hugging Face for seamless audio-to-text transcription.
  • Platform Interoperability: Easy interaction with popular platforms like Twitter, GitHub, Google, DALL-E, and more.
  • Text-to-Speech Options: Multiple TTS choices, featuring Brian TTS, Mac OS TTS, and ElevenLabs.
  • Expanding AI Support: Continuously updated to include new AI providers and services.
  • AI Agent Management: Streamlined creation, renaming, deletion, and updating of AI agent settings.
  • Flexible Chat Interface: User-friendly chat interface for conversational and instruction-based tasks.
  • Task Execution: Efficient starting, stopping, and monitoring of AI agent tasks with asynchronous execution.
  • Chain Management: Sophisticated management of multi-agent task chains for complex workflows and collaboration.
  • Custom Prompts: Easy creation, editing, and deletion of custom prompts to standardize user inputs.
  • Command Control: Granular control over agent abilities through enabling or disabling specific commands.
  • RESTful API: FastAPI-powered RESTful API for seamless integration with external applications and services.

Web Application Features

The frontend web application of Agent-LLM provides an intuitive and interactive user interface for users to:

  • Manage agents: View the list of available agents, add new agents, delete agents, and switch between agents.
  • Set objectives: Input objectives for the selected agent to accomplish.
  • Start tasks: Initiate the task manager to execute tasks based on the set objective.
  • Instruct agents: Interact with agents by sending instructions and receiving responses in a chat-like interface.
  • Available commands: View the list of available commands and click on a command to insert it into the objective or instruction input boxes.
  • Dark mode: Toggle between light and dark themes for the frontend.
  • Built using NextJS and Material-UI
  • Communicates with the backend through API endpoints

Run with Docker

Clone the repositories for the Agent-LLM front/back ends then start the services with Docker.

Linux or Windows

git clone https://github.com/Josh-XT/Agent-LLM
cd Agent-LLM

Choose a service you want to run using profiles, e.g.:

docker compose --profile streamlit up

Or run all available services:

docker compose --profile all up

Windows Docker Desktop (streamlit-only example)


Development using Docker

docker compose --profile all -f docker-compose.yml -f docker-compose.dev.yaml up
  • Mounts your development workspace into the container, so changes are reflected live. Happy building!

Manual Install from source (unsupported)

As a reminder, this can be dangerous to run locally depending on what commands you give your agents access to. ⚠️ Run this in Docker or a Virtual Machine!

Back End

Clone the repository for the Agent-LLM back end and start it.

git clone https://github.com/Josh-XT/Agent-LLM
cd Agent-LLM
pip install -r requirements.txt
python app.py

Front End

Clone the repository for the Agent-LLM front end in a separate terminal and start it.

git clone https://github.com/JamesonRGrieve/Agent-LLM-Frontend --recurse-submodules 
cd Agent-LLM-Frontend
yarn install
yarn dev

Access the web interface at http://localhost:3000

Configuration

Agent-LLM utilizes a .env configuration file to store AI language model settings, API keys, and other options. Use the supplied .env.example as a template to create your personalized .env file. Configuration settings include:

  • WORKING_DIRECTORY: Set the agent's working directory.
  • EXTENSIONS_SETTINGS: Configure settings for OpenAI, Hugging Face, Selenium, Twitter, and GitHub.
  • VOICE_OPTIONS: Choose between Brian TTS, Mac OS TTS, or ElevenLabs for text-to-speech.

For a detailed explanation of each setting, refer to the .env.example file provided in the repository.
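As a hedged illustration only (WORKING_DIRECTORY is a documented setting; the other key names here are placeholders, so check .env.example for the real ones), a minimal .env might look like:

```ini
# Where agents read and write files (documented setting)
WORKING_DIRECTORY=./WORKSPACE

# Placeholder names -- the actual provider/API key settings are listed in .env.example
OPENAI_API_KEY=your-api-key-here
```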

API Endpoints

Agent-LLM provides several API endpoints for managing agents, prompts and chains.

To learn more about the API endpoints and their usage, visit the API documentation. It is hosted locally, and the frontend must be running for the links to work.
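Since the exact endpoint paths and port depend on your configuration, the following is only a hedged sketch of how an external Python application might call the backend; the base URL and the /api/agent path are assumptions for illustration, not documented endpoints:

```python
import json
import urllib.request


def build_url(base: str, path: str) -> str:
    """Join a base URL and an endpoint path without doubling slashes."""
    return base.rstrip("/") + "/" + path.lstrip("/")


def get_json(base: str, path: str):
    """GET an endpoint on the locally running backend and decode the JSON body."""
    with urllib.request.urlopen(build_url(base, path)) as resp:
        return json.load(resp)
```

Usage would be something like `get_json("http://localhost:7437", "/api/agent")`, substituting your instance's actual address and an endpoint from the API documentation.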

Extending Functionality

Updating Requirements

When extending functionality, run the following from the top-level Agent-LLM directory after saving your changes or customizations:

pip install pipreqs
pipreqs ./ --savepath gen_requirements.txt --ignore bin,etc,include,lib,lib64,env,venv
pip install --no-cache-dir -r gen_requirements.txt

This will generate an updated requirements file, and install the new dependencies required to support your modifications.

Commands

To introduce new commands, generate a new Python file in the commands folder and define a class inheriting from the Commands class. Implement the desired functionality as methods within the class and incorporate them into the commands dictionary.
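A sketch of the pattern described above, under stated assumptions: the Commands base class is stubbed here rather than imported from the project's commands package, and the class, method, and command names are illustrative, not part of Agent-LLM's actual codebase:

```python
class Commands:
    """Stub standing in for the project's Commands base class."""

    def __init__(self):
        # Maps human-readable command names to their implementations.
        self.commands = {}


class Greeter(Commands):
    """Hypothetical custom command class you might drop into the commands folder."""

    def __init__(self):
        super().__init__()
        # Register this class's methods in the commands dictionary.
        self.commands = {"Say Hello": self.say_hello}

    def say_hello(self, name: str) -> str:
        """The functionality exposed to agents as the 'Say Hello' command."""
        return f"Hello, {name}!"
```

Consult an existing file in the commands folder for the real base-class import and the exact registration conventions.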

AI Providers

Each agent will have its own AI provider and provider settings such as model, temperature, and max tokens, depending on the provider. You can use this to make certain agents better at certain tasks by giving them more advanced models to complete certain steps in chains.
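Conceptually, per-agent provider settings can be pictured as a small record like the sketch below; the field names and defaults are illustrative, not the project's actual schema:

```python
from dataclasses import dataclass


@dataclass
class AgentProviderSettings:
    """Illustrative per-agent provider configuration."""

    provider: str            # e.g. "openai" or a local provider like "llamacpp"
    model: str               # the model this agent uses
    temperature: float = 0.7 # sampling temperature
    max_tokens: int = 2048   # response length budget


# A chain could give a planning agent a more capable (and more expensive)
# model than a summarizing agent, matching each step to its difficulty.
planner = AgentProviderSettings(provider="openai", model="gpt-4", temperature=0.2)
summarizer = AgentProviderSettings(provider="openai", model="gpt-3.5-turbo")
```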

Documentation

Documentation lives in the docs/ folder and can be used to generate static HTML output. See the deploy-docs section of the Publish workflow for how to build it with HonKit.

Contributing

Contribute

We welcome contributions to Agent-LLM! If you're interested in contributing, please check out our contributions guide, the open issues on the backend and frontend, and the open pull requests; submit a pull request or suggest new features. To stay updated on the project's progress, follow us on Twitter, and feel free to join our Discord.

Donations and Sponsorships

We appreciate any support for Agent-LLM's development, including donations, sponsorships, and any other kind of assistance. If you would like to support us, please contact us through email, Discord, or Twitter.

We're always looking for ways to improve Agent-LLM and make it more useful for our users. Your support will help us continue to develop and enhance the application. Thank you for considering supporting us!

Our Team 🧑‍💻

  • Josh (@Josh-XT): GitHub, Twitter, LinkedIn
  • James (@JamesonRGrieve): GitHub, Twitter, LinkedIn

Acknowledgments

This project was inspired by and is built using code from the following open-source repositories:

Please consider exploring and contributing to these projects if you like what we are doing.

History

Star History Chart

Download files

Download the file for your platform.

Source Distribution

agent-llm-1.1.41b0.tar.gz (45.6 kB)

Built Distribution

agent_llm-1.1.41b0-py3-none-any.whl (46.8 kB)
