
🤖 Anarchy LLM-VM 🤖

An Open-Source AGI Server for Open-Source LLMs

This is Anarchy's effort to build 🏗️ an open generalized artificial intelligence 🤖 through the LLM-VM: a way to give your LLMs superpowers 🦸 and superspeed 🚄.

You can find detailed instructions to try it live here: anarchy.ai

This project is in BETA. Expect continuous improvement and development.

📚 About 📚

๐Ÿ’ What is the Anarchy LLM-VM?

The Anarchy LLM-VM is a highly optimized and opinionated backend for running LLMs with all the modern features we've come to expect from a completion API: tool usage, persistent stateful memory, live data augmentation, data and task fine-tuning, output templating, a web playground, API endpoints, student-teacher distillation, data synthesis, load-balancing and orchestration, and large context-window mimicry.

Formally, it is a virtual machine/interpreter for human language, coordinating between data, models (CPU), your prompts (code), and tools (IO).

By doing all these things in one spot in an opinionated way, the LLM-VM can properly optimize batch calls that would be exorbitantly expensive with distributed endpoints. It furthermore strives for both model and architecture agnosticism, properly optimizing the chosen model for the current architecture.

🤌 Why use the Anarchy LLM-VM?

In line with Anarchy's mission, the LLM-VM strives to support open-source models. By utilizing open-source models and running them locally, you gain several benefits:

  • Speed up your AGI development 🚀: With AnarchyAI, one interface is all you need to interact with the latest LLMs available.

  • Lower your costs 💸: Running models locally can reduce the pay-as-you-go costs of development and testing.

  • Flexibility 🧘‍♀️: Anarchy allows you to rapidly switch between popular models so you can pinpoint the exact right tool for your project.

  • Community Vibes 🫂: Join our active community of highly motivated developers and engineers working passionately to democratize AGI.

  • WYSIWYG 👀: Open source means nothing is hidden; we strive for transparency and efficiency so you can focus on building.

๐ŸŽ Features and Roadmap

  • Implicit Agents 🔧🕵️: The Anarchy LLM-VM can be set up to use external tools through our agents such as REBEL just by supplying tool descriptions!

  • Inference Optimization 🚄: The Anarchy LLM-VM is optimized from the agent level all the way to assembly on known LLM architectures to get the most bang for your buck. With state-of-the-art batching, sparse inference and quantization, distillation, and multi-level colocation, we aim to provide the fastest framework available.

  • Task Auto-Optimization 🚅: The Anarchy LLM-VM will analyze your use cases for repetitive tasks where it can activate student-teacher distillation to train a super-efficient small model from a larger, more general model without losing accuracy. It can furthermore take advantage of data-synthesis techniques to improve results.

  • Library Callable 📚: We provide a library that can be used from any Python codebase directly.

  • HTTP Endpoints 🕸️: We provide an HTTP standalone server to handle completion requests (see the sketch after this list).

  • Live Data Augmentation 📊: (ROADMAP) You will be able to provide a live updating data set and the Anarchy LLM-VM will fine-tune your models or work with a vector DB to provide up-to-date information with citations.

  • Web Playground 🛝: (ROADMAP) You will be able to run the Anarchy LLM-VM and test its outputs from the browser.

  • Load-Balancing and Orchestration ⚖️: (ROADMAP) If you have multiple LLMs or providers you'd like to utilize, you will be able to hand them to the Anarchy LLM-VM, which will automatically figure out which to work with and when in order to optimize your uptime or your costs.

  • Output Templating 🤵: (ROADMAP) You can ensure that the LLM only outputs data in specific formats and fills in variables from a template with either regular expressions, LMQL, or OpenAI's template language.

  • Persistent Stateful Memory 📝: (ROADMAP) The Anarchy LLM-VM can remember a user's conversation history and react accordingly.
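As a rough sketch of the HTTP endpoint mentioned above, a completion request might look like the following. The port, route, and JSON fields here are illustrative assumptions only, not the confirmed API; check the server documentation for the real interface:

> curl http://localhost:3002/v1/complete \
    -H 'Content-Type: application/json' \
    -d '{"prompt": "What is Anarchy?", "context": ""}'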

🚀 Quickstart 🚀

🥹 Requirements

Installation Requirements

Python >= 3.10 is supported. Older versions of Python are supported on a best-effort basis.

Run python3 --version to check which version you are on.

To upgrade your Python, either create a new environment with conda create -n myenv python=3.10 or go to https://www.python.org/downloads/ to download the latest version.

If you plan on running the setup steps below, a suitable Python version will be installed for you.

System Requirements

Different models have different system requirements. The limiting factor on most systems will likely be RAM, but many functions work with as little as 16 GB of RAM.

That said, always look up the details of the models you're using; they all have different sizes and requirements in memory and compute resources.

👨‍💻 Installation

The quickest way to get started is to run pip install llm-vm in your Python environment.
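That is, in your terminal:

> pip install llm-vm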

Another way to install the LLM-VM is to clone this repository and install it with pip like so:

> git clone https://github.com/anarchy-ai/LLM-VM.git
> cd LLM-VM
> ./setup.sh

The setup.sh script above works only on macOS and Linux.

Alternatively, you can do this:

> git clone https://github.com/anarchy-ai/LLM-VM.git
> cd LLM-VM
> python -m venv <name>
> source <name>/bin/activate
> python -m pip install -e ".[dev]"

One Last Step, almost there!

If you're using one of the OpenAI models, you will need to set the LLM_VM_OPENAI_API_KEY environment variable with your API key.
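For example, in a bash shell (the placeholder is yours to fill in):

> export LLM_VM_OPENAI_API_KEY='<your_api_key>'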

✅ Generating Completions

Our LLM-VM gets you working directly with popular LLMs in just 3 lines. Once you've installed it (as above), just load your model and start generating!

# import our client
from llm_vm.client import Client

# Select which LLM you want to use; here we use OpenAI
client = Client(big_model = 'chat_gpt')

# Put in your prompt and go!
response = client.complete(prompt = 'What is Anarchy?', context = '', openai_key = 'ENTER_YOUR_API_KEY')
print(response)
# Anarchy is a political ideology that advocates for the absence of government...

๐Ÿƒโ€โ™€ Running LLMs Locally

# import our client
from llm_vm.client import Client

# Select the Llama 2 model
client = Client(big_model = 'llama2')

# Put in your prompt and go!
response = client.complete(prompt = 'What is Anarchy?', context = '')
print(response)
# Anarchy is a political philosophy that advocates no government...

😎 Supported Models

Select from the following models:

Supported_Models = ['chat_gpt','gpt','neo','llama2','bloom','opt','pythia']
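For instance, to run one of the other models in this list locally, just change the big_model argument; a minimal sketch using Bloom (run locally, so no API key is needed):

# import our client
from llm_vm.client import Client

# Select the Bloom model from the supported list
client = Client(big_model = 'bloom')

# Put in your prompt and go!
response = client.complete(prompt = 'What is Anarchy?', context = '')
print(response)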

☯ Picking Different Models

The LLM-VM's default sizes for local models are intended to make experimentation with LLMs accessible to everyone, but if you have the memory required, larger-parameter models will perform far better!

For example, if you want to use large and small neo models for your teacher and student, and you have enough RAM:

# import our client
from llm_vm.client import Client

# Select large and small neo models for the teacher and student
client = Client(big_model = 'neo', big_model_config={'model_uri':'EleutherAI/gpt-neox-20b'},
                small_model = 'neo', small_model_config={'model_uri':'EleutherAI/gpt-neo-125m'})

# Put in your prompt and go!
response = client.complete(prompt = 'What is Anarchy?', context = '')
print(response)
# Anarchy is a political philosophy that advocates no government...

Here are some default models' details:

Name   Model URI                Model params  Checkpoint file size
Neo    EleutherAI/gpt-neo-1.3B  1.3B          5.31 GB
Bloom  bigscience/bloom-560m    560M          1.12 GB
OPT    facebook/opt-350m        350M          622 MB

For some other choices of memory usage and parameter count in each model family, check out the model_uri tables.

🛠 Tool Usage

There are two agents: FLAT and REBEL.

Run the agents separately by going into src/llm_vm/agents/<AGENT_FOLDER> and running agent.py.

Alternatively, to run a simple interface and choose an agent to run from the CLI, run the src/llm_vm/agents/agent_interface.py file and follow the command prompt instructions.
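For example, from the repository root:

> cd src/llm_vm/agents/<AGENT_FOLDER>
> python agent.py

Or, to use the CLI picker:

> python src/llm_vm/agents/agent_interface.py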

🩷 Contributing 🩷

We welcome contributors! The best way to get started is to join our active Discord community. Otherwise, here are some ways to contribute and get paid:

Jobs

  • We're always looking for serious hackers. Prove that you can build and creatively solve hard problems and reach out!
  • The easiest way to secure a job/internship with us is to submit pull requests that address or resolve open issues.
  • Then, you can apply directly here: https://forms.gle/bUWDKW3cwZ8n6qsU8

Bounty

We offer bounties for closing specific tickets! Look at the ticket labels to see how much the bounty is. To get started, join the Discord and read the guide.

๐Ÿ™ Acknowledgements ๐Ÿ™

License

MIT License

Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

llm_vm-0.1.55.tar.gz (70.7 kB)


Built Distribution

llm_vm-0.1.55-py3-none-any.whl (79.6 kB)


File details

Details for the file llm_vm-0.1.55.tar.gz.

File metadata

  • Download URL: llm_vm-0.1.55.tar.gz
  • Size: 70.7 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/4.0.2 CPython/3.10.11

File hashes

Hashes for llm_vm-0.1.55.tar.gz
Algorithm    Hash digest
SHA256       e72f05eca38a79d3b1b3c57844eec9a712f30e724217aa33bbffba59304e6c8d
MD5          3c0ccd596162e669a73b6d4464f8a4f8
BLAKE2b-256  e943d5ebcb9ff453c2cf7d02ff2ef3dd64a7946e85f763478078262ba7c97318
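To double-check a downloaded archive against the SHA256 digest above, you can use a standard checksum tool (sha256sum from GNU coreutils; on macOS, shasum -a 256):

> sha256sum llm_vm-0.1.55.tar.gz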


File details

Details for the file llm_vm-0.1.55-py3-none-any.whl.

File metadata

  • Download URL: llm_vm-0.1.55-py3-none-any.whl
  • Size: 79.6 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/4.0.2 CPython/3.10.11

File hashes

Hashes for llm_vm-0.1.55-py3-none-any.whl
Algorithm    Hash digest
SHA256       971b95dca620725b9ac02f7f1b0789c0780334cf9fea66b6a35fe2b599b2d02b
MD5          7e63d5f465680f49626de825935aa1f5
BLAKE2b-256  2db88de0fc7c8ccd1d147c4fab7f11a36423ce7b173e82f7906997541aa07d9d

