
Updated and improved implementation of the self-instruct system.

Project description

airoboros: using large language models to fine-tune large language models

This is my take on implementing the Self-Instruct paper. The approach is quite heavily modified, and does not use any human-generated seeds.

This updated implementation supports either the /v1/completions endpoint or /v1/chat/completions, which is particularly useful in that it supports gpt-4 and gpt-3.5-turbo (which is 1/10 the cost of text-davinci-003).

Huge thank you to the folks over at a16z for sponsoring the costs associated with building models and associated tools!

Key differences from self-instruct/alpaca

  • support for either /v1/completions or /v1/chat/completions APIs (which allows gpt-3.5-turbo instead of text-davinci-003, as well as gpt-4 if you have access)
  • support for custom topics list, custom topic generation prompt, or completely random topics
  • in-memory vector db (Chroma) for similarity comparison, which is much faster than calculating rouge scores for each generated instruction (a rough sketch of the idea follows this list)
  • (seemingly) better prompts, which include injection of random topics to relate the instructions to, creating much more diverse synthetic instructions
  • asyncio producers with configurable batch size
  • several "instructors", each targeting specific use-cases, such as Orca-style reasoning/math, role playing, etc.
  • tries to ensure the context, if provided, is relevant to the topic and contains all the information that would be necessary to respond to the instruction, and not just a link to an article/etc.
  • generally speaking, this implementation tries to reduce some of the noise
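To illustrate the idea behind the in-memory similarity check, here's a minimal sketch using Chroma. The collection name, distance threshold, and example instructions are placeholders for illustration, not the actual airoboros internals.

# Rough sketch only: dedupe candidate instructions with an in-memory Chroma collection.
# The collection name and distance threshold are assumptions, not airoboros internals.
import chromadb

client = chromadb.Client()  # ephemeral, in-memory instance
collection = client.create_collection("instructions")

def is_novel(instruction: str, idx: int, min_distance: float = 0.35) -> bool:
    """Keep an instruction only if nothing similar is already stored."""
    if collection.count() > 0:
        result = collection.query(query_texts=[instruction], n_results=1)
        if result["distances"][0][0] < min_distance:
            return False  # too close to an existing instruction, discard it
    collection.add(documents=[instruction], ids=[f"instruction-{idx}"])
    return True

for i, candidate in enumerate(["Explain TCP slow start.", "Describe TCP slow start."]):
    print(candidate, "->", is_novel(candidate, i))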

Goal of this project

Problem and proposed solution:

  • Models can only ever be as good as the data they are trained on.
  • High quality data is difficult to curate manually, so ideally the process can be automated by AI/LLMs.
  • Large models (gpt-4, etc.) are pricey to build/run and out of reach for individuals and small-to-medium businesses, and are subject to RLHF bias, censorship, and changes without notice.
  • Smaller models (llama-2-70b, etc.) can reach performance somewhat comparable to much larger models on specific tasks when trained on high quality data.
  • The airoboros tool allows building datasets that are focused on specific tasks, which can then be used to build a plethora of individual expert models. This means we can crowdsource building experts.
  • Using either a classifier model, or by calculating vector embeddings for each item in the dataset and searching with a faiss index/cosine similarity/etc., incoming requests can be routed to a particular expert (e.g. by dynamically loading LoRAs) to get extremely high quality responses.

Progress:

  • ✅ PoC that training via self-instruction, i.e. on datasets generated by language models, works reasonably well.
  • ✅ Iterate on the PoC to use higher quality prompts, more variety of instructions, etc.
  • ✅ Split the code into separate "instructors", for specializing in any particular task (creative writing, songs, roleplay, coding, execution planning, function calling, etc.)
  • [in progress]: PoC that an ensemble of LoRAs split by category (i.e., the instructor used in airoboros) performs better than a single model of the same parameter count tuned on all of the data
  • [in progress]: Remove the dependency on OpenAI/gpt-4 to generate the training data so all datasets can be completely free and open source.
  • [future]: Automatic splitting of experts at some threshold, e.g. "coding" is split into python, js, golang, etc.
  • [future]: Hosted service/site to build and/or extend datasets or models using airoboros.
  • [future]: Depending on success of all of the above, potentially a hosted inference option with an exchange for private/paid LoRAs.

LMoE

LMoE is the simplest architecture I can think of for a mixture of experts. It doesn't use a switch transformer, doesn't require slicing and merging layers with additional fine-tuning, etc. It just dynamically loads the best PEFT/LoRA adapter model based on the incoming request.

By using this method, we can theoretically crowdsource generation of dozens (or hundreds/thousands?) of very task-specific adapters and have an extremely powerful ensemble of models with very limited resources on top of a single base model (llama-2 7b/13b/70b).
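To make the "dynamically loads the best adapter" idea concrete, here's a minimal sketch of per-request adapter switching with the peft library. The adapter paths and names are hypothetical, and this is not the actual airoboros.lmoe implementation.

# Minimal sketch of per-request LoRA adapter switching with peft; adapter paths and
# names are hypothetical, and this is not the actual airoboros.lmoe code.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("./llama-2-7b-hf", device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("./llama-2-7b-hf")

# Load one adapter per expert/category; subsequent adapters attach to the same base model.
model = PeftModel.from_pretrained(base, "./adapters/coding", adapter_name="coding")
model.load_adapter("./adapters/roleplay", adapter_name="roleplay")

def generate(prompt: str, expert: str) -> str:
    model.set_adapter(expert)  # activate the adapter chosen by the router
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    output = model.generate(**inputs, max_new_tokens=256)
    return tokenizer.decode(output[0], skip_special_tokens=True)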

Tuning the experts

The self-instruct code contained within this project uses many different "instructors" to generate training data to accomplish specific tasks. The output includes the instructor/category that generated the data. We can use this to automatically segment the training data to fine-tune specific "experts".

See scripts/segment_experts.py for an example of how the training data can be segmented, with a sampling of each other expert in the event of misrouting.

See scripts/tune_expert.py for an example of creating the adapter models (with positional args for expert name, model size, etc.)

NOTE: this assumes use of my fork of qlora https://github.com/jondurbin/qlora
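The segmentation itself boils down to grouping instructions by the category that generated them, plus a small sample from the other categories. Here's a rough sketch of that idea; the field names and the 5% sampling ratio are assumptions, not the actual scripts/segment_experts.py logic.

# Rough sketch: split a JSONL dataset by category, adding a small sample of other
# categories so misrouted requests still get reasonable answers. Field names and
# the sampling ratio are assumptions, not the actual segment_experts.py behavior.
import json
import random
from collections import defaultdict

def segment(dataset_path: str, other_fraction: float = 0.05) -> dict:
    by_category = defaultdict(list)
    with open(dataset_path) as infile:
        for line in infile:
            item = json.loads(line)
            by_category[item["category"]].append(item)

    experts = {}
    for category, items in by_category.items():
        sampled_others = [
            item
            for other, other_items in by_category.items()
            if other != category
            for item in random.sample(other_items, max(1, int(len(other_items) * other_fraction)))
        ]
        experts[category] = items + sampled_others
    return experts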

Routing requests to the expert

The "best" routing mechanism would probably be to train a classifier based on the instructions for each category, with the category/expert being the label, but that prohibits dynamic loading of new experts. By using a faiss index containing the embeddings of the system prompt + instruction found within the training data, we can load any expert on the fly with zero downtime/no additional fine-tuning of a classifier.

It's not perfect, but so long as we include a small sampling of other types of instructions in each expert's fine-tuning data, misrouting shouldn't be too much of a problem.
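As a concrete illustration of the routing idea, here's a minimal sketch that builds a faiss index over sentence embeddings of (system prompt + instruction) pairs and routes a request to the expert with the most nearest neighbors. The embedding model, the toy training samples, and the vote-by-top-k scheme are assumptions, not the actual airoboros router.

# Illustrative faiss-based router, not the actual airoboros implementation.
# Embedding model, toy samples, and k are assumptions for the sketch.
from collections import Counter

import faiss
import numpy as np
from sentence_transformers import SentenceTransformer

encoder = SentenceTransformer("all-MiniLM-L6-v2")

# Hypothetical (expert, "system prompt + instruction") pairs from the training data.
training_samples = [
    ("coding", "A chat. Write a Python function to reverse a string."),
    ("roleplay", "A chat. Stay in character as a grumpy pirate and greet me."),
]
labels = [expert for expert, _ in training_samples]
embeddings = encoder.encode([text for _, text in training_samples], normalize_embeddings=True)

index = faiss.IndexFlatIP(embeddings.shape[1])  # inner product == cosine on normalized vectors
index.add(np.asarray(embeddings, dtype="float32"))

def route(system_prompt: str, instruction: str, k: int = 25) -> str:
    query = encoder.encode([f"{system_prompt} {instruction}"], normalize_embeddings=True)
    _, neighbors = index.search(np.asarray(query, dtype="float32"), min(k, len(labels)))
    votes = Counter(labels[i] for i in neighbors[0])
    return votes.most_common(1)[0][0]

print(route("A chat.", "Implement binary search in Python."))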

Running the API server

First, download the base llama-2 model for whichever model size you want, e.g.: llama-2-7b-hf

Next, download the LMoE package that corresponds to that base model, e.g.: airoboros-lmoe-7b-2.1

NOTE: 13b also available, 70b in progress

Here's an example command to start the server:

python -m airoboros.lmoe.api \
  --base-model ./llama-2-7b-hf \
  --lmoe ./airoboros-lmoe-7b-2.1 \
  --router-max-samples 1000 \
  --router-k 25 \
  --port 8000 \
  --host 127.0.0.1

This uses flash attention (an older version; v2.0.8+ had matrix dimension issues). To run without flash_attn, use python -m airoboros.lmoe.hf_api instead.

Once started, you can send requests using the same API schema you'd use to query the OpenAI API, e.g.:

curl -H 'content-type: application/json' http://127.0.0.1:8000/v1/chat/completions -d '
{
  "model": "llama-2-7b-hf",
  "temperature": 0.7,
  "max_tokens": 2048,
  "messages": [
    {
      "role": "system",
      "content": "A chat."
    },
    {
      "role": "user",
      "content": "How much wood would a woodchuck chuck if a woodchuck could chuck wood?"
    }
  ]
}'

I've also added a vllm-based server, but the results aren't quite as good (not sure why yet). To use it, make sure vllm and fschat are installed, or run pip install airoboros[vllm]:

python -m airoboros.lmoe.vllm \
  --model ./llama-2-7b-hf \
  --lmoe-path ../airoboros-lmoe-7b-2.1 \
  --router-max-samples 100 \
  --router-k 25 \
  --port 8000 \
  --host 127.0.0.1

Generating instructions

NEW - 2023-07-18

To better accommodate the plethora of options, the configuration has been moved to a YAML config file.

Please create a copy of example-config.yaml and configure as desired.

Once you have the desired configuration, run:

airoboros generate-instructions --config-path /path/to/config.yaml

Generating topics

NEW - 2023-07-18

Again, this is now all YAML configuration based! Please create a customized version of the YAML config file, then run:

airoboros generate-topics --config-path /path/to/config.yaml

You can override the topic_prompt string in the configuration to use a different topic generation prompt.
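For instance, a config fragment overriding it might look something like this (the prompt text is purely illustrative; see example-config.yaml for the real set of options):

# Illustrative fragment only -- refer to example-config.yaml for the full configuration.
topic_prompt: >
  Give me a numbered list of 20 completely random topics, one per line.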

Support the work

https://bmc.link/jondurbin

ETH 0xce914eAFC2fe52FdceE59565Dd92c06f776fcb11

BTC bc1qdwuth4vlg8x37ggntlxu5cjfwgmdy5zaa7pswf

Models (research use only):

  • gpt-4 versions
    • llama-2 base model
      • 2.1 dataset
      • 2.0/m2.0
      • Previous generation (1.4.1 dataset)
    • original llama base model
      • Latest version (2.0 / m2.0 datasets)
      • Previous generation (1.4.1 dataset)
    • mpt-30b base model
  • gpt-3.5-turbo versions

Datasets

