ChatLLaMA: Open and Efficient Foundation Language Models Runnable In A Single GPU

Project description

ChatLLaMA

📢 An open source implementation of a LLaMA-based ChatGPT, runnable on a single GPU, with a training process 15x faster than ChatGPT's.

Meta has recently released LLaMA, a collection of foundation large language models ranging from 7 to 65 billion parameters. LLaMA is creating a lot of excitement because it is smaller than GPT-3 but has better performance. For example, LLaMA's 13B architecture outperforms GPT-3 despite being 10 times smaller. This new collection of foundation models opens the door to faster inference performance and ChatGPT-like real-time assistants, while being cost-effective and running on a single GPU.

However, LLaMA was not fine-tuned for instruction-following tasks with a Reinforcement Learning from Human Feedback (RLHF) training process.

The good news is that we introduce ChatLLaMA, the first open source implementation of an RLHF training process for LLaMA:

  • A complete open source implementation that enables you to build a ChatGPT-style service based on pre-trained LLaMA models.
  • Compared to the original ChatGPT, the training process and single-GPU inference are much faster and cheaper by taking advantage of the smaller size of LLaMA architectures.
  • ChatLLaMA has built-in support for DeepSpeed ZeRO to speed up the fine-tuning process.
  • The library also supports all LLaMA model architectures (7B, 13B, 33B, 65B), so that you can fine-tune the model according to your preferences for training time and inference performance.
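To make the RLHF idea behind the list above concrete, here is a toy, self-contained sketch (illustrative only, not ChatLLaMA's actual API): a stand-in "reward model" scores candidate responses, and the highest-scoring one is selected — a crude proxy for the policy-improvement step that real RLHF performs with PPO on model logits.

```python
def toy_reward_model(prompt: str, response: str) -> float:
    """Pretend reward: prefer responses that mention the prompt topic
    and are reasonably concise. A real reward model is a neural network
    trained on human preference rankings."""
    score = 0.0
    topic = prompt.split()[-1].strip("?")
    if topic in response:
        score += 1.0
    score -= 0.01 * len(response.split())  # mild length penalty
    return score


def best_of_n(prompt: str, candidates: list[str]) -> str:
    """Pick the highest-reward candidate (a crude stand-in for
    steering the policy toward higher-reward outputs)."""
    return max(candidates, key=lambda r: toy_reward_model(prompt, r))


prompt = "Tell me about LLaMA"
candidates = [
    "I don't know.",
    "LLaMA is a family of foundation language models released by Meta.",
]
print(best_of_n(prompt, candidates))
```

In actual RLHF training, the reward model's scores are used as the reinforcement signal for gradient updates on the language model, rather than for simple best-of-n selection.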
Image from OpenAI’s blog.

Installation

pip install chatllama

Get started with ChatLLaMA

:warning: Please note that this code contains only the algorithmic implementation of the RLHF training process for LLaMA and does not include the model weights. To access the weights, you need to apply via Meta's form.

ChatLLaMA allows you to easily train LLaMA-based architectures in a similar way to ChatGPT, using RLHF. For example, below is the code to start training ChatLLaMA 7B:

from chatllama.rlhf.trainer import RLTrainer
from chatllama.rlhf.config import Config

# Load the training configuration from a YAML file
path = "path_to_config_file.yaml"
config = Config(path=path)

# Build the trainer, run the RLHF training process, and plot the stats
trainer = RLTrainer(config.trainer)
trainer.distillate()
trainer.train()
trainer.training_stats.plot()
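The YAML file passed to `Config` above might look roughly like the sketch below. The field names here are illustrative assumptions, not the library's actual schema — consult the configuration templates shipped with chatllama for the real keys.

```yaml
# Hypothetical config sketch — field names are assumptions.
trainer:
  actor_model: "llama-7B"
  actor_weights_path: "path/to/llama/7B"        # Meta's original weights
  reward_model_path: "path/to/reward_model"
  dataset_path: "path/to/your_dataset.json"     # your custom dataset
  epochs: 1
  batch_size: 1
  checkpoint_folder: "./checkpoints"
```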

Note that you must provide Meta's original weights and your custom dataset before starting the fine-tuning process. Alternatively, you can generate your own dataset using LangChain's agents:

python generate_dataset.py
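Whether generated or hand-built, the dataset is a collection of prompt/response records. The snippet below writes one hypothetical record to a JSON file; the field names (`user_input`, `completion`) are assumptions for illustration — match them to whatever format `generate_dataset.py` actually emits for your setup.

```python
import json

# Hypothetical instruction-tuning record (field names are assumptions).
example = {
    "user_input": "Explain RLHF in one sentence.",
    "completion": (
        "RLHF fine-tunes a language model using a reward model trained "
        "on human preference rankings."
    ),
}

# A dataset is typically a JSON list of such records.
with open("toy_dataset.json", "w") as f:
    json.dump([example], f, indent=2)

print("wrote", len([example]), "record(s)")
```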

Project details


Download files

Download the file for your platform.

Source Distributions

No source distribution files are available for this release.

Built Distribution

chatllama-0.0.4-py3-none-any.whl (49.9 kB)

File details

Details for the file chatllama-0.0.4-py3-none-any.whl.

File metadata

  • Download URL: chatllama-0.0.4-py3-none-any.whl
  • Upload date:
  • Size: 49.9 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/3.6.0 importlib_metadata/4.8.2 pkginfo/1.7.1 requests/2.26.0 requests-toolbelt/0.9.1 tqdm/4.62.3 CPython/3.6.8

File hashes

Hashes for chatllama-0.0.4-py3-none-any.whl:

  • SHA256: c5e4c4788dd761e7acc43fd07df239761b41157fef1aee9fc03660c426a97dba
  • MD5: 7157fde923810421f78a58a30ca008a3
  • BLAKE2b-256: 7a0347f137f5bb17395693d7bbb6fd687480ace64bb4fff6d8eee812d0b5dcf2
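To verify a downloaded wheel against the SHA256 digest listed above, you can hash the file locally with the standard-library `hashlib` module. This is a generic sketch; the filename is the one listed above.

```python
import hashlib


def sha256_of_file(path: str) -> str:
    """Return the hex SHA256 digest of a file, read in chunks so large
    files don't need to fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()


expected = "c5e4c4788dd761e7acc43fd07df239761b41157fef1aee9fc03660c426a97dba"
# After downloading the wheel, compare the digests before installing:
# assert sha256_of_file("chatllama-0.0.4-py3-none-any.whl") == expected
```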
