Package for training and deploying doctrinally correct LLMs.
Project description
AI for the Church
Modern LLMs are rooted in secular value systems that are often misaligned with those of religious organisations. This PyPI package allows anyone to train and deploy doctrinally correct LLMs based on Llama2. Effectively, we are aligning models to a chosen set of values.
Model fine-tuning
from aiforthechurch import align_llama2
doctrinal_dataset = "/path/to/csv"
align_llama2(doctrinal_dataset)
aiforthechurch is integrated with HuggingFace such that the aligned model is automatically pushed to your HuggingFace repo of choice at the end of training.
At aiforthechurch.org we provide tools for generating doctrinal datasets; a few examples are available at huggingface.co/AiForTheChurch.
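As a rough illustration of what a doctrinal dataset could look like, the snippet below writes a minimal prompt/response CSV. The two-column layout and the column names ("prompt", "response") are assumptions for illustration only, not necessarily the exact format expected by align_llama2.

```python
# Hypothetical dataset layout; align_llama2 may expect different columns.
import csv

rows = [
    {"prompt": "Does Jesus love me?", "response": "Yes. Scripture teaches that ..."},
    {"prompt": "What is grace?", "response": "Grace is the free and unmerited favour of God ..."},
]

with open("doctrinal_dataset.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["prompt", "response"])
    writer.writeheader()
    writer.writerows(rows)
```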
Model inference and deployment
We implemented an inference API in the same format as OpenAI's.
import aiforthechurch
aiforthechurch.Completion.create(denomination="catholic", message="Does Jesus love me?")
There is also an asynchronous streaming API; just set stream=True.
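A minimal sketch of consuming the streaming API, assuming that with stream=True the call returns an iterable of text chunks (the exact chunk format is an assumption here):

```python
import aiforthechurch

# Assumption: stream=True yields partial completions incrementally.
stream = aiforthechurch.Completion.create(
    denomination="catholic",
    message="Does Jesus love me?",
    stream=True,
)
for chunk in stream:
    print(chunk, end="", flush=True)
```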
You can also use our code to create your own inference server for the models you train: add a denomination and the path to your HuggingFace model to the MODELS dictionary in gloohack/deployment/prod_models.py (a sketch of such an entry follows the command below). Start the server by running the following command on your machine:
python gloohack/deployment/inference.py
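For reference, an entry in the MODELS dictionary might look like the sketch below; the repository names are hypothetical and only illustrate the denomination-to-HuggingFace-path mapping described above:

```python
# gloohack/deployment/prod_models.py (illustrative sketch, not the actual file contents)
MODELS = {
    # denomination -> path to the fine-tuned model on HuggingFace
    "catholic": "AiForTheChurch/llama2-7b-catholic",  # hypothetical repo name
    "baptist": "your-username/llama2-7b-baptist",     # your own aligned model
}
```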
Model training requirements
If you wish to train your own models using this repo, you will need access to a machine with over 16GB of GPU memory and 30GB of RAM. The full model weights for Llama2-7B amount to almost 30GB, but we use parameter-efficient fine-tuning (PEFT) with LoRA to save memory and to avoid catastrophic forgetting during fine-tuning.
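For readers unfamiliar with the approach, the sketch below shows how a LoRA fine-tune of Llama2-7B is commonly set up with transformers, peft, and bitsandbytes. It illustrates the general technique only; it is not the exact training code behind align_llama2, and the hyperparameters are placeholders:

```python
# Illustrative PEFT/LoRA setup; the package's actual training code may differ.
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

base = "meta-llama/Llama-2-7b-hf"  # gated repo; requires accepting Meta's license
tokenizer = AutoTokenizer.from_pretrained(base)

# Load the frozen base model in 8-bit (bitsandbytes) so it fits in ~16GB of GPU memory.
model = AutoModelForCausalLM.from_pretrained(
    base,
    quantization_config=BitsAndBytesConfig(load_in_8bit=True),
    device_map="auto",
)

# Attach small low-rank adapters; only these weights are updated, which keeps
# memory usage low and limits catastrophic forgetting of the base model.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # placeholder choice of attention projections
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of all parameters
```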
References
We leaned heavily on open-source libraries like transformers, peft, and bitsandbytes for this project.
- Dettmers, Tim, Mike Lewis, Younes Belkada, and Luke Zettlemoyer. 2022. "LLM.int8(): 8-bit Matrix Multiplication for Transformers at Scale." arXiv preprint arXiv:2208.07339.
- Hu, Edward J., Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, and Weizhu Chen. 2021. "LoRA: Low-Rank Adaptation of Large Language Models." arXiv preprint arXiv:2106.09685.
Project details
File details
Details for the file aiforthechurch-0.5.tar.gz.
File metadata
- Download URL: aiforthechurch-0.5.tar.gz
- Upload date:
- Size: 2.4 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/4.0.2 CPython/3.9.6
File hashes
Algorithm | Hash digest
--- | ---
SHA256 | 197731f022ab1543e6283734fd4b821dceddf8086638da517bea285e725f7fe6
MD5 | 180d6068a6e931e8bdd812750c48f1d1
BLAKE2b-256 | c0c94d373b941ac385ac730bc09d0f947ce9334fa8b6c2e1eb956f3915e56cc1
File details
Details for the file aiforthechurch-0.5-py3-none-any.whl.
File metadata
- Download URL: aiforthechurch-0.5-py3-none-any.whl
- Upload date:
- Size: 2.4 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/4.0.2 CPython/3.9.6
File hashes
Algorithm | Hash digest
--- | ---
SHA256 | 78877f555f8a1d82a78a97e71661feb8da64b0a0849c6b7c79bd0acb9e7b75b3
MD5 | 389cb9910bbb4d5ba2bd7f0b00097d46
BLAKE2b-256 | 93d125c2b73d0425111fb9519e679166a4ccdf8985a46ae01ac3d65e9069aecf