# 🦙 Llama Trainer Utility

A "just a few lines of code" utility for fine-tuning (not only) Llama models.
To install:

```bash
pip install llama-trainer
```
## Training and Inference

### Training
```python
from llama_trainer import LlamaTrainer
from datasets import load_dataset

dataset = load_dataset("timdettmers/openassistant-guanaco")

# define your instruction-based sample
def to_instruction_fn(sample):
    return sample["text"]

formatting_func = to_instruction_fn

output_dir = "llama-2-7b-hf-finetune"
llama_trainer = LlamaTrainer(
    model_name="meta-llama/Llama-2-7b-hf",
    dataset=dataset,
    formatting_func=formatting_func,
    output_dir=output_dir,
)
llama_trainer.train()
```
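The `to_instruction_fn` above simply passes the dataset's ready-made `text` column through. As a hedged sketch: if your dataset instead had separate fields (here assumed to be hypothetical `instruction` and `response` keys, not part of the guanaco dataset), the formatting function could assemble the `### Human: ...### Assistant: ...` prompt string itself:

```python
# Hypothetical sketch: building a guanaco-style prompt from separate fields.
# The "instruction"/"response" keys are assumptions for illustration only;
# timdettmers/openassistant-guanaco already ships a complete "text" column.
def to_instruction_fn(sample):
    return f"### Human: {sample['instruction']}### Assistant: {sample['response']}"

sample = {"instruction": "Say hi!", "response": "Hi there."}
print(to_instruction_fn(sample))
# -> ### Human: Say hi!### Assistant: Hi there.
```

Whatever string this function returns is what the trainer sees as one training example, so it should match the prompt format you plan to use at inference time.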
### Inference
```python
from llama_trainer import LlamaInfer

llama_infer = LlamaInfer(output_dir)

prompt = "### Human: Give me some output!### Assistant:"
print(llama_infer(prompt))
```
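Causal language models typically echo the prompt and may keep generating past the answer, so you often want just the text after the `### Assistant:` marker. A minimal helper for that (a sketch, not part of the `llama-trainer` API) could look like:

```python
# Hypothetical helper (not provided by llama-trainer): extract the assistant
# reply from a generation in the "### Human: ...### Assistant: ..." format.
def extract_assistant_reply(generated: str) -> str:
    # split on the last assistant marker; everything after it is the reply
    _, marker, reply = generated.rpartition("### Assistant:")
    if not marker:  # marker absent: return the text unchanged
        return generated.strip()
    # cut off any follow-up human turn the model may have continued into
    reply = reply.split("### Human:")[0]
    return reply.strip()

output = "### Human: Give me some output!### Assistant: Here you go.### Human: thanks"
print(extract_assistant_reply(output))  # -> Here you go.
```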
## Download files

Source distribution: llama-trainer-0.2.0.tar.gz (11.6 kB)
### Hashes for llama_trainer-0.2.0-py3-none-any.whl

| Algorithm | Hash digest |
| --- | --- |
| SHA256 | 2a6ee5b9b61490a4a8dac093023ea02e6a987ec2ea4b86a78c7c263197e03826 |
| MD5 | 953ae554837bfd94d2b60a5d549364a7 |
| BLAKE2b-256 | 45b08c626d29ab21ddcaa3260bd010fff9b0eec5742e9ef7185310084b9cc313 |