A Large Language Model fine-tuning package that lets you fine-tune an LLM in a single line by taking care of all the boilerplate in the backend.
one-line-llm-tuner
Fine-tune a Large Language Model (LLM) in a single line.
Overview
one-line-llm-tuner is a Python package designed to simplify the process of fine-tuning large language models (LLMs) like GPT-3. With just one line of code, you can adapt a pre-trained model to your specific dataset.
Features
- Simple API: Fine-tune models with minimal code.
- Supports popular LLMs: Works with models from the transformers library, including GPT, BERT, and more.
- Customizable: Advanced users can customize the fine-tuning process with additional parameters (see the sketch after this list).
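The additional parameters are not documented on this page, so the following is only a minimal sketch of what customization might look like; the keyword arguments epochs and learning_rate are hypothetical names for illustration, not confirmed parameters of fine_tune_model.

from one_line_llm_tuner.tuner import llm_tuner

# NOTE: `epochs` and `learning_rate` are hypothetical parameter names used
# purely for illustration; check the package source for the supported options.
result = llm_tuner.fine_tune_model(
    "train.txt",
    "Metric Coders is ",
    epochs=3,            # hypothetical: number of training passes
    learning_rate=5e-5,  # hypothetical: optimizer step size
)
print(result)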
Installation
You can install one-line-llm-tuner using pip:
pip install one-line-llm-tuner
Usage
After installation, the package can be used as follows.
from one_line_llm_tuner.tuner import llm_tuner

# Fine-tune on the text in train.txt, then print a completion for the prompt.
print(llm_tuner.fine_tune_model("train.txt", "Metric Coders is "))
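For a sense of the boilerplate the one-liner hides, here is a rough sketch of an equivalent fine-tuning loop written directly against the transformers library. The choice of gpt2 as the base model, the block size, and the training settings are assumptions for illustration, not details taken from the package.

import torch
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

class TextBlocks(torch.utils.data.Dataset):
    """Chops one text file into fixed-length token blocks for causal-LM training."""
    def __init__(self, path, tokenizer, block_size=128):
        with open(path, encoding="utf-8") as f:
            ids = tokenizer(f.read())["input_ids"]
        self.blocks = [ids[i:i + block_size]
                       for i in range(0, len(ids) - block_size + 1, block_size)]
    def __len__(self):
        return len(self.blocks)
    def __getitem__(self, i):
        return {"input_ids": self.blocks[i]}

# Assumption: a GPT-2 base model; the package may use a different default.
tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained("gpt2")

dataset = TextBlocks("train.txt", tokenizer)
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False)

# Illustrative training settings, not the package's actual defaults.
args = TrainingArguments(output_dir="out", num_train_epochs=1)
trainer = Trainer(model=model, args=args,
                  data_collator=collator, train_dataset=dataset)
trainer.train()

# Generate a continuation for the prompt, as fine_tune_model appears to do.
inputs = tokenizer("Metric Coders is ", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=30)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))

Everything above is roughly what the single fine_tune_model call abstracts away.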
Hashes for one_line_llm_tuner-0.0.7-py3-none-any.whl

Algorithm | Hash digest
---|---
SHA256 | 098925f08d7f1855d04f081a8cd59fc766c7f1a26bf68ce8b319b7c5fc640afd
MD5 | 5e235611e2e43eca28d7b90a48d6bd02
BLAKE2b-256 | da7ac971c3fad3bb98f83da5a4e5c592fd540094b5a5d953836d39f44e58f464