A Large Language Model fine-tuning package. It lets you fine-tune an LLM in a single line of code by taking care of all the boilerplate in the backend.
one-line-llm-tuner
Fine-tune a Large Language Model (LLM) in a single line.
Overview
one-line-llm-tuner is a Python package designed to simplify the process of fine-tuning large language models (LLMs) like GPT-3. With just one line of code, you can adapt a pre-trained model to your specific dataset.
Features
- Simple API: Fine-tune models with minimal code.
- Supports Popular LLMs: Works with models from the transformers library, including GPT, BERT, and more.
- Customizable: Advanced users can customize the fine-tuning process with additional parameters.
Installation
You can install one-line-llm-tuner using pip:
pip install one-line-llm-tuner
Usage
After installation, the package can be used as follows:

from one_line_llm_tuner.tuner import llm_tuner

# Fine-tune on the text in train.txt, then generate a completion for the prompt
print(llm_tuner.fine_tune_model("train.txt", "Metric Coders is "))
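To illustrate what "taking care of all the boilerplate" means, here is a minimal sketch of the kind of fine-tuning pipeline such a package typically wraps, written against the Hugging Face transformers Trainer API with GPT-2. This is an assumption for illustration only, not the package's actual internals; the function name fine_tune_gpt2 is hypothetical.

```python
def fine_tune_gpt2(train_file, prompt, output_dir="./tuned-model"):
    """Sketch of a causal-LM fine-tuning pipeline.

    Imports are deferred so the sketch can be defined (and read)
    without transformers installed.
    """
    from transformers import (
        AutoModelForCausalLM,
        AutoTokenizer,
        DataCollatorForLanguageModeling,
        TextDataset,
        Trainer,
        TrainingArguments,
    )

    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    # Build a block-wise language-modelling dataset from the raw text file.
    dataset = TextDataset(tokenizer=tokenizer, file_path=train_file, block_size=128)
    # mlm=False selects causal (next-token) language modelling, not masked LM.
    collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False)

    trainer = Trainer(
        model=model,
        args=TrainingArguments(output_dir=output_dir, num_train_epochs=1),
        data_collator=collator,
        train_dataset=dataset,
    )
    trainer.train()

    # Generate a continuation for the prompt with the fine-tuned model.
    inputs = tokenizer(prompt, return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=50)
    return tokenizer.decode(outputs[0], skip_special_tokens=True)
```

one-line-llm-tuner collapses steps like these into the single fine_tune_model call shown above.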