A Large Language Model fine-tuning package. It lets you fine-tune an LLM in a single line of code by handling all of the backend boilerplate for you.
one-line-llm-tuner
Fine-tune a Large Language Model (LLM) in a single line.
Overview
one-line-llm-tuner is a Python package designed to simplify the process of fine-tuning large language models (LLMs) like GPT-3. With just one line of code, you can adapt a pre-trained model to your specific dataset.
Features
- Simple API: Fine-tune models with minimal code.
- Supports Popular LLMs: Works with models from the transformers library, including GPT, BERT, and more.
- Customizable: Advanced users can customize the fine-tuning process with additional parameters.
Installation
You can install one-line-llm-tuner using pip:
pip install one-line-llm-tuner
Usage
After installation, the package can be used as follows.
from one_line_llm_tuner.tuner import llm_tuner

# Create the tuner with default settings.
fine_tune_obj = llm_tuner.FineTuneModel()

# Fine-tune the pre-trained model on your text file.
fine_tune_obj.fine_tune_model(input_file_path="train.txt")

# Generate a completion from the fine-tuned model.
fine_tune_obj.predict_text("Elon Musk founded SpaceX in ")
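The package's pitch is that these two calls hide the usual tokenize/train/generate boilerplate behind one class. As a rough illustration of that wrapper pattern only — not the package's actual implementation — here is a self-contained toy class with the same two-method shape, using a trivial bigram table in place of a real pre-trained model:

```python
from collections import defaultdict


class FineTuneModel:
    """Toy stand-in for the wrapper pattern: all setup and training
    details are hidden behind fine_tune_model() and predict_text()."""

    def __init__(self):
        # In the real package this would hold a pre-trained model and
        # tokenizer; here a bigram table is a stand-in for "the model".
        self.bigrams = defaultdict(list)

    def fine_tune_model(self, input_file_path):
        # Read the training text and record which word follows which.
        with open(input_file_path) as f:
            words = f.read().split()
        for prev, nxt in zip(words, words[1:]):
            self.bigrams[prev].append(nxt)

    def predict_text(self, prompt):
        # Greedily extend the prompt with the most common continuation.
        words = prompt.split()
        last = words[-1]
        while last in self.bigrams and len(words) < 20:
            candidates = self.bigrams[last]
            last = max(set(candidates), key=candidates.count)
            words.append(last)
        return " ".join(words)
```

The design point the package makes is the same one this sketch shows: defaults live inside the class, so the caller only supplies a training file and a prompt.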
Hashes for one_line_llm_tuner-0.0.11.tar.gz (source distribution):

Algorithm | Hash digest
---|---
SHA256 | 33c8c304b180b56c10aa7241feb051fc2787c95b1e335ad27e777e6cbd27fdc0
MD5 | a356420430315c32bddbb2b42f2fd11a
BLAKE2b-256 | a3f88413bcf4cf35f4870a406e75a5177017ea9b8c1071d84df872240b54dccf
Hashes for one_line_llm_tuner-0.0.11-py3-none-any.whl (built distribution):

Algorithm | Hash digest
---|---
SHA256 | f778c2c28b0d6a06a6a52f767ad51d4c91e60390ce8e4469b39c2ee7d24e5d4c
MD5 | ce261c95ccfad2dc65b31827a33ff945
BLAKE2b-256 | 1fb9885a472bb8f59787e2ae972ff18fbe42502d1f253c62bf391e79c40a753f