# hakkero-dataloader
A general dataloader built on top of the PyTorch DataLoader.
## 1. How to use

### 1.1 Build Index
Install via

```shell
pip install hakkero-dataloader
```

and run the following command to build the index:

```shell
hakkero -h
```

```
usage: hakkero [-h] --filename FILENAME [--output OUTPUT]

build index for dataset

options:
  -h, --help           show this help message and exit
  --filename FILENAME  full filename of jsonl file
  --output OUTPUT      output path for saving data.jsonl and index.h5
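For example, a minimal input file in the `legacy` layout (described in section 2.2) might be written like this before indexing. The file name, uids, and field values here are hypothetical, for illustration only:

```python
import json

# Hypothetical samples in the "legacy" layout: a "uid" plus a "data"
# dict of text fields (no "label" field for pretrain data).
samples = [
    {"uid": "doc-0", "data": {"title": "Hello", "text": "Hello world."}},
    {"uid": "doc-1", "data": {"title": "Bye", "text": "Goodbye world."}},
]

# Write one JSON object per line (jsonl).
with open("data.jsonl", "w", encoding="utf-8") as f:
    for sample in samples:
        f.write(json.dumps(sample, ensure_ascii=False) + "\n")
```

The index can then be built with `hakkero --filename data.jsonl --output <output dir>`.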
### 1.2 Use In Training
```python
from hakkero.dataset import get_dataset

# pretrain or sft
from hakkero.dataset import PadLoader
from hakkero.dataset import UnpadLoader

# preference
from hakkero.dataset import PreferencePadLoader
from hakkero.dataset import PreferenceUnpadLoader

dp_world_size, dp_rank = 1, 0
tokenizer = ...
batch_size = 4
max_length = 4096
n_workers = 2

dataset = get_dataset(
    config="/path/to/dataset",
    tokenizer=tokenizer,
    num_epochs=-1,
    max_length=max_length,
    homogeneous=True,
    seed=9527,
    rank=dp_rank,
    world_size=dp_world_size,
    n_workers=n_workers,
    # segment and tokenize strategies; alternatively set them in `config`
    # and pass st_segment=None and st_tokenize=None:
    st_segment="naive",
    st_tokenize="legacy",
    # add bos/eos token for the legacy tokenize strategy
    add_bos_token=True,
    add_eos_token=True,
)

dataloader = UnpadLoader(dataset, max_total_length=batch_size * max_length)
prefetcher = dataloader.prefetch(n_workers)

for step, batch in enumerate(prefetcher, start=0):
    print(batch)
```
Example of `config`:

```json
{
    "hermes25_1": {
        "group": "en",
        "name": "hermes25_1",
        "epoch": 1,
        "path": "hermes25",
        "strategy": {
            "st_segment": "integrous",
            "st_tokenize": "hg"
        },
        "weight": 0.5
    },
    "hermes25_2": {
        "group": "en",
        "name": "hermes25_2",
        "epoch": 1,
        "path": "hermes25",
        "strategy": {
            "st_segment": "integrous",
            "st_tokenize": "hg"
        },
        "weight": 0.5
    }
}
```
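As a rough illustration of how such per-dataset weights might be consumed (this is a sketch, not hakkero's internal logic), the weights can be normalized into sampling probabilities:

```python
# A trimmed-down version of the config above (strategy fields omitted).
config = {
    "hermes25_1": {"group": "en", "epoch": 1, "path": "hermes25", "weight": 0.5},
    "hermes25_2": {"group": "en", "epoch": 1, "path": "hermes25", "weight": 0.5},
}

# Normalize weights into per-dataset sampling probabilities.
total = sum(entry["weight"] for entry in config.values())
probs = {name: entry["weight"] / total for name, entry in config.items()}
```

With equal weights of 0.5, each dataset is sampled with probability 0.5.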
## 2. Supported Strategies

See `segmentation.py` and `tokenization.py` for more details.

### 2.1 Segmentation Strategies
- `integrous`: discard samples that are too long (exceed `max_length`).
- `concat`: split long samples, concatenate each piece with the previous segment, and shuffle all segments (does not support preference data).
- `naive`: split long samples at random lengths and shuffle all segments (does not support preference data).
- `unbiased`: split samples exceeding `max_length` at random lengths and shuffle all segments (does not support preference data).
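A toy sketch of the difference between `integrous` and `naive`-style segmentation (a simplification; the real implementations live in `segmentation.py`, and these function signatures are invented for illustration):

```python
import random

def integrous(sample_ids, max_length):
    # Discard any sample longer than max_length; keep short ones whole.
    return [sample_ids] if len(sample_ids) <= max_length else []

def naive(sample_ids, max_length, rng):
    # Split a long sample into segments of random length <= max_length.
    segments, start = [], 0
    while start < len(sample_ids):
        step = rng.randint(1, max_length)
        segments.append(sample_ids[start:start + step])
        start += step
    return segments
```

`integrous` trades data for simplicity (long samples are dropped), while `naive` keeps every token but at the cost of cutting samples at arbitrary positions.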
### 2.2 Tokenization Strategies
- `legacy`: use `\n\n` as the delimiter to join text and use `tokenizer.encode` to encode the input.
  - format of input data:

    ```json
    { "uid": "xxx", "data": { "title": "xxx", "summary": "xxx", "abstract": "xxx", "text": "xxx", "question": "xxx", "answer": "xxx", "code": "xxx", "label": "xxx" } }
    ```

  - All fields except `label` are stripped and joined with `"\n\n"` as the context. `label` is the target to learn for finetuning (pretrain data should not have the `label` field). See func `legacy` in tokenization.py for more details.
  - extra parameters: `add_bos_token`, `add_eos_token`
- `hg`: huggingface message data, use `tokenizer.apply_chat_template` to encode the input.
  - format of input data:

    ```json
    { "uid": "xx", "data": [ {"role": "user", "content": "xxx"}, {"role": "assistant", "content": "xxx"}, ... ] }
    ```

  - See func `huggingface_message` in tokenization.py for more details.
- `chatml`: chat message data, use chatml to encode the input.
  - format of input data:

    ```json
    { "uid": "xx", "data": [ {"role": "user", "content": "xxx"}, {"role": "assistant", "content": "xxx"}, ... ] }
    ```

  - See func `chatml_message` in tokenization.py for more details.
- `hg_preference`: preference data, use `tokenizer.apply_chat_template` to encode the input.
  - format of input data:

    ```json
    { "uid": "xx", "data": { "context": [ {"role": "user", "content": "xxx"}, {"role": "assistant", "content": "xxx"}, ... {"role": "user", "content": "xxx"} ], "chosen": "chosen response", "rejected": "rejected response" } }
    ```

  - See func `huggingface_preference` in tokenization.py for more details.
- `chatml_preference`: preference data, use chatml to encode the input.
  - format of input data:

    ```json
    { "uid": "xx", "data": { "context": [ {"role": "user", "content": "xxx"}, {"role": "assistant", "content": "xxx"}, ... {"role": "user", "content": "xxx"} ], "chosen": "chosen response", "rejected": "rejected response" } }
    ```

  - See func `chatml_preference` in tokenization.py for more details.
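The field-joining behavior described for `legacy` above can be sketched as follows (a simplification of func `legacy` in tokenization.py; the helper name here is invented for illustration):

```python
def build_context_and_label(data):
    # Join all fields except "label" with "\n\n" as the context;
    # "label" (if present) is the finetuning target.
    context = "\n\n".join(
        value.strip() for key, value in data.items() if key != "label"
    )
    return context, data.get("label")

sample = {"title": "Greeting", "text": "Hello world.", "label": "hi"}
context, label = build_context_and_label(sample)
# context == "Greeting\n\nHello world."; label == "hi"
```

For pretrain data, which has no `label` field, the returned label would simply be `None` and the whole sample becomes context.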