Python3 library for converting between various LLM dataset formats.
Project description
The llm-dataset-converter allows the conversion between various dataset formats for large language models (LLMs). Filters can be supplied as well, e.g., for cleaning up the data.
Dataset formats:
pairs: alpaca (r/w), csv (r/w), jsonl (r/w), parquet (r/w), tsv (r/w)
pretrain: csv (r/w), jsonl (r/w), parquet (r/w), tsv (r/w), txt (r/w)
translation: csv (r/w), jsonl (r/w), parquet (r/w), tsv (r/w), txt (r/w)
Compression formats:
bzip2
gzip
xz
zstd
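Installation (a minimal sketch; assuming the package is installed from PyPI under the name used by the source distribution listed further below):

pip install llm-dataset-converter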
Examples:
Simple conversion with logging info:
llm-convert \
  from-alpaca \
    -l INFO \
    --input ./alpaca_data_cleaned.json \
  to-csv-pr \
    -l INFO \
    --output alpaca_data_cleaned.csv
Automatic decompression/compression (based on file extension):
llm-convert \
  from-alpaca \
    --input ./alpaca_data_cleaned.json.xz \
  to-csv-pr \
    --output alpaca_data_cleaned.csv.gz
Filtering:
llm-convert \
  -l INFO \
  from-alpaca \
    -l INFO \
    --input alpaca_data_cleaned.json \
  keyword \
    -l INFO \
    --keyword function \
    --location any \
    --action keep \
  to-alpaca \
    -l INFO \
    --output alpaca_data_cleaned-filtered.json
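Readers, filters, and writers can be chained freely, and compression is still handled via the file extensions. A sketch combining the plugins shown above (from-alpaca, keyword, to-alpaca) with compressed input and output; the file names are just placeholders:

llm-convert \
  from-alpaca \
    --input ./alpaca_data_cleaned.json.xz \
  keyword \
    --keyword function \
    --location any \
    --action keep \
  to-alpaca \
    --output alpaca_data_cleaned-filtered.json.gz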
Changelog
0.1.0 (2024-02-05)
fixed output format of to-llama2-format filter
llama2-to-pairs filter has more robust parsing now
upgraded seppl to 0.1.0
switched to seppl classes: Splitter, MetaDataHandler, Reader, Writer, StreamWriter, BatchWriter
0.0.5 (2024-01-24)
added flag -b/--force_batch to the llm-convert tool, which forces all data to be read from the reader before filtering it and then passing it to the writer; useful for batch filters
added the randomize-records batch filter
added the --encoding ENC option to file readers
auto-determined encoding is now being logged (INFO level)
the LDC_ENCODING_MAX_CHECK_LENGTH environment variable allows overriding the default number of bytes used for determining the file encoding in auto-detect mode
default max number of bytes inspected for determining file encoding is now 10 KB
method locate_files in base_io no longer includes directories when expanding globs
added tool llm-file-encoding for determining file encodings of text files
added method replace_extension to base_io module for changing a file's extension (removes any supported compression suffix first)
stream writers (.jsonl/.txt) now work with --force_batch mode; the output file name gets automatically generated from the input file name when just using a directory for the output
0.0.4 (2023-12-19)
pairs-to-llama2 filter now has an optional --prefix parameter to use with the instruction
added the pretrain-sentences-to-pairs filter for generating artificial prompt/response datasets from pretrain data
requires seppl>=0.0.11 now
the LDC_MODULES_EXCL environment variable is now used for specifying modules to be excluded from the registration process (e.g., used when generating help screens for derived libraries that shouldn’t output the base plugins as well)
llm-registry and llm-help now allow specifying excluded modules via the -e/--excluded_modules option
to-alpaca writer now has the -a/--ensure_ascii flag to enforce ASCII compatibility in the output
added global option -u/--update_interval to the convert tool to customize how often the progress (number of records processed) is output in the console (default: 1000)
text-length filter now handles None values, i.e., ignores them
locations (i.e., input/instructions/output/etc.) can now be specified multiple times
the llm-help tool can generate index files for all the plugins now; in case of markdown it will link to the other markdown files
0.0.3 (2023-11-10)
added the record-window filter
added the llm-registry tool for querying the registry from the command-line
added the replace_patterns method to ldc.text_utils module
added the replace-patterns filter
added -p/--pretty-print flag to to-alpaca writer
added pairs-to-llama2 and llama2-to-pairs filter (since llama2 has instruction as part of the string, it is treated as pretrain data)
added to-llama2-format filter for pretrain records (no [INST]…[/INST] block)
now requiring seppl>=0.0.8 in order to raise Exceptions when encountering unknown arguments
0.0.2 (2023-10-31)
added text-stats filter
stream writers now also accept an iterable of data records, to improve throughput
text_utils.apply_max_length now uses simple whitespace splitting instead of searching for nearest word boundary to break a line, which results in a massive speed improvement
fix: text_utils.remove_patterns no longer multiplies the generated lines when using more than one pattern
added remove-patterns filter
pretrain and translation text writers now buffer records by default (-b, --buffer_size) in order to improve throughput
jsonlines writers for pair, pretrain and translation data are now stream writers
0.0.1 (2023-10-26)
initial release
File details
Details for the file llm-dataset-converter-0.1.0.tar.gz
File metadata
- Download URL: llm-dataset-converter-0.1.0.tar.gz
- Upload date:
- Size: 72.2 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/4.0.2 CPython/3.8.10
File hashes
Algorithm | Hash digest
---|---
SHA256 | 408d6a26ba0a569a88a4d256d46e301f69e53a84e0ac7114a4f9eb111bddd52c
MD5 | 609a31afc06cbcd62747eb4c8ea6ea14
BLAKE2b-256 | 0bd9554c8ccb84c830840879e32c6674a9d1cb4f775c35802fb5a2d3324380d5