Simple tools for storing input/label datasets as one sample per line of text
Lines Dataset
Dead simple standard for storing/loading datasets as lines of text. Supports zstd compression.
pip install lines-dataset
Format
A dataset folder looks like this:
my-dataset/
meta.json
my-inputs.txt
my-compressed-labels.txt.zst
other-labels.txt.zst
...
meta.json:
{
"lines-dataset": {
"inputs": {
"file": "my-inputs.txt",
"samples": 3000 // optionally specify the number of lines
},
"labels": {
"file": "my-compressed-labels.txt.zst",
"compression": "zstd",
"samples": 3000
},
"other-labels": {
"file": "other-labels.txt.zst",
"compression": "zstd",
"samples": 2000 // files may have different lengths; samples pair line by line, and the shortest file determines the length of the dataset
}
},
// you can add other stuff if you want to
}
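Producing a dataset in this layout only requires writing the line files and a matching meta.json. A minimal stdlib-only sketch (uncompressed files; the helper name `write_lines_dataset` is illustrative, not part of the library):

```python
import json
import tempfile
from pathlib import Path

def write_lines_dataset(root, **columns):
    """Write each column as one-sample-per-line text plus a meta.json index.

    `columns` maps a key (e.g. "inputs") to a list of single-line strings.
    Compression is skipped here; a real writer could emit .zst files and
    set "compression": "zstd" in the corresponding meta entry.
    """
    root = Path(root)
    root.mkdir(parents=True, exist_ok=True)
    meta = {}
    for key, samples in columns.items():
        filename = f"{key}.txt"
        (root / filename).write_text("".join(s + "\n" for s in samples))
        meta[key] = {"file": filename, "samples": len(samples)}
    (root / "meta.json").write_text(
        json.dumps({"lines-dataset": meta}, indent=2))
    return root

root = write_lines_dataset(Path(tempfile.mkdtemp()) / "my-dataset",
                           inputs=["a", "b"], labels=["x", "y"])
```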
Usage
import lines_dataset as lds
ds = lds.Dataset.read('path/to/my-dataset')
num_samples = ds.len('inputs', 'labels') # int | None
for x in ds.samples('inputs', 'labels'):
x['inputs'] # "the first line of my-inputs.txt\n"
x['labels'] # "the decompressed first line of my-compressed-labels.txt.zst\n"
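The format is simple enough that a plain reader fits in a few lines. This stdlib-only sketch (uncompressed files; `iter_samples` is a hypothetical helper, not the library's API) pairs files line by line and stops at the shortest file, matching the rule above:

```python
import json
import tempfile
from pathlib import Path

def iter_samples(root, *keys):
    """Yield {key: line} dicts; zip stops at the shortest file."""
    root = Path(root)
    meta = json.loads((root / "meta.json").read_text())["lines-dataset"]
    files = [open(root / meta[k]["file"]) for k in keys]
    try:
        # zip stops as soon as any file is exhausted, so the shortest
        # file determines the length of the dataset.
        for lines in zip(*files):
            yield dict(zip(keys, lines))
    finally:
        for f in files:
            f.close()

# Build a throwaway two-column dataset to demonstrate.
root = Path(tempfile.mkdtemp())
(root / "inputs.txt").write_text("a\nb\nc\n")
(root / "labels.txt").write_text("x\ny\n")  # shorter on purpose
(root / "meta.json").write_text(json.dumps({"lines-dataset": {
    "inputs": {"file": "inputs.txt", "samples": 3},
    "labels": {"file": "labels.txt", "samples": 2},
}}))

samples = list(iter_samples(root, "inputs", "labels"))
```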
A common convenience is:
import lines_dataset as lds
datasets = lds.glob('path/to/datasets/*') # list[lds.Dataset]
for x in lds.chain(datasets, 'inputs', 'labels'):
...
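In spirit, chaining is just concatenating the per-dataset sample iterators end to end. A sketch with `itertools` (assuming each dataset exposes a `samples()` iterator as shown above; `chain_samples` is an illustrative name, not the library's):

```python
import itertools

def chain_samples(datasets, *keys):
    # Yield every sample of every dataset in order, as one flat iterator,
    # analogous to what lds.chain does for a list of Datasets.
    return itertools.chain.from_iterable(ds.samples(*keys) for ds in datasets)

class FakeDataset:
    """Stand-in with the same samples() shape, for demonstration only."""
    def __init__(self, items):
        self.items = items
    def samples(self, *keys):
        return iter(self.items)

out = list(chain_samples([FakeDataset([{"inputs": "a\n"}, {"inputs": "b\n"}]),
                          FakeDataset([{"inputs": "c\n"}])], "inputs"))
```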
And that's it! Simple.