Simple tools for storing inputs + labels datasets as one sample per line
Lines Dataset
Dead simple standard for storing/loading datasets as lines of text. Supports zstd compression.
pip install lines-dataset
Format
A dataset folder looks like this:
my-dataset/
  meta.json
  my-inputs.txt
  my-compressed-labels.txt.zst
  other-labels.txt.zst
  ...
meta.json:
{
  "lines-dataset": {
    "inputs": {
      "file": "my-inputs.txt",
      "samples": 3000  // optionally specify the number of lines
    },
    "labels": {
      "file": "my-compressed-labels.txt.zst",
      "compression": "zstd",
      "samples": 3000
    },
    "other-labels": {
      "file": "other-labels.txt.zst",
      "compression": "zstd",
      "samples": 2000  // not all files need the same number of lines, as long as samples match line by line; the shortest file determines the length of the dataset
    }
  }
  // you can add other stuff if you want to
}
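For illustration, a folder in this format can also be produced by hand. The `write_dataset` helper below is hypothetical (not part of lines-dataset) and writes uncompressed columns only; a compressed column would additionally need e.g. the zstandard package and a "compression": "zstd" entry in meta.json.

```python
import json
import tempfile
from pathlib import Path

def write_dataset(root, columns):
    """Hypothetical helper: write an uncompressed dataset folder.

    columns maps a key (e.g. "inputs") to a list of one-line strings
    without trailing newlines.
    """
    root = Path(root)
    root.mkdir(parents=True, exist_ok=True)
    meta = {"lines-dataset": {}}
    for key, lines in columns.items():
        fname = f"{key}.txt"
        # One sample per line, as the format requires.
        (root / fname).write_text("".join(line + "\n" for line in lines))
        meta["lines-dataset"][key] = {"file": fname, "samples": len(lines)}
    (root / "meta.json").write_text(json.dumps(meta, indent=2))

root = Path(tempfile.mkdtemp()) / "my-dataset"
write_dataset(root, {"inputs": ["hello", "world"], "labels": ["0", "1"]})
meta = json.loads((root / "meta.json").read_text())
```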
Usage
import lines_dataset as lds

ds = lds.Dataset.read('path/to/my-dataset')
num_samples = ds.len('inputs', 'labels')  # int | None
for x in ds.samples('inputs', 'labels'):
    x['inputs']  # "the first line of my-inputs.txt\n"
    x['labels']  # "the decompressed first line of my-compressed-labels.txt.zst\n"
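The pairing rule works like Python's built-in zip: columns are matched line by line, and iteration stops at the shortest file. A minimal sketch, with plain lists standing in for the lines of two column files:

```python
# Plain lists standing in for the lines of two column files.
inputs = ["a\n", "b\n", "c\n"]   # 3 samples
labels = ["x\n", "y\n"]          # only 2 samples

# zip() stops at the shorter sequence, so the dataset length is 2.
pairs = list(zip(inputs, labels))
print(pairs)  # [('a\n', 'x\n'), ('b\n', 'y\n')]
```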
A common convenience for iterating over several datasets:
import lines_dataset as lds

datasets = lds.glob('path/to/datasets/*')  # list[lds.Dataset]
for x in lds.chain(datasets, 'inputs', 'labels'):
    ...
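Under the hood, a compressed column is just a text file decompressed on the fly and read line by line. A sketch of that idea using gzip from the standard library as a stand-in for zstd (the library itself uses zstd):

```python
import gzip
import tempfile
from pathlib import Path

# Write a compressed column, one sample per line.
path = Path(tempfile.mkdtemp()) / "labels.txt.gz"
with gzip.open(path, "wt") as f:
    f.write("cat\ndog\n")

# Stream it back lazily: one decompressed line per sample.
with gzip.open(path, "rt") as f:
    labels = list(f)
print(labels)  # ['cat\n', 'dog\n']
```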
And that's it! Simple.