Lines Dataset
Dead simple standard for storing/loading datasets as lines of text. Supports zstd compression.
pip install lines-dataset
Format
A dataset folder looks like this:
my-dataset/
  meta.json
  my-inputs.txt
  my-compressed-labels.txt.zst
  other-labels.txt.zst
  ...
meta.json:
{
  "lines-dataset": {
    "inputs": {
      "file": "my-inputs.txt",
      "samples": 3000 // optionally specify the number of lines
    },
    "labels": {
      "file": "my-compressed-labels.txt.zst",
      "compression": "zstd",
      "samples": 3000
    },
    "other-labels": {
      "file": "other-labels.txt.zst",
      "compression": "zstd",
      "samples": 2000 // files may have different lengths; samples are matched line by line, and the shortest file determines the length of the dataset
    }
  }
  // you can add other fields if you want to
}
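For example, a minimal (uncompressed) dataset folder in this format can be written with the standard library alone. This is a hypothetical sketch: the `write_dataset` helper below is not part of lines-dataset.

```python
import json
from pathlib import Path

def write_dataset(root: str, inputs: list[str], labels: list[str]) -> None:
    """Write a lines dataset: one sample per line, plus meta.json."""
    path = Path(root)
    path.mkdir(parents=True, exist_ok=True)
    (path / 'inputs.txt').write_text('\n'.join(inputs) + '\n')
    (path / 'labels.txt').write_text('\n'.join(labels) + '\n')
    meta = {
        'lines-dataset': {
            'inputs': {'file': 'inputs.txt', 'samples': len(inputs)},
            'labels': {'file': 'labels.txt', 'samples': len(labels)},
        }
    }
    (path / 'meta.json').write_text(json.dumps(meta, indent=2))

write_dataset('my-dataset', ['hello', 'world'], ['greeting', 'noun'])
```

Compressed keys would additionally write zstd-encoded files and set `"compression": "zstd"` in the corresponding entry.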
Usage
import lines_dataset as lds

ds = lds.Dataset.read('path/to/my-dataset')
num_samples = ds.len('inputs', 'labels')  # int | None
for x in ds.samples('inputs', 'labels'):
    x['inputs']  # "the first line of my-inputs.txt\n"
    x['labels']  # "the decompressed first line of my-compressed-labels.txt.zst\n"
A common convenience is:
import lines_dataset as lds

datasets = lds.glob('path/to/datasets/*')  # list[lds.Dataset]
for x in lds.chain(datasets, 'inputs', 'labels'):
    ...
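Conceptually, iterating samples pairs the keyed files line by line, like `zip`, which is why the shortest file bounds the dataset length. A plain-Python sketch of that idea (not the library's actual implementation):

```python
def samples(files: dict[str, list[str]]):
    """Yield one dict per sample; stops at the shortest key, like zip."""
    keys = list(files)
    for lines in zip(*(files[k] for k in keys)):
        yield dict(zip(keys, lines))

data = {
    'inputs': ['a\n', 'b\n', 'c\n'],
    'labels': ['x\n', 'y\n'],  # shorter: dataset length is 2
}
pairs = list(samples(data))
```

Here `pairs` has two elements, because `labels` runs out after its second line.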
And that's it! Simple.