Text classification datasets
Project description
Textbook
Universal NLU/NLI Dataset Processing Framework
It is designed with BERT in mind and currently supports seven commonsense reasoning datasets (alphanli, hellaswag, physicaliqa, socialiqa, codah, cosmosqa, and commonsenseqa). It can also be applied to other datasets with a few lines of code.
Architecture
Dependencies
conda install av -c conda-forge
pip install -r requirements.txt
pip install --editable .
# or
pip install textbook
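After installing, a quick smoke test is to import the names used in the Usage section below (assuming, as the star imports in those examples suggest, that they are exported from the package's top level):

```python
# Minimal import check; these names appear in the Usage examples below and are
# assumed to be re-exported by textbook's top-level namespace.
from textbook import MultiModalDataset, BatchTool, TokenBasedSampler, MultiTaskDataset

print("textbook imported OK")
```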
Download raw datasets
./fetch.sh
It downloads alphanli, hellaswag, physicaliqa, socialiqa, codah, cosmosqa, and commonsenseqa from AWS into data_cache.
If you want to use Something-Something, please download the dataset from 20bn's website.
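As a sanity check (assuming the files land under the data_cache layout used in the Usage examples below), you can peek at one of the downloaded splits with pandas:

```python
import pandas as pd

# Inspect one of the fetched files; the path mirrors the one used in the
# Usage section below (data_cache/alphanli/train.jsonl).
df = pd.read_json("data_cache/alphanli/train.jsonl", lines=True)
print(df.shape)
print(df.columns.tolist())
```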
Usage
1. Load a dataset with parallel pandas
from transformers import BertTokenizer
# DataLoader is PyTorch's; imported explicitly in case textbook does not re-export it
from torch.utils.data import DataLoader
from textbook import *
import modin.pandas as pd
tokenizer = BertTokenizer.from_pretrained('bert-base-cased')
d1 = MultiModalDataset(
    df=pd.read_json("data_cache/alphanli/train.jsonl", lines=True),
    template=lambda x: template_anli(x, LABEL2INT['anli']),
    renderers=[lambda x: renderer_text(x, tokenizer)],
)
bt1 = BatchTool(tokenizer, source="anli")
i1 = DataLoader(d1, batch_sampler=TokenBasedSampler(d1, batch_size=64), collate_fn=bt1.collate_fn)
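The same pattern extends to other splits. For example, a dev loader might look like this (the dev.jsonl path is an assumption here; adjust it to match however fetch.sh actually lays out the files):

```python
# Hypothetical dev-split loader following the pattern above;
# "data_cache/alphanli/dev.jsonl" is an assumed path, not confirmed by fetch.sh.
d1_dev = MultiModalDataset(
    df=pd.read_json("data_cache/alphanli/dev.jsonl", lines=True),
    template=lambda x: template_anli(x, LABEL2INT['anli']),
    renderers=[lambda x: renderer_text(x, tokenizer)],
)
i1_dev = DataLoader(d1_dev, batch_sampler=TokenBasedSampler(d1_dev, batch_size=64), collate_fn=bt1.collate_fn)
```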
2. Create a multitask dataset with multiple datasets
from transformers import BertTokenizer
# DataLoader is PyTorch's; imported explicitly in case textbook does not re-export it
from torch.utils.data import DataLoader
from textbook import *
import modin.pandas as pd
tokenizer = BertTokenizer.from_pretrained('bert-base-cased')
# add one additional special token per task, used as a task-specific `cls_token`
tokenizer.add_special_tokens({"additional_special_tokens": [
    "[ANLI]", "[HELLASWAG]"
]})
d1 = MultiModalDataset(
    df=pd.read_json("data_cache/alphanli/train.jsonl", lines=True),
    template=lambda x: template_anli(x, LABEL2INT['anli']),
    renderers=[lambda x: renderer_text(x, tokenizer, "[ANLI]")],
)
bt1 = BatchTool(tokenizer, source="anli")
i1 = DataLoader(d1, batch_sampler=TokenBasedSampler(d1, batch_size=64), collate_fn=bt1.collate_fn)
d2 = MultiModalDataset(
    df=pd.read_json("data_cache/hellaswag/train.jsonl", lines=True),
    template=lambda x: template_hellaswag(x, LABEL2INT['hellaswag']),
    renderers=[lambda x: renderer_text(x, tokenizer, "[HELLASWAG]")],
)
bt2 = BatchTool(tokenizer, source="hellaswag")
i2 = DataLoader(d2, batch_sampler=TokenBasedSampler(d2, batch_size=64), collate_fn=bt2.collate_fn)
d = MultiTaskDataset([i1, i2], shuffle=False)
# Batch size must be 1 for MultiTaskDataset, because each sub-dataset is already batched.
for batch in DataLoader(d, batch_size=1, collate_fn=BatchTool.uncollate_fn):
    pass
# {
# "source": "anli" or "hellaswag",
# "labels": ...,
# "input_ids": ...,
# "attentions": ...,
# "token_type_ids": ...,
# "images": ...,
# }
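Inside that loop, the "source" key tells you which task a batch came from, so it can be routed to the right task-specific head. As a rough sketch (assuming "attentions" holds the attention mask and that the tensors are already on the right device), feeding one batch to a plain BERT encoder could look like this:

```python
from transformers import BertModel

model = BertModel.from_pretrained('bert-base-cased')
# Account for the [ANLI]/[HELLASWAG] tokens added to the tokenizer above.
model.resize_token_embeddings(len(tokenizer))

# Inside the training loop body:
outputs = model(
    input_ids=batch["input_ids"],
    attention_mask=batch["attentions"],  # assumed to be the attention mask
    token_type_ids=batch["token_type_ids"],
)
# batch["source"] selects the task head; batch["labels"] holds the targets.
```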
Download files
Source Distribution: textbook-0.3.2.tar.gz (9.7 kB)
File details
Details for the file textbook-0.3.2.tar.gz.
File metadata
- Download URL: textbook-0.3.2.tar.gz
- Upload date:
- Size: 9.7 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/1.13.0 pkginfo/1.5.0.1 requests/2.22.0 setuptools/41.0.1 requests-toolbelt/0.9.1 tqdm/4.37.0 CPython/3.7.3
File hashes
Algorithm | Hash digest
---|---
SHA256 | 24ba76a2276fe8280c3aa0cf16e4744beca484524ea574be528cfe8c7b57c189
MD5 | 604715132f324e720e726836c252111c
BLAKE2b-256 | f51c63ccd191c3a6a7fd1f38582833c5888610511c12838bae13d852ae4f6060