
chariot


Deliver the ready-to-train data to your NLP model.

  • Prepare Dataset
    • You can prepare typical NLP datasets through chazutsu.
  • Build & Run Preprocess
    • You can build a preprocess pipeline like a scikit-learn Pipeline.
    • Preprocessing for each dataset column is executed in parallel by Joblib.
    • Multi-language text tokenization is supported by spaCy.
  • Format Batch
    • Sample a batch from the preprocessed dataset and format it for training the model (padding etc.).
    • You can use pre-trained word vectors through chakin.

chariot enables you to concentrate on training your model!

(chariot flow diagram)

Install

pip install chariot

Prepare dataset

You can download various datasets by using chazutsu.

import chazutsu
from chariot.storage import Storage


storage = Storage("your/data/root")
r = chazutsu.datasets.MovieReview.polarity().download(storage.path("raw"))

df = storage.chazutsu(r.root).data()
df.head(5)

Then:

     polarity  review
  0  0         synopsis : an aging master art thief , his sup...
  1  0         plot : a separated , glamorous , hollywood cou...
  2  0         a friend invites you to a movie . this film wo...

The Storage class manages a directory structure that follows Cookiecutter Data Science (a short path-resolution sketch follows the layout below).

Project root
  └── data
       ├── external     <- Data from third party sources (ex. word vectors).
       ├── interim      <- Intermediate data that has been transformed.
       ├── processed    <- The final, canonical datasets for modeling.
       └── raw          <- The original, immutable data dump.
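
For example, the raw dataset download above goes to storage.path("raw"), and the word-vector example later reads from storage.path("external/..."). A minimal sketch of how names map onto this layout (the comments describe the intended destination folders; the exact path joining is handled by Storage):

from chariot.storage import Storage

storage = Storage("your/data/root")

# Each name points at one of the folders in the layout above.
raw_dir = storage.path("raw")              # original, immutable data dump
external_dir = storage.path("external")    # third-party data such as word vectors
processed_dir = storage.path("processed")  # final, canonical datasets for modeling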

Build & Run Preprocess

Build a preprocess pipeline

All preprocessors are defined in chariot.transformer.
Transformers are implemented by extending scikit-learn's Transformer.
Because of this, the Transformer API will feel familiar, and you can mix in scikit-learn's own preprocessors.

import chariot.transformer as ct
from chariot.preprocessor import Preprocessor


# train_data: an iterable of raw texts (for example, the "review" column above)
preprocessor = Preprocessor()
preprocessor\
    .stack(ct.text.UnicodeNormalizer())\
    .stack(ct.Tokenizer("en"))\
    .stack(ct.token.StopwordFilter("en"))\
    .stack(ct.Vocabulary(min_df=5, max_df=0.5))\
    .fit(train_data)

preprocessor.save("my_preprocessor.pkl")

loaded = Preprocessor.load("my_preprocessor.pkl")
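
Since Preprocessor follows the scikit-learn Transformer convention, applying the fitted pipeline to new text should just be a transform call. A minimal sketch assuming that convention; texts is a hypothetical list of raw strings:

# Hypothetical raw inputs (not from the dataset above).
texts = [
    "A quiet, surprisingly moving little film.",
    "Two hours I will never get back.",
]

# Runs normalization, tokenization, stopword filtering and vocabulary lookup
# in the stacked order, assuming the standard fit/transform interface.
indexed = preprocessor.transform(texts)
print(indexed[0])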

There are six types of transformers prepared in chariot (the sketch after this list maps them onto the classes used in this README's examples).

  • TextPreprocessor
    • Preprocess the text before tokenization.
    • TextNormalizer: Normalize text (replace some characters etc.).
    • TextFilter: Filter the text (delete some spans in the text etc.).
  • Tokenizer
    • Tokenize the texts.
    • It is powered by spaCy, and you can choose MeCab or Janome for Japanese.
  • TokenPreprocessor
    • Normalize/Filter the tokens after tokenization.
    • TokenNormalizer: Normalize tokens (to lower case, to original form etc.).
    • TokenFilter: Filter tokens (extract only nouns etc.).
  • Vocabulary
    • Make the vocabulary and convert tokens to indices.
  • Formatter
    • Format (preprocessed) data for training your model.
  • Generator
    • Generate target data to train your (language) model.
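
A sketch that only reuses the transformers already shown in this README, annotated with the type each one belongs to (a Formatter example, Padding, appears in the next section; Generator has no example in this README):

import chariot.transformer as ct
from chariot.preprocessor import Preprocessor

preprocessor = Preprocessor()
(preprocessor
    .stack(ct.text.UnicodeNormalizer())           # TextPreprocessor: normalize raw text
    .stack(ct.Tokenizer("en"))                    # Tokenizer: split text into tokens
    .stack(ct.token.StopwordFilter("en"))         # TokenPreprocessor: drop stopwords
    .stack(ct.Vocabulary(min_df=5, max_df=0.5)))  # Vocabulary: tokens -> indices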

Build a preprocessor for a dataset

When you want to apply a preprocessor to each column of your dataset, you can use DatasetPreprocessor.

import chariot.transformer as ct
from chariot.dataset_preprocessor import DatasetPreprocessor
from chariot.transformer.formatter import Padding


pad_length = 300  # hypothetical maximum sequence length for padding

dp = DatasetPreprocessor()
dp.process("review")\
    .by(ct.text.UnicodeNormalizer())\
    .by(ct.Tokenizer("en"))\
    .by(ct.token.StopwordFilter("en"))\
    .by(ct.Vocabulary(min_df=5, max_df=0.5))\
    .by(Padding(length=pad_length))\
    .fit(train_data["review"])
dp.process("polarity")\
    .by(ct.formatter.CategoricalLabel(num_class=3))


# data: a DataFrame with "review" and "polarity" columns (like df above)
preprocessed = dp.preprocess(data)

# A DatasetPreprocessor holds multiple preprocessors,
# so its save file format is `tar.gz`.
dp.save("my_dataset_preprocessor.tar.gz")

loaded = DatasetPreprocessor.load("my_dataset_preprocessor.tar.gz")

Train your model with chariot

chariot has features to help you train your model.

# Feed the whole preprocessed & formatted dataset at once
formatted = dp(train_data).preprocess().format().processed

model.fit(formatted["review"], formatted["polarity"], batch_size=32,
          validation_split=0.2, epochs=15, verbose=2)

# Or iterate over batches
for batch in dp(train_data).preprocess().iterate(batch_size=32, epoch=10):
    model.train_on_batch(batch["review"], batch["polarity"])

You can use pre-trained word vectors downloaded by chakin.

from chariot.storage import Storage
from chariot.transformer.vocabulary import Vocabulary

# Download word vector
storage = Storage("your/data/root")
storage.chakin(name="GloVe.6B.50d")

# Make embedding matrix
vocab = Vocabulary()
vocab.set(["you", "loaded", "word", "vector", "now"])
embed = vocab.make_embedding(storage.path("external/glove.6B.50d.txt"))
print(embed.shape)  # (len(vocab.count), 50)
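
If your model is a Keras model (as the fit/train_on_batch calls above suggest), the embedding matrix can be plugged into an Embedding layer. A minimal sketch assuming TensorFlow/Keras; the layer configuration here is illustrative and not part of chariot:

import tensorflow as tf

# embed is the (vocabulary size, 50) matrix built above.
vocab_size, dim = embed.shape
embedding_layer = tf.keras.layers.Embedding(
    input_dim=vocab_size,
    output_dim=dim,
    embeddings_initializer=tf.keras.initializers.Constant(embed),
    trainable=False,  # keep the pre-trained vectors fixed
)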
