# CLTrier ProSem
## Usage
```python
from cltrier_prosem import Pipeline

# init pipeline object (load model, data, trainer)
pipeline = Pipeline({
    'encoder': {
        'model': 'deepset/gbert-base',  # huggingface model slug
    },
    'dataset': {
        'path': './path/data',  # path to data directory (containing train/test.parquet)
        'text_column': 'text',  # column containing src text
        'label_column': 'label',  # column containing target label
        'label_classes': ['class_1', 'class_2'],  # list of target classes
    },
    'classifier': {
        'hid_size': 512,  # size of classifier perceptron
        'dropout': 0.2,  # dropout value
    },
    'pooler': {
        # type of pooling, possible values:
        # 'cls', 'sent_mean', 'subword_{first|last|mean|min|max}'
        'form': 'cls',
        'span_column': 'span',  # column containing spans, required if subword pooling is used
    },
    'trainer': {
        'num_epochs': 5,  # number of training epochs
        'batch_size': 32,  # batch size in both training and evaluation
        'learning_rate': 1e-3,  # trainer learning rate
        'export_path': './path/output',  # output path for logging and results
    },
})

# call pipeline object (training and evaluation)
pipeline()
```
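To clarify the `pooler.form` options, the following is a plain-Python sketch (not part of `cltrier_prosem`, and the `pool` helper is hypothetical) of how the listed pooling forms reduce a sequence of per-token vectors to a single vector, using toy two-dimensional inputs:

```python
def pool(token_vecs, form, span=None):
    """Reduce a list of token vectors to one vector, per the named pooling form."""
    if form == 'cls':
        # take the first ([CLS]) token representation
        return token_vecs[0]
    if form == 'sent_mean':
        # dimension-wise mean over all tokens in the sentence
        return [sum(dim) / len(token_vecs) for dim in zip(*token_vecs)]
    if form.startswith('subword_'):
        # restrict to the token span of the target word, then reduce
        lo, hi = span
        sub = token_vecs[lo:hi]
        op = form.removeprefix('subword_')
        if op == 'first':
            return sub[0]
        if op == 'last':
            return sub[-1]
        reducers = {'mean': lambda d: sum(d) / len(d), 'min': min, 'max': max}
        return [reducers[op](dim) for dim in zip(*sub)]
    raise ValueError(f'unknown pooling form: {form}')


vecs = [[1.0, 0.0], [2.0, 2.0], [4.0, 6.0]]
pool(vecs, 'cls')                       # -> [1.0, 0.0]
pool(vecs, 'subword_max', span=(1, 3))  # -> [4.0, 6.0]
```

The subword variants are why `span_column` is required when `form` is one of the `subword_*` values: each row must carry the token span of the word being probed.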