
Model hub for transformers.

Project description

Usage Sample
''''''''''''

.. code:: python

    from sklearn.model_selection import train_test_split
    import torch
    from transformers import BertTokenizer
    from nlpx.dataset import TextDataset, text_collate
    from nlpx.model.wrapper import ClassifyModelWrapper
    from transformers_model import AutoCNNTextClassifier, AutoCNNTokenClassifier, \
            BertDataset, BertCollator, BertTokenizeCollator

    texts = ["some text", "another text", ...]  # list of raw text strings
    labels = [0, 0, 1, 2, 1, ...]  # integer class ids, aligned with texts
    pretrained_path = "clue/albert_chinese_tiny"
    classes = ['class1', 'class2', 'class3', ...]  # human-readable class names
    train_texts, test_texts, y_train, y_test = train_test_split(texts, labels, test_size=0.2)
    
    # datasets of raw texts and their labels
    train_set = TextDataset(train_texts, y_train)
    test_set = TextDataset(test_texts, y_test)

    ################################### TextClassifier ##################################
    # CNN-based text classifier initialized from the pretrained checkpoint
    model = AutoCNNTextClassifier(pretrained_path, len(classes))
    wrapper = ClassifyModelWrapper(model, classes)
    _ = wrapper.train(train_set, test_set, collate_fn=text_collate)

    ################################### TokenClassifier #################################
    tokenizer = BertTokenizer.from_pretrained(pretrained_path)

    ##################### BertTokenizeCollator #########################
    # the collator tokenizes each batch on the fly (max sequence length 256)
    model = AutoCNNTokenClassifier(pretrained_path, len(classes))
    wrapper = ClassifyModelWrapper(model, classes)
    _ = wrapper.train(train_set, test_set, collate_fn=BertTokenizeCollator(tokenizer, 256))

    ##################### BertCollator ##################################
    # pre-tokenize both splits up front so the collator only has to batch the tensors
    train_encodings = tokenizer.batch_encode_plus(
        train_texts,
        max_length=256,
        padding="max_length",
        truncation=True,
        return_token_type_ids=True,
        return_attention_mask=True,
        return_tensors="pt",
    )

    test_encodings = tokenizer.batch_encode_plus(
        test_texts,
        max_length=256,
        padding="max_length",
        truncation=True,
        return_token_type_ids=True,
        return_attention_mask=True,
        return_tensors="pt",
    )

    train_set = BertDataset(train_encodings, y_train)
    test_set = BertDataset(test_encodings, y_test)

    model = AutoCNNTokenClassifier(pretrained_path, len(classes))
    wrapper = ClassifyModelWrapper(model, classes)
    _ = wrapper.train(train_set, test_set, collate_fn=BertCollator())
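
For readers curious what such a collate function does, the sketch below approximates the behavior suggested by the ``BertTokenizeCollator(tokenizer, 256)`` call above: tokenize a batch of ``(text, label)`` pairs on the fly and return model-ready tensors. It is an illustration built only on the public ``transformers`` and ``torch`` APIs, not the library's actual implementation; the class name ``SimpleTokenizeCollator`` and the assumed batch structure are hypothetical.

.. code:: python

    import torch

    class SimpleTokenizeCollator:
        """Hypothetical stand-in for BertTokenizeCollator: tokenizes a
        batch of (text, label) pairs and returns tensors plus labels."""

        def __init__(self, tokenizer, max_length):
            self.tokenizer = tokenizer
            self.max_length = max_length

        def __call__(self, batch):
            texts, labels = zip(*batch)  # assumes the dataset yields (text, label)
            encodings = self.tokenizer(
                list(texts),
                max_length=self.max_length,
                padding="max_length",
                truncation=True,
                return_tensors="pt",
            )
            return encodings, torch.tensor(labels)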



Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
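
For most users this means installing straight from PyPI, e.g. ``pip install transformers-model`` (the project name as inferred from the distribution file below).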

Source Distribution

transformers-model-0.1.0.tar.gz (7.0 kB)


File details

Details for the file transformers-model-0.1.0.tar.gz.

File metadata

  • Download URL: transformers-model-0.1.0.tar.gz
  • Upload date:
  • Size: 7.0 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/5.0.0 CPython/3.9.18

File hashes

Hashes for transformers-model-0.1.0.tar.gz:

  • SHA256: 9ec8d1db72408cb74f6a41177af7c3013fdd3997c56036d541e56f544ab0f9cf
  • MD5: 48aaca762cb73cc3f334ab4fe58216d9
  • BLAKE2b-256: 69e52f305ebf689ea8515f9ed5ec47d7007bd33ac1cef1cb05638ba0150934cd

See the PyPI documentation for more details on using hashes.
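
As a quick illustration, the SHA256 digest above can be checked locally with Python's standard ``hashlib`` module; the filename below assumes the archive was downloaded into the current directory.

.. code:: python

    import hashlib

    # expected digest, copied from the table above
    EXPECTED_SHA256 = "9ec8d1db72408cb74f6a41177af7c3013fdd3997c56036d541e56f544ab0f9cf"

    with open("transformers-model-0.1.0.tar.gz", "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()

    assert digest == EXPECTED_SHA256, f"hash mismatch: {digest}"
    print("SHA256 verified")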
