Multi-Language RoBERTa models trained by RIKEN-AIP LIAT.
liat_ml_roberta
RoBERTa models trained on Wikipedia dumps.
How to install
Install with pip:

```bash
pip install liat_ml_roberta
```
How to use
Loaded models, tokenizers, and configurations can be used in the same way as transformers.roberta.
```python
from liat_ml_roberta import RoBERTaConfig, RoBERTaModel, RoBERTaTokenizer


def main():
    # Load the tokenizer for the English model (dump 20190121, 10000 merges, 24000 vocab).
    tokenizer = RoBERTaTokenizer.from_pretrained(version="en_20190121_m10000_v24000_base")
    print(tokenizer.tokenize("This is a pen."))

    # Load the matching configuration and pretrained weights by model name.
    config = RoBERTaConfig.from_pretrained("roberta_base_en_20190121_m10000_v24000_u125000")
    model = RoBERTaModel.from_pretrained("roberta_base_en_20190121_m10000_v24000_u125000", config=config)


if __name__ == "__main__":
    main()
```
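Since the classes mirror transformers.roberta, a loaded model should work for feature extraction in the usual way. Below is a minimal sketch that assumes the tokenizer and model expose the standard transformers call interface (`tokenizer(..., return_tensors="pt")` and a `last_hidden_state` output); these specifics are assumptions, not confirmed by the documentation above.

```python
import torch

from liat_ml_roberta import RoBERTaConfig, RoBERTaModel, RoBERTaTokenizer

name = "roberta_base_en_20190121_m10000_v24000_u125000"
tokenizer = RoBERTaTokenizer.from_pretrained(version="en_20190121_m10000_v24000_base")
config = RoBERTaConfig.from_pretrained(name)
model = RoBERTaModel.from_pretrained(name, config=config)

# Assumption: the tokenizer follows the standard transformers __call__
# interface and the model returns transformers-style output objects.
inputs = tokenizer("This is a pen.", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Last-layer hidden states: (batch_size, sequence_length, hidden_size).
print(outputs.last_hidden_state.shape)
```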
Models
name | lang | size | BPE merges | vocab size | updates | Wikipedia dump |
---|---|---|---|---|---|---|
roberta_base_ja_20190121_m10000_v24000_u125000 | ja | roberta-base | 10000 | 24000 | 125000 | 20190121 |
roberta_base_ja_20190121_m10000_v24000_u500000 | ja | roberta-base | 10000 | 24000 | 500000 | 20190121 |
roberta_base_en_20190121_m10000_v24000_u125000 | en | roberta-base | 10000 | 24000 | 125000 | 20190121 |
roberta_base_en_20190121_m10000_v24000_u500000 | en | roberta-base | 10000 | 24000 | 500000 | 20190121 |
roberta_base_fr_20190121_m10000_v24000_u500000 | fr | roberta-base | 10000 | 24000 | 500000 | 20190121 |
roberta_base_de_20190121_m10000_v24000_u500000 | de | roberta-base | 10000 | 24000 | 500000 | 20190121 |
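Each model name encodes its training configuration, so any row in the table can be loaded directly by name. For example, using the same `from_pretrained` interface shown above, the Japanese model trained for 500,000 updates:

```python
from liat_ml_roberta import RoBERTaConfig, RoBERTaModel

# roberta-base / ja / 20190121 dump / 10000 BPE merges / 24000 vocab / 500000 updates
name = "roberta_base_ja_20190121_m10000_v24000_u500000"
config = RoBERTaConfig.from_pretrained(name)
model = RoBERTaModel.from_pretrained(name, config=config)
```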