KoBART model on huggingface transformers
KoBART-Transformers
- A port of the KoBART model released by SKT to huggingface transformers, so it can be used conveniently.
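Installation
The package is distributed on PyPI; a standard pip install (the distribution name here is an assumption based on this page's release artifacts) should be all that is needed:
pip install kobart-transformers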
Tokenizer
Implemented using PreTrainedTokenizerFast. Equivalent to PreTrainedTokenizerFast.from_pretrained("hyunwoongko/kobart").
>>> from kobart_transformers import get_kobart_tokenizer
>>> kobart_tokenizer = get_kobart_tokenizer()
>>> kobart_tokenizer.tokenize("안녕하세요. 한국어 BART 입니다.🤣:)l^o")
['▁안녕하', '세요.', '▁한국어', '▁B', 'A', 'R', 'T', '▁입', '니다.', '🤣', ':)', 'l^o']
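Since the helper is stated to be equivalent to loading the checkpoint directly, the following sketch (using only transformers and the same checkpoint name) should yield the same tokenizer; the comparison comment is an illustrative addition, not part of the original example:
>>> from transformers import PreTrainedTokenizerFast
>>> tokenizer = PreTrainedTokenizerFast.from_pretrained("hyunwoongko/kobart")
>>> tokenizer.tokenize("안녕하세요. 한국어 BART 입니다.")  # same tokens as get_kobart_tokenizer() above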
Model
Implemented using BartModel. Equivalent to BartModel.from_pretrained("hyunwoongko/kobart").
>>> from kobart_transformers import get_kobart_model, get_kobart_tokenizer
>>> kobart_tokenizer = get_kobart_tokenizer()
>>> model = get_kobart_model()
>>> inputs = kobart_tokenizer(['안녕하세요.'], return_tensors='pt')
>>> model(inputs['input_ids'])
Seq2SeqModelOutput(last_hidden_state=tensor([[[-0.4488, -4.3651, 3.2349, ..., 5.8916, 4.0497, 3.5468],
[-0.4096, -4.6106, 2.7189, ..., 6.1745, 2.9832, 3.0930]]],
grad_fn=<TransposeBackward0>), past_key_values=None, decoder_hidden_states=None, decoder_attentions=None, cross_attentions=None, encoder_last_hidden_state=tensor([[[ 0.4624, -0.2475, 0.0902, ..., 0.1127, 0.6529, 0.2203],
[ 0.4538, -0.2948, 0.2556, ..., -0.0442, 0.6858, 0.4372]]],
grad_fn=<TransposeBackward0>), encoder_hidden_states=None, encoder_attentions=None)
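Equivalently, per the statement above, the model can be loaded straight from the hub without the wrapper. A minimal sketch, assuming transformers and torch are installed (the no_grad context and shape check are illustrative additions):
>>> import torch
>>> from transformers import BartModel, PreTrainedTokenizerFast
>>> model = BartModel.from_pretrained("hyunwoongko/kobart")
>>> tokenizer = PreTrainedTokenizerFast.from_pretrained("hyunwoongko/kobart")
>>> inputs = tokenizer(["안녕하세요."], return_tensors="pt")
>>> with torch.no_grad():
...     outputs = model(inputs["input_ids"])  # returns a Seq2SeqModelOutput as shown above
>>> outputs.last_hidden_state.shape  # (batch, sequence_length, hidden_size)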