Small Language Model with RoPE
Project description
TrorYong Language Model
TrorYongGPT, a small language model with Rotary Positional Embeddings (RoPE), is a re-implementation of OpenAI's GPT-2.
TrorYong (ត្រយ៉ង) is the Khmer word for the giant ibis, the bird that symbolises Cambodia.
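Rotary Positional Embeddings encode position by rotating consecutive pairs of query/key dimensions by position-dependent angles, so attention scores depend only on the relative offset between tokens. A minimal pure-Python sketch of that idea (illustrative only, not the package's implementation):

```python
import math

def rope_rotate(vec, pos, base=10000.0):
    """Rotate consecutive pairs of vec by position-dependent angles (RoPE)."""
    d = len(vec)
    out = []
    for i in range(0, d, 2):
        theta = base ** (-i / d)              # frequency for this pair
        angle = pos * theta
        c, s = math.cos(angle), math.sin(angle)
        x0, x1 = vec[i], vec[i + 1]
        out += [x0 * c - x1 * s, x0 * s + x1 * c]
    return out

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

q = [1.0, 0.0, 0.5, -0.5]
k = [0.3, 0.8, -0.2, 0.1]

# The score depends only on the relative offset (here -3 in both cases).
s1 = dot(rope_rotate(q, 7), rope_rotate(k, 4))
s2 = dot(rope_rotate(q, 3), rope_rotate(k, 0))
print(abs(s1 - s2) < 1e-9)  # True
```

This relative-position property is what distinguishes RoPE from the learned absolute position embeddings of the original GPT-2.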
Support My Work
While this work comes truly from the heart, each project represents a significant investment of time, from deep-dive research and code preparation to the final narrative and editing process. I am incredibly passionate about sharing this knowledge, but maintaining this level of quality is a major undertaking. If you find my work helpful and are in a position to do so, please consider supporting it with a donation. You can click here to donate or scan the QR code below. Your generosity is a huge encouragement and helps ensure that I can continue creating in-depth, valuable content for you.
Installation
You can install tror-yong-lm with pip:
```shell
pip install tror-yong-lm
```
Usage
Loading tokenizer
TrorYongGPT is a small language model that you can train from scratch.
To that end, you can pair TrorYongGPT with your own tokenizer.
Just make sure that the tokenizer used for training and the tokenizer used for inference are the same.
For example, we can use a tokenizer from OpenAI's tiktoken:
```python
import tiktoken

tokenizer = tiktoken.get_encoding('gpt2')
print(tokenizer.n_vocab)
```
When preparing a dataset to train TrorYongGPT, you just need to transform the text into token ids using the tokenizer:
```python
sentence = 'Cambodia needs peace.'
token_ids = tokenizer.encode(sentence)
```
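Since any tokenizer with `encode`/`decode` methods and a vocab size can fill this role, even a minimal byte-level tokenizer works. A hedged sketch of a hypothetical custom tokenizer (not part of the package):

```python
class ByteTokenizer:
    """Minimal byte-level tokenizer: one token id per UTF-8 byte."""
    n_vocab = 256

    def encode(self, text):
        return list(text.encode("utf-8"))

    def decode(self, ids):
        return bytes(ids).decode("utf-8")

tok = ByteTokenizer()
ids = tok.encode("Cambodia")
print(tok.decode(ids))  # Cambodia
```

The key requirement from the text above holds here too: whatever tokenizer you choose, use the same one for training and inference, and pass its vocab size to the model config.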
Loading TrorYongGPT model
```python
import torch

from tror_yong_lm import TrorYongGPT, TrorYongConfig

config = TrorYongConfig(
    n_vocab=tokenizer.n_vocab,  # use the tokenizer's vocab size
    n_ctx=64,
    n_layer=4,
    n_head=6,
    n_kv_head=6,
    n_state=384,
)
model = TrorYongGPT(config)

token_ids = [100, 103, 104]  # suppose we have these token ids
torch_arr = torch.tensor([token_ids], dtype=torch.long)  # (B, T) = (1, 3)
logits = model(torch_arr)  # (B, T, n_vocab) = (1, 3, n_vocab)
```
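The logits at the last position score every vocabulary entry as the next token. The simplest way to pick one is greedy decoding, sketched here in plain Python (independent of the package's own generation code):

```python
def next_token_id(last_logits):
    """Greedy decoding: return the id with the highest logit.

    Softmax is monotonic, so the argmax over logits equals the
    argmax over probabilities; no normalization is needed.
    """
    return max(range(len(last_logits)), key=lambda i: last_logits[i])

print(next_token_id([0.1, 2.3, -0.5, 1.7]))  # 1
```

In practice you would feed `logits[0, -1]` from the forward pass above into such a function, append the chosen id, and repeat.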
Train TrorYongGPT
You can check out the notebook below to train your own small language model.
I would like to highlight that you can use your own tokenizer to train TrorYongGPT, and I recommend doing so for the Khmer language.
I also have a video about training TrorYongGPT below.
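Training a GPT-style model typically minimizes the cross-entropy between the predicted next-token distribution and the actual next token (assumed here to be TrorYongGPT's objective as well, since that is standard for GPT-2 re-implementations). A stdlib sketch of the per-position loss:

```python
import math

def cross_entropy(logits, target_id):
    """Next-token cross-entropy: -log softmax(logits)[target_id].

    Uses the log-sum-exp trick (subtracting the max) for stability.
    """
    m = max(logits)
    log_z = m + math.log(sum(math.exp(x - m) for x in logits))
    return log_z - logits[target_id]

# Uniform logits over 2 classes -> loss = ln 2
print(round(cross_entropy([0.0, 0.0], 0), 4))  # 0.6931
```

The training loss averages this quantity over every position in the batch; confident correct predictions drive it toward zero.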
Inference
We also provide a generate function for text completion.
```python
import tiktoken
import torch

from tror_yong_lm import TrorYongConfig, TrorYongGPT, generate

tokenizer = tiktoken.get_encoding('tokenizer/used/to/train/your/model')
config = TrorYongConfig(
    n_vocab=tokenizer.n_vocab,
    ...
)
model = TrorYongGPT(config)

best_model_params_path = "path/to/your/weights.pt"
model.load_state_dict(torch.load(best_model_params_path))

sentence = 'Once upon a time,'

# streaming
for text in generate(model, tokenizer, sentence, stream=True):
    print(text, end='', flush=True)

# or no stream
result_text = generate(model, tokenizer, sentence)
print(result_text)
```
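The README does not specify which decoding strategy generate uses internally; a common choice besides greedy argmax is temperature sampling, sketched below with hypothetical parameter names (this is not the package's API):

```python
import math
import random

def sample_token(logits, temperature=1.0, rng=random):
    """Sample a token id from softmax(logits / temperature)."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)                               # stabilize the exponentials
    weights = [math.exp(x - m) for x in scaled]
    total = sum(weights)
    probs = [w / total for w in weights]
    r = rng.random()                              # inverse-CDF sampling
    acc = 0.0
    for i, p in enumerate(probs):
        acc += p
        if r < acc:
            return i
    return len(probs) - 1                         # guard against rounding

print(sample_token([10.0, 0.0], temperature=0.5, rng=random.Random(0)))  # 0
```

Lower temperatures sharpen the distribution toward greedy decoding; higher ones make completions more varied.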
TODO:
- implement model with KV cache
TrorYongGPT - Colab notebook for training
TrorYongGPT - benchmarking
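The KV cache on the roadmap avoids recomputing keys and values for already-generated tokens: each decoding step appends its key/value to a cache and attends over it. A toy sketch of the idea with scalar "vectors" (illustrative only, not the planned implementation):

```python
import math

def attend(q, ks, vs):
    """Toy scalar single-head attention over cached keys and values."""
    scores = [q * k for k in ks]                  # dot products (scalar dims)
    m = max(scores)
    weights = [math.exp(s - m) for s in scores]   # unnormalized softmax
    z = sum(weights)
    return sum(w * v for w, v in zip(weights, vs)) / z

# Incremental decoding: each step appends its key/value to the cache,
# so earlier keys/values are never recomputed.
k_cache, v_cache = [], []
out = 0.0
for step_k, step_v, step_q in [(0.1, 1.0, 0.5), (0.4, 2.0, 0.2)]:
    k_cache.append(step_k)
    v_cache.append(step_v)
    out = attend(step_q, k_cache, v_cache)
print(out)
```

Without a cache, generating T tokens recomputes attention inputs for all previous positions at every step; with one, each step only processes the newest token.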
File details
Details for the file tror_yong_lm-0.0.3.tar.gz.
File metadata
- Download URL: tror_yong_lm-0.0.3.tar.gz
- Upload date:
- Size: 12.1 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.12.10
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | `977a48101a9cea97437c7ca3f303dd9f4769f9a1c819d119c81194ec5a8ab843` |
| MD5 | `e255081c8630f09c20e22f012ee2cc34` |
| BLAKE2b-256 | `7391193f8b5e7bc9fe7cdfde5e78e681cf2c8940ab65864176b64313d74930cf` |