
Small Language Model with RoPE


TrorYong Language Model

TrorYongGPT, a Small Language Model with Rotary Positional Embeddings (RoPE), is a re-implementation of OpenAI's GPT-2.

TrorYong (ត្រយ៉ង) is the Khmer word for the giant ibis, the bird that symbolises Cambodia.
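As a quick refresher on the idea in the name: rotary positional embeddings encode a token's position by rotating consecutive pairs of query/key dimensions through position-dependent angles. The sketch below illustrates the rotation itself in plain Python; it is not the package's internal implementation, and the function name is hypothetical.

```python
import math

def rope_rotate(vec, pos, base=10000.0):
    """Rotate consecutive (even, odd) pairs of `vec` by position-dependent
    angles, as in rotary positional embeddings (illustrative sketch)."""
    dim = len(vec)
    out = [0.0] * dim
    for i in range(0, dim, 2):
        theta = pos * base ** (-i / dim)  # lower pairs rotate faster
        c, s = math.cos(theta), math.sin(theta)
        x, y = vec[i], vec[i + 1]
        out[i] = x * c - y * s
        out[i + 1] = x * s + y * c
    return out

# Rotation preserves the vector's norm, and position 0 leaves it unchanged.
v = [1.0, 0.0, 0.5, -0.5]
assert rope_rotate(v, 0) == v
```

Because the rotation angle depends only on position, the dot product between a rotated query and key depends only on their relative offset, which is the property RoPE is designed for.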

Support My Work

While this work comes truly from the heart, each project represents a significant investment of time -- from deep-dive research and code preparation to the final narrative and editing process. I am incredibly passionate about sharing this knowledge, but maintaining this level of quality is a major undertaking. If you find my work helpful and are in a position to do so, please consider supporting my work with a donation. You can click here to donate or scan the QR code below. Your generosity acts as a huge encouragement and helps ensure that I can continue creating in-depth, valuable content for you.

Using a Cambodian bank account, you can donate by scanning my ABA QR code here (or click here; make sure the receiver's name is 'Khun Kim Ang').

Installation

You can install tror-yong-lm with pip as follows:

pip install tror-yong-lm

Usage

Loading tokenizer

TrorYongGPT is a small language model that you can train from scratch. To that end, you can pair TrorYongGPT with your own tokenizer. Just make sure that the tokenizer used for training and the tokenizer used for inference are the same.

For example, you can use a tokenizer from OpenAI's tiktoken as follows:

import tiktoken

tokenizer = tiktoken.get_encoding('gpt2')
print(tokenizer.n_vocab)

When preparing a dataset to train TrorYongGPT, you just need to transform the text into token IDs using the tokenizer:

sentence = 'Cambodia needs peace.'
token_ids = tokenizer.encode(sentence)

Loading TrorYongGPT model

import torch
from tror_yong_lm import TrorYongGPT, TrorYongConfig
config = TrorYongConfig(
    n_vocab=tokenizer.n_vocab, # use the tokenizer's vocab size
    n_ctx=64,
    n_layer=4,
    n_head=6,
    n_kv_head=6,
    n_state=384,
)
model = TrorYongGPT(config)
token_ids = [100, 103, 104] # suppose we have these token IDs
torch_arr = torch.tensor([token_ids], dtype=torch.long) # (B, T) = (1, 3)
logits = model(torch_arr) # (B, T, n_vocab) = (1, 3, n_vocab)
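The logits at the last time step score every vocabulary token as the next-token candidate. Greedy decoding simply takes the argmax; the sketch below shows this in plain Python, independent of the package (the function name is illustrative).

```python
def greedy_next_token(last_logits):
    """Pick the highest-scoring token id from the logits of the last position."""
    return max(range(len(last_logits)), key=lambda i: last_logits[i])

# Toy logits over a 4-token vocabulary: token 1 has the highest score.
logits_last = [0.1, 2.3, -0.5, 1.7]
assert greedy_next_token(logits_last) == 1
```

In practice you would apply this (or a sampling strategy) to `logits[0, -1, :]` from the model's output tensor.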

Train TrorYongGPT

You can check out the notebook below to train your own Small Language Model. I would like to highlight that you can use your own tokenizer to train TrorYongGPT, and I recommend doing so for the Khmer language.

Open in Colab
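For context on what training optimizes: a GPT-style model is trained to minimize the cross-entropy between its per-position logits and the shifted target tokens. The sketch below computes that loss in pure Python for one sequence; it is an illustration of the objective, not the notebook's training code.

```python
import math

def next_token_loss(logits, targets):
    """Mean cross-entropy between per-position logits and target token ids,
    i.e. the usual next-token training objective (illustrative sketch)."""
    total = 0.0
    for row, t in zip(logits, targets):
        m = max(row)  # subtract the max for numerical stability
        log_z = m + math.log(sum(math.exp(x - m) for x in row))
        total += log_z - row[t]  # -log softmax(row)[t]
    return total / len(targets)

# A model that puts nearly all its weight on the right token has loss near 0.
loss = next_token_loss([[10.0, 0.0], [0.0, 10.0]], [0, 1])
assert loss < 0.01
```

In a real training loop this corresponds to `torch.nn.functional.cross_entropy` applied to the flattened logits and shifted targets.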

I also have a video about training TrorYongGPT below.

Watch the video

Inference

We also provide a generate function for text completion.

import tiktoken
import torch
from tror_yong_lm import TrorYongConfig, TrorYongGPT, generate

tokenizer = tiktoken.get_encoding('tokenizer/used/to/train/your/model')  # replace with the encoding used to train your model, e.g. 'gpt2'

config = TrorYongConfig(
    n_vocab=tokenizer.n_vocab,
    ...
)
model = TrorYongGPT(config)
best_model_params_path = "path/to/your/weights.pt"
model.load_state_dict(torch.load(best_model_params_path))

sentence = 'Once upon a time,'
# streaming
for text in generate(model, tokenizer, sentence, stream=True):
    print(text, end='', flush=True)

# or no stream
result_text = generate(model, tokenizer, sentence)
print(result_text)
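For reference, stochastic text completion is typically driven by temperature sampling over the softmax of the last-position logits. The sketch below shows the idea in plain Python; it is an assumption about how generation commonly works, not the package's `generate` internals, and the function name is hypothetical.

```python
import math
import random

def sample_token(logits, temperature=1.0, rng=random):
    """Sample a token id from softmax(logits / temperature).

    Lower temperatures sharpen the distribution toward the argmax;
    higher temperatures flatten it toward uniform.
    """
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    r = rng.random() * total  # pick a point on the cumulative mass
    acc = 0.0
    for i, e in enumerate(exps):
        acc += e
        if acc >= r:
            return i
    return len(exps) - 1

# With strongly peaked logits, sampling almost surely returns the argmax.
assert sample_token([100.0, 0.0]) == 0
```

An autoregressive loop would repeatedly append the sampled token to the context and feed it back through the model until an end condition is met.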

TODO:

  • implement KV cache for TrorYongGPT
  • Colab notebook for training TrorYongGPT
  • benchmarking
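For context on the first TODO item: a KV cache stores the keys and values already computed for the prefix, so each new token only attends over cached entries instead of recomputing the whole sequence. A minimal data-structure sketch (class and method names are hypothetical, not the planned API):

```python
class KVCache:
    """Per-layer store of past keys/values so autoregressive generation
    can reuse them instead of recomputing attention over the full prefix
    (conceptual sketch only)."""

    def __init__(self, n_layer):
        self.keys = [[] for _ in range(n_layer)]
        self.values = [[] for _ in range(n_layer)]

    def append(self, layer, k, v):
        """Add this step's key/value for one layer; return the full history."""
        self.keys[layer].append(k)
        self.values[layer].append(v)
        return self.keys[layer], self.values[layer]

# One decoding step: layer 0 caches its key/value and attends over history.
cache = KVCache(n_layer=2)
past_k, past_v = cache.append(0, k=[0.1, 0.2], v=[0.3, 0.4])
```

In a real implementation the cached entries would be tensors concatenated along the sequence dimension, with eviction once `n_ctx` is exceeded.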
