
Small Language Model with RoPE


TrorYong Language Model

TrorYongGPT, a small language model with Rotary Positional Embeddings (RoPE), is a re-implementation of OpenAI's GPT-2.
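As a brief illustration of the idea behind rotary positional embeddings (a generic sketch, not the code inside TrorYongGPT): RoPE rotates each consecutive pair of feature dimensions by an angle proportional to the token's position, so the dot product between a rotated query and key depends only on their relative offset.

```python
import math

def rope_rotate(vec, pos, base=10000.0):
    """Apply a rotary positional embedding to one vector.

    Each pair of dimensions (2i, 2i+1) is rotated by the angle
    pos / base**(i / d): the angle grows with the token position
    and shrinks for higher feature dimensions.
    """
    d = len(vec)
    out = []
    for i in range(0, d, 2):
        theta = pos / (base ** (i / d))
        cos_t, sin_t = math.cos(theta), math.sin(theta)
        x, y = vec[i], vec[i + 1]
        out.append(x * cos_t - y * sin_t)
        out.append(x * sin_t + y * cos_t)
    return out

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

# Key property: the score between rotated q and k depends only on the
# relative offset (here 2 in both cases), not on absolute positions.
q = [1.0, 0.0, 0.5, 0.5]
k = [0.0, 1.0, 0.5, -0.5]
d1 = dot(rope_rotate(q, 3), rope_rotate(k, 1))
d2 = dot(rope_rotate(q, 10), rope_rotate(k, 8))
assert abs(d1 - d2) < 1e-9
```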

Installation

You can install tror-yong-lm with pip:

pip install tror-yong-lm

Usage

Loading tokenizer

TrorYongGPT is a small language model that you can train from scratch. To that end, you can pair TrorYongGPT with your own tokenizer. Just make sure that the tokenizer used for training and the tokenizer used for inference are the same.

For example, we can use a tokenizer from OpenAI's tiktoken as follows:

import tiktoken

tokenizer = tiktoken.get_encoding('gpt2')
print(tokenizer.n_vocab)

When preparing a dataset to train TrorYongGPT, you just need to transform the text into token IDs using the tokenizer:

sentence = 'Cambodia needs peace.'
token_ids = tokenizer.encode(sentence)
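After encoding, a common way to build training examples (a generic sketch, not a utility shipped with tror-yong-lm) is to slice the token stream into fixed-length windows of n_ctx tokens, with the targets shifted one position to the right:

```python
def make_training_pairs(token_ids, n_ctx):
    """Slice a token stream into (input, target) windows of length n_ctx.

    Each target window is the input window shifted one token to the
    right, so the model learns to predict the next token at every
    position in the window.
    """
    pairs = []
    for start in range(0, len(token_ids) - n_ctx, n_ctx):
        x = token_ids[start : start + n_ctx]
        y = token_ids[start + 1 : start + n_ctx + 1]
        pairs.append((x, y))
    return pairs

ids = list(range(10))  # stand-in for real token IDs
pairs = make_training_pairs(ids, n_ctx=4)
# first window: inputs [0, 1, 2, 3] predict targets [1, 2, 3, 4]
```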

Loading TrorYongGPT model

import torch
from tror_yong_lm import TrorYongGPT, TrorYongConfig
config = TrorYongConfig(
    n_vocab=tokenizer.n_vocab, # use the tokenizer's vocab size
    n_ctx=64,
    n_layer=4,
    n_head=6,
    n_kv_head=6,
    n_state=384,
)
model = TrorYongGPT(config)
token_ids = [100, 103, 104] # suppose we have these token IDs
torch_arr = torch.tensor([token_ids], dtype=torch.long) # (B, T) = (1, 3)
logits = model(torch_arr) # (B, T, n_vocab) = (1, 3, n_vocab)
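The logits at the last time step score every vocabulary entry as a candidate next token; greedy decoding simply takes the argmax. A minimal illustration with plain Python lists (in practice the scores would come from the model's output tensor):

```python
def greedy_next_token(last_logits):
    """Pick the vocabulary index with the highest logit score."""
    return max(range(len(last_logits)), key=lambda i: last_logits[i])

# toy logits over a 5-token vocabulary for the final position
last_logits = [0.1, 2.3, -1.0, 0.7, 1.5]
next_id = greedy_next_token(last_logits)  # index 1 has the highest score
```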

Train TrorYongGPT

(To be done)

Inference

We also provide a generate function for text completion.

import tiktoken
import torch
from tror_yong_lm import TrorYongConfig, TrorYongGPT, generate

tokenizer = tiktoken.get_encoding('tokenizer/used/to/train/your/model') # the same tokenizer used during training

config = TrorYongConfig(
    n_vocab=tokenizer.n_vocab,
    ...
)
model = TrorYongGPT(config)
best_model_params_path = "path/to/your/weights.pt"
model.load_state_dict(torch.load(best_model_params_path))

sentence = 'Once upon a time,'
# streaming
for text in generate(model, tokenizer, sentence, stream=True):
    print(text, end='', flush=True)

# or no stream
result_text = generate(model, tokenizer, sentence)
print(result_text)

TODO:

  • implement a KV cache for TrorYongGPT
  • Colab notebook for training TrorYongGPT
  • benchmarking
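On the KV-cache item: during autoregressive generation, each step only needs the new token's query, while the keys and values of earlier tokens can be stored and reused instead of recomputed. A minimal single-head sketch in plain Python (illustrative only, not the planned TrorYongGPT implementation):

```python
import math

def attend(q, keys, values):
    """Single-query softmax attention over a list of keys/values."""
    scale = 1.0 / math.sqrt(len(q))
    scores = [scale * sum(qi * ki for qi, ki in zip(q, k)) for k in keys]
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]  # shift for stability
    total = sum(exps)
    weights = [e / total for e in exps]
    dim = len(values[0])
    return [sum(w * v[j] for w, v in zip(weights, values)) for j in range(dim)]

class KVCache:
    """Grow-only store of past keys and values for one attention head."""
    def __init__(self):
        self.keys, self.values = [], []

    def step(self, q, k, v):
        # Append only the new token's key/value, then attend over
        # everything cached so far -- no recomputation of old steps.
        self.keys.append(k)
        self.values.append(v)
        return attend(q, self.keys, self.values)

cache = KVCache()
out1 = cache.step([1.0, 0.0], [1.0, 0.0], [2.0, 0.0])
out2 = cache.step([0.0, 1.0], [0.0, 1.0], [0.0, 3.0])
# out2 attends over both cached positions without recomputing step 1
```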
