The GeoV model is a large language model designed by Georges Harik and uses Rotary Positional Embeddings with Relative distances (RoPER). We have shared a pretrained 9B parameter model.
GeoV
Overview
The GeoV model was designed by Georges Harik and uses Rotary Positional Embeddings with Relative distances (RoPER), a method developed by Georges Harik and Varuna Jayasiri.
RoPER, in addition to using relative positions in the attention score calculation via RoPE embeddings, also adds relative positional information explicitly to the value embeddings. Specifically, it incorporates the relative positions of the tokens being attended to. RoPER has given better performance on some algorithmic tasks, and appears comparable to RoPE in language modeling.
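The value-side mechanism described above can be sketched in a few lines. The following is a minimal, illustrative single-head implementation, not the GeoV code: values are rotated by their (negated) absolute position before mixing, and the attention output is rotated back by the query position, so each value effectively arrives rotated by the relative distance between the two tokens. The function names and the simplified shapes are ours.

import torch

def rope_rotate(x, positions, base=10000):
    # Rotate consecutive pairs of channels of x (seq, dim) by angles
    # proportional to each token's position (standard RoPE rotation).
    d = x.shape[-1]
    theta = base ** (-torch.arange(0, d, 2).float() / d)      # (d/2,)
    angles = positions[:, None].float() * theta[None, :]      # (seq, d/2)
    cos, sin = torch.cos(angles), torch.sin(angles)
    x1, x2 = x[..., 0::2], x[..., 1::2]
    out = torch.empty_like(x)
    out[..., 0::2] = x1 * cos - x2 * sin
    out[..., 1::2] = x1 * sin + x2 * cos
    return out

def roper_attention(q, k, v):
    # q, k, v: (seq, dim). Causal single-head attention with RoPER values.
    seq, d = q.shape
    pos = torch.arange(seq)
    scores = rope_rotate(q, pos) @ rope_rotate(k, pos).T / d ** 0.5
    mask = torch.tril(torch.ones(seq, seq, dtype=torch.bool))
    attn = torch.softmax(scores.masked_fill(~mask, float("-inf")), dim=-1)
    # RoPER: rotate values by -j, mix, then rotate the output by i,
    # so value j reaches query i rotated by the relative distance i - j.
    out = attn @ rope_rotate(v, -pos)
    return rope_rotate(out, pos)

Because the two rotations compose, the attention output for query position i is a weighted sum of values rotated by i - j, which is how relative position information enters the values themselves rather than only the attention scores.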
The GeoV tokenizer uses a SentencePiece unigram language model and tokenizes symbols, digits and new line characters separately, in order to achieve better performance on mathematical content and code.
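One plausible reading of "separately" is that each digit, symbol, and newline becomes its own piece before unigram segmentation, so numbers and operators are never merged into opaque multi-character tokens. The regex sketch below illustrates that pre-splitting behavior; it is our assumption for illustration, not the actual GeoV tokenizer rule.

import re

def pre_split(text):
    # Hypothetical pre-tokenization sketch (assumed behavior, not GeoV's code):
    # each digit, each symbol, and each newline is its own piece, while runs
    # of letters and spaces are left for the unigram model to segment further.
    return re.findall(r"\d|\n|[^\w\s\n]|[A-Za-z]+|[ ]+", text)

For example, pre_split("ab 12+\n") yields ["ab", " ", "1", "2", "+", "\n"], keeping every digit and the "+" operator as separate pieces.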
This model was contributed by gharik and vpj.
We have shared a 9B parameter pre-trained model at GeoV/GeoV-9b. We plan to release checkpoints roughly every 20B tokens of training from here until around 300B tokens. We will also train smaller and larger versions; our aim is to make models of several sizes broadly available.
This implementation is built on top of the transformers library.
Installation
pip install geov
Generation
from geov import GeoVForCausalLM, GeoVTokenizer

# Load the pre-trained 9B model and its tokenizer from the Hugging Face Hub
model = GeoVForCausalLM.from_pretrained("GeoV/GeoV-9b")
tokenizer = GeoVTokenizer.from_pretrained("GeoV/GeoV-9b")

prompt = "In mathematics, topology is the study of"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids

# Sample a continuation of up to 100 tokens
gen_tokens = model.generate(
    input_ids,
    do_sample=True,
    temperature=0.9,
    max_length=100,
)
gen_text = tokenizer.batch_decode(gen_tokens)[0]