AI21 Labs Tokenizer
A SentencePiece-based tokenizer for production use with AI21's models
Installation
pip
pip install ai21-tokenizer
poetry
poetry add ai21-tokenizer
Usage
Tokenizer Creation
Jamba Tokenizer
from ai21_tokenizer import Tokenizer, PreTrainedTokenizers
tokenizer = Tokenizer.get_tokenizer(PreTrainedTokenizers.JAMBA_INSTRUCT_TOKENIZER)
# Your code here
Another way would be to use our Jamba tokenizer directly:
from ai21_tokenizer import JambaInstructTokenizer
model_path = "<Path to your vocabs file>"
tokenizer = JambaInstructTokenizer(model_path=model_path)
# Your code here
Async usage
from ai21_tokenizer import Tokenizer, PreTrainedTokenizers
tokenizer = await Tokenizer.get_async_tokenizer(PreTrainedTokenizers.JAMBA_INSTRUCT_TOKENIZER)
# Your code here
Another way would be to use our async Jamba tokenizer's create class method:
from ai21_tokenizer import AsyncJambaInstructTokenizer
model_path = "<Path to your vocabs file>"
tokenizer = await AsyncJambaInstructTokenizer.create(model_path=model_path)
# Your code here
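The async tokenizers are built through a create class method rather than the constructor because Python's __init__ cannot be async: an awaitable factory performs any asynchronous setup (such as loading a vocabulary file) first and only then constructs the instance. A minimal standalone sketch of that pattern, with hypothetical names that are not part of ai21-tokenizer:

```python
import asyncio

class AsyncVocabLoader:
    """Hypothetical example of the awaitable-factory pattern."""

    def __init__(self, vocab):
        self.vocab = vocab

    @classmethod
    async def create(cls, path):
        # Real code would await async file or network I/O here.
        await asyncio.sleep(0)
        return cls(vocab={"apple": 0, "orange": 1})

async def main():
    loader = await AsyncVocabLoader.create("vocab.model")
    return loader.vocab

vocab = asyncio.run(main())
print(vocab)
```

This is why the async examples use `await AsyncJambaInstructTokenizer.create(...)` instead of calling the class directly.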
J2 Tokenizer
from ai21_tokenizer import Tokenizer
tokenizer = Tokenizer.get_tokenizer()
# Your code here
Another way would be to use our Jurassic tokenizer directly:
from ai21_tokenizer import JurassicTokenizer
model_path = "<Path to your vocabs file. This is usually a binary file that ends with .model>"
config = {}  # dictionary object of your config.json file
tokenizer = JurassicTokenizer(model_path=model_path, config=config)
Async usage
from ai21_tokenizer import Tokenizer
tokenizer = await Tokenizer.get_async_tokenizer()
# Your code here
Another way would be to use our async Jurassic tokenizer's create class method:
from ai21_tokenizer import AsyncJurassicTokenizer
model_path = "<Path to your vocabs file. This is usually a binary file that ends with .model>"
config = {}  # dictionary object of your config.json file
tokenizer = await AsyncJurassicTokenizer.create(model_path=model_path, config=config)
# Your code here
Functions
Encode and Decode
These functions let you encode your text into a list of token IDs and decode the IDs back into plain text.
text_to_encode = "apple orange banana"
encoded_text = tokenizer.encode(text_to_encode)
print(f"Encoded text: {encoded_text}")
decoded_text = tokenizer.decode(encoded_text)
print(f"Decoded text: {decoded_text}")
Async
# Assuming you have created an async tokenizer
text_to_encode = "apple orange banana"
encoded_text = await tokenizer.encode(text_to_encode)
print(f"Encoded text: {encoded_text}")
decoded_text = await tokenizer.decode(encoded_text)
print(f"Decoded text: {decoded_text}")
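Decoding the IDs produced by encode is expected to round-trip back to the original text. The following is a toy, self-contained sketch of that invariant using a whitespace tokenizer as a stand-in; it is not the real AI21 tokenizer, and the class and names are purely illustrative:

```python
class ToyTokenizer:
    """Hypothetical stand-in: maps each whitespace-separated word to an ID."""

    def __init__(self, words):
        self._id_to_word = list(words)
        self._word_to_id = {w: i for i, w in enumerate(words)}

    def encode(self, text):
        # text -> list of token IDs
        return [self._word_to_id[w] for w in text.split()]

    def decode(self, ids):
        # list of token IDs -> text
        return " ".join(self._id_to_word[i] for i in ids)

tokenizer = ToyTokenizer(["apple", "orange", "banana"])
ids = tokenizer.encode("apple orange banana")
round_tripped = tokenizer.decode(ids)
print(ids)            # [0, 1, 2]
print(round_tripped)  # apple orange banana
```

Real subword tokenizers split text into pieces smaller than words, but the encode/decode round-trip contract is the same.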
You can also convert token IDs to tokens, and vice versa:
tokens = tokenizer.convert_ids_to_tokens(encoded_text)
print(f"IDs correspond to tokens: {tokens}")
ids = tokenizer.convert_tokens_to_ids(tokens)
Async
# Assuming you have created an async tokenizer
tokens = await tokenizer.convert_ids_to_tokens(encoded_text)
print(f"IDs correspond to tokens: {tokens}")
ids = await tokenizer.convert_tokens_to_ids(tokens)
For more examples, please see our examples folder.