A Quick Library with Llama 3.1/3.2 Tokenization

Source: https://github.com/jeffxtang/llama-tokens

If you ever wonder about:

  • the number of tokens in any large prompt or response, or
  • the exact tokens of any text

for financial cost considerations (cloud providers charge by the number of tokens), LLM reasoning issues (tokenization is a foundational component of every LLM), or just out of curiosity, the llama-tokens library is for you.

This library's code (just one class, LlamaTokenizer, with two methods, num_tokens and tokens) is extracted from the original Llama tokenization lesson (Colab link) built for the Introducing Multimodal Llama 3.2 short course on DeepLearning.AI. (Note: Llama 3.2 uses the same tokenization model as Llama 3.1.)
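
In other words, the whole API surface is those two calls. A minimal sketch of how they presumably relate (assuming, as in the Quick Start below, that the tokenizer.model file is available in the working directory):

from llama_tokens import LlamaTokenizer

tokenizer = LlamaTokenizer()
text = "Hello, world!"

# The two methods presumably agree: the count equals the length of the token list.
assert tokenizer.num_tokens(text) == len(tokenizer.tokens(text))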

Quick Start

In a terminal:

pip install llama-tokens
git clone https://github.com/jeffxtang/llama-tokens
cd llama-tokens
python test.py

You should see the output:

Text:  Hello, world!
Number of tokens:  4
Tokens:  ['Hello', ',', ' world', '!']

On Colab:

!pip install llama-tokens

# Download the Llama 3 tokenizer model file (tokenizer.model) into the working directory
!wget https://raw.githubusercontent.com/meta-llama/llama-models/main/models/llama3/api/tokenizer.model

from llama_tokens import LlamaTokenizer

# LlamaTokenizer() presumably loads tokenizer.model from the current directory
tokenizer = LlamaTokenizer()
text = "Hello, world!"
print("Text: ", text)
print("Number of tokens: ", tokenizer.num_tokens(text))
print("Tokens: ", tokenizer.tokens(text))

The same output will be generated:

Text:  Hello, world!
Number of tokens:  4
Tokens:  ['Hello', ',', ' world', '!']
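
Because cloud providers bill by token count, num_tokens doubles as a quick cost estimator. A minimal sketch (the price below is hypothetical; check your provider's rate card):

# Hypothetical price: $0.30 per million input tokens.
PRICE_PER_MILLION_TOKENS = 0.30

prompt = "Summarize the following article in three bullet points: ..."
n = tokenizer.num_tokens(prompt)
print(f"{n} tokens, estimated input cost: ${n / 1_000_000 * PRICE_PER_MILLION_TOKENS:.6f}")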

More examples

  • A long system prompt that asks Llama to generate a podcast script from a text:
SYSTEM_PROMPT = """
You are a world-class podcast producer tasked with transforming the provided input text into an engaging and informative podcast script. The input may be unstructured or messy, sourced from PDFs or web pages. Your goal is to extract the most interesting and insightful content for a compelling podcast discussion.

# Steps to Follow:

1. **Analyze the Input:**
   Carefully examine the text, identifying key topics, points, and interesting facts or anecdotes that could drive an engaging podcast conversation. Disregard irrelevant information or formatting issues.

2. **Brainstorm Ideas:**
   In the `<scratchpad>`, creatively brainstorm ways to present the key points engagingly. Consider:
   - Analogies, storytelling techniques, or hypothetical scenarios to make content relatable
   - Ways to make complex topics accessible to a general audience
   - Thought-provoking questions to explore during the podcast
   - Creative approaches to fill any gaps in the information

3. **Craft the Dialogue:**
   Develop a natural, conversational flow between the host (Jane) and the guest speaker (the author or an expert on the topic). Incorporate:
   - The best ideas from your brainstorming session
   - Clear explanations of complex topics
   - An engaging and lively tone to captivate listeners
   - A balance of information and entertainment

   Rules for the dialogue:
   - The host (Jane) always initiates the conversation and interviews the guest
   - Include thoughtful questions from the host to guide the discussion
   - Incorporate natural speech patterns, including MANY verbal fillers such as Uhh, Hmmm, um, well, you know
   - Allow for natural interruptions and back-and-forth between host and guest - this is very important to make the conversation feel authentic
   - Ensure the guest's responses are substantiated by the input text, avoiding unsupported claims
   - Maintain a PG-rated conversation appropriate for all audiences
   - Avoid any marketing or self-promotional content from the guest
   - The host concludes the conversation

4. **Summarize Key Insights:**
   Naturally weave a summary of key points into the closing part of the dialogue. This should feel like a casual conversation rather than a formal recap, reinforcing the main takeaways before signing off.

5. **Maintain Authenticity:**
   Throughout the script, strive for authenticity in the conversation. Include:
   - Moments of genuine curiosity or surprise from the host
   - Instances where the guest might briefly struggle to articulate a complex idea
   - Light-hearted moments or humor when appropriate
   - Brief personal anecdotes or examples that relate to the topic (within the bounds of the input text)

6. **Consider Pacing and Structure:**
   Ensure the dialogue has a natural ebb and flow:
   - Start with a strong hook to grab the listener's attention
   - Gradually build complexity as the conversation progresses
   - Include brief "breather" moments for listeners to absorb complex information
   - For complicated concepts, reasking similar questions framed from a different perspective is recommended
   - End on a high note, perhaps with a thought-provoking question or a call-to-action for listeners

IMPORTANT RULE:
1. Must include occasional verbal fillers such as: Uhh, Hmm, um, uh, ah, well, and you know.
2. Each line of dialogue should be no more than 100 characters (e.g., can finish within 5-8 seconds)

Remember: Always reply in valid JSON format, without code blocks. Begin directly with the JSON output.
"""

print("Text: ", SYSTEM_PROMPT)
print("Number of tokens: ", tokenizer.num_tokens(SYSTEM_PROMPT))
print("Tokens: ", tokenizer.tokens(SYSTEM_PROMPT))

  • A letter-counting question that often trips up LLMs (the sketch after this example shows why):
text = "How many r's in the word strawberry?"

print("Text: ", text)
print("Number of tokens: ", tokenizer.num_tokens(text))
print("Tokens: ", tokenizer.tokens(text))

Download files

Download the file for your platform.

Source distribution: llama_tokens-0.0.3.tar.gz (5.6 kB)

Built distribution: llama_tokens-0.0.3-py3-none-any.whl (5.8 kB)

File details

Details for the file llama_tokens-0.0.3.tar.gz.

File metadata

  • Download URL: llama_tokens-0.0.3.tar.gz
  • Size: 5.6 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/5.1.1 CPython/3.10.15

File hashes

  • SHA256: 087d93132c4b783420f5ac641ab7efa1c7d61b24c8913d39e063b57c4682f7b4
  • MD5: 51d6dc951fe44eab8242d09442be3147
  • BLAKE2b-256: 650421a75afaab4a392e8b28b10bb711cfede5b491ca94ab4013e3ad754e97df

File details

Details for the file llama_tokens-0.0.3-py3-none-any.whl.

File metadata

  • Download URL: llama_tokens-0.0.3-py3-none-any.whl
  • Size: 5.8 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/5.1.1 CPython/3.10.15

File hashes

  • SHA256: 64fde296a24cc0c587d8cd23c10ee9f6d7aa99f5c779b9919a384889029169b9
  • MD5: bc5049d36f38aaba3a16c94f572fc70a
  • BLAKE2b-256: 773512f5e6d266ed4f94431e4f53ac69f011e25f021c5cced206441559980d04
