# LLM Sampler

Microlib for sampling from an LLM

## Quick example
```python
import torch

from llm_sampler import sample

# Initialize forward_func.
# This can be any function that returns logits when given input tokens,
# e.g. Hugging Face models, LLaMA, Falcon, etc.
forward_func = load_model()
input_ids = tokenize_input("Magnus Carlsen had won the World ")  # Tokenize the input
max_new_tokens = 10  # Number of new tokens to generate

generated_tokens = sample(
    forward_func=forward_func,
    input_ids=input_ids,
    max_new_tokens=max_new_tokens,
    temperature=0.6,
    warp_top_k=10,
)
for next_token in generated_tokens:
    print("Next token:", next_token)
```
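Conceptually, `temperature` and `warp_top_k` reshape the next-token distribution before sampling: logits are divided by the temperature, everything outside the k most likely tokens is masked out, and a token is drawn from the resulting softmax. A minimal plain-Python sketch of that idea (not the library's actual implementation; the helper names here are illustrative):

```python
import math
import random

def top_k_warp(logits, k):
    """Keep the k largest logits; set the rest to -inf so softmax zeroes them."""
    threshold = sorted(logits, reverse=True)[k - 1]
    return [l if l >= threshold else float("-inf") for l in logits]

def sample_next_token(logits, temperature=0.6, top_k=10):
    """Temperature-scale logits, apply top-k masking, then sample from the softmax."""
    scaled = [l / temperature for l in logits]
    warped = top_k_warp(scaled, top_k)
    m = max(warped)                              # finite, since k >= 1
    exps = [math.exp(l - m) for l in warped]     # exp(-inf) == 0.0, masked tokens drop out
    total = sum(exps)
    probs = [e / total for e in exps]
    return random.choices(range(len(logits)), weights=probs, k=1)[0]
```

Lower temperatures sharpen the distribution toward the argmax; smaller `top_k` values restrict sampling to fewer candidates.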
## Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

### Source Distribution

llm_sampler-0.1.0.tar.gz (3.4 kB)

### Built Distribution

llm_sampler-0.1.0-py3-none-any.whl
### Hashes for llm_sampler-0.1.0-py3-none-any.whl

| Algorithm | Hash digest |
|---|---|
| SHA256 | cc20ec2a5e38a0fa42f5a1e7296952814ecb1ae4b57443645f3be02ddfefec1b |
| MD5 | 755c4b19437a19adc56b74afee74d3ab |
| BLAKE2b-256 | a2062850b325cd78bfaf1d4b591acb17ade45d5ef469466c9f2f1e441b05a315 |
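A downloaded file can be checked against the published digest locally with Python's standard `hashlib` module. A small sketch (`verify_sha256` is an illustrative helper, not part of llm_sampler):

```python
import hashlib

def verify_sha256(path, expected_hex):
    """Return True if the file's SHA256 digest matches the expected hex string."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        # Read in chunks so large wheels don't need to fit in memory.
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest() == expected_hex.lower()

# Usage, with the SHA256 digest from the table above:
# verify_sha256("llm_sampler-0.1.0-py3-none-any.whl",
#               "cc20ec2a5e38a0fa42f5a1e7296952814ecb1ae4b57443645f3be02ddfefec1b")
```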