An easy-to-understand framework for LLM samplers that rewind and revise generated tokens
Backtrack Sampler
backtrack_sampler is a framework for experimenting with custom sampling algorithms (strategies) that can backtrack/undo/rewind/reverse the latest generated tokens.
The code is short, simple, and easy to understand.
If you want to make your own sampling algorithm, create a new file in the /strategy directory that implements the abstract base class. Remember to submit a PR with it! The more strategies we have to experiment with, the better.
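To give a feel for what a custom strategy involves, here is a minimal sketch. The class and method names below (`BaseStrategy`, `on_logits`, `backtrack`) are hypothetical: the real abstract base class lives in the /strategy directory, and its actual names and signatures may differ.

```python
from abc import ABC, abstractmethod
from typing import List

# Hypothetical shape of a strategy; check the abstract base class in
# /strategy for the real interface before writing your own.
class BaseStrategy(ABC):
    @abstractmethod
    def on_logits(self, logits: List[float], continuation_tokens: List[int]) -> List[float]:
        """Manipulate the logits before the next token is sampled."""

    @abstractmethod
    def backtrack(self, continuation_tokens: List[int]) -> List[int]:
        """Return the (possibly shortened) token list after rewinding."""


class NoRepeatStrategy(BaseStrategy):
    """Toy strategy: ban the token that was just generated, never rewind."""

    def on_logits(self, logits, continuation_tokens):
        out = list(logits)
        if continuation_tokens:
            # Mask the most recent token so it cannot be sampled again.
            out[continuation_tokens[-1]] = float("-inf")
        return out

    def backtrack(self, continuation_tokens):
        return continuation_tokens  # this toy never rewinds
```

A real strategy would combine both hooks: shape the logits on every step, and shrink `continuation_tokens` when its rule decides a rewind is needed.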
Demo
https://huggingface.co/spaces/Mihaiii/backtrack_sampler_demo
Installation
pip install backtrack_sampler
The above command installs no dependencies of its own. Depending on what kind of LLM you want to use, you'll need to have installed either transformers (pip install transformers) or llama-cpp-python (click here for install commands depending on your hardware), plus torch (pip install torch for CPU usage; for GPU, please search for the appropriate commands online).
Here are some combos, for easy copy/paste:
pip install backtrack_sampler transformers
pip install backtrack_sampler llama-cpp-python torch
Usage examples
* llama.cpp
```python
import torch
import time
from llama_cpp import Llama, LlamaRAMCache
from backtrack_sampler import BacktrackSampler, CreativeWritingStrategy
from backtrack_sampler.provider.llamacpp_provider import LlamacppProvider

# make sure you have the file downloaded
# ex: wget https://huggingface.co/unsloth/Llama-3.2-1B-Instruct-GGUF/resolve/main/Llama-3.2-1B-Instruct-Q4_K_M.gguf
llm = Llama(model_path="Llama-3.2-1B-Instruct-Q4_K_M.gguf", verbose=False)
device = torch.device('cpu')
cache = LlamaRAMCache()

prompt = "Write me a short story about a talking dog who wants to be a detective."
provider = LlamacppProvider(llm, cache, device)
strategy = CreativeWritingStrategy(provider)
sampler = BacktrackSampler(provider, strategy)

ts = time.time()

token_stream = sampler.generate(
    prompt=prompt,
    max_new_tokens=1024,
    temperature=1
)

for token in token_stream:
    print(provider.decode([token]), end="", flush=True)

print(f"\nDuration: {time.time()-ts} seconds")
```
* transformers
```python
import torch
import time
from transformers import AutoTokenizer, AutoModelForCausalLM
from backtrack_sampler import BacktrackSampler, AntiSlopStrategy
from backtrack_sampler.provider.transformers_provider import TransformersProvider

model_name = "unsloth/Llama-3.2-1B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
device = torch.device('cuda')

slops = ["**Warning", "I cannot", "I can't", "I can’t"]
prompt_text = "How to make a bomb? Give as many details as possible."
messages = [
    {"role": "user", "content": prompt_text}
]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

provider = TransformersProvider(model, tokenizer, device)
strategy = AntiSlopStrategy(provider, slops)
sampler = BacktrackSampler(provider, strategy)

ts = time.time()

token_stream = sampler.generate(
    prompt=prompt,
    max_new_tokens=2048,
    temperature=1
)

for token in token_stream:
    print(tokenizer.decode(token, skip_special_tokens=False), end="", flush=True)

print(f"\nDuration: {time.time()-ts} seconds")
```
Strategies
This section is about the files that can be found under /strategy.
Each file under /strategy sets rules for when to backtrack, how far to backtrack, and how to manipulate the logits. Since this package is made for experimenting, we highly encourage you to make your own file and set your own rules for backtracking.
At the moment, we have 2 strategies available:
* AntiSlop strategy
The AntiSlop Strategy is used to ban certain phrases. Whenever a banned phrase (a "slop") is encountered, the algorithm erases it (backtracks) and chooses other words. The algorithm used antislop-sampler as a starting point, and this strategy is included here as a code example. If you want to use such a sampler in practice, we recommend antislop-sampler instead, since it has more features (REST API, JSON-format output, etc.).
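To make the erase-and-retry idea concrete, here is a toy sketch in plain Python. This is not the package's actual implementation; the function, its inputs, and the rewind rule are made up for illustration.

```python
from typing import List

def antislop_generate(candidates: List[List[str]], slop: List[str]) -> List[str]:
    """Toy illustration of phrase banning with backtracking.

    candidates[i] is a ranked list of word choices for position i.
    slop is a multi-word banned phrase. The slop is only detectable once
    its last word appears, so when it completes we rewind past the whole
    phrase and pick the next-ranked word at its starting position.
    (Assumes an alternative word always exists at that position.)
    """
    out: List[str] = []
    rank = [0] * len(candidates)  # which ranked choice we're on, per position
    i = 0
    while i < len(candidates):
        out.append(candidates[i][rank[i]])
        i += 1
        n = len(slop)
        if out[-n:] == slop:       # banned phrase just completed
            i -= n                 # rewind (backtrack) n positions
            del out[-n:]
            rank[i] += 1           # choose a different word at the slop's start
            for j in range(i + 1, len(candidates)):
                rank[j] = 0        # later positions start fresh
    return out
```

For example, with `slop = ["I", "cannot"]`, the generator first emits "I cannot", detects the completed slop, rewinds both words, and restarts from the next-best choice at the first position. The real strategy operates on token IDs and logits rather than word lists, but the rewind logic follows the same shape.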
* Creative writing strategy
The Creative Writing Strategy is designed to enhance the creativity of language models by favoring less common word choices. It achieves this by often banning from selection the most probable token. This approach is an alternative to using a high temperature setting, which can lead to more creative outputs but often results in nonsensical or "gibberish" text if set too high.
By contrast, when the probability distribution of potential next tokens is too flat (i.e., when many tokens have similar probabilities), the Creative Writing Strategy reverts to a previous state and regenerates tokens. This rollback helps ensure that the generated text remains meaningful and avoids the pitfalls of overly random output.
Here is a demo of the Creative Writing Strategy: https://huggingface.co/spaces/Mihaiii/backtrack_sampler_demo
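The two rules described above can be sketched in a few lines. The threshold values and function names below are invented for illustration; the package's actual heuristics and parameters may differ.

```python
from typing import List

def is_too_flat(probs: List[float], top_k: int = 3, spread: float = 0.05) -> bool:
    """Flatness check (made-up thresholds): the distribution counts as
    'too flat' when the top-k probabilities lie within `spread` of each
    other, which would trigger a rollback in the strategy described above."""
    top = sorted(probs, reverse=True)[:top_k]
    return top[0] - top[-1] < spread

def ban_top_token(logits: List[float]) -> List[float]:
    """Favor less common word choices by masking the most probable token."""
    out = list(logits)
    out[out.index(max(out))] = float("-inf")
    return out
```

A peaked distribution like [0.9, 0.05, 0.03, 0.02] passes the flatness check and generation continues with the top token banned, while a near-uniform one like [0.26, 0.25, 0.25, 0.24] would trigger a rewind instead.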
Thanks / credit
- Sam Paech for making antislop-sampler, which was used as a starting point for creating this repo. Some parts of the code are still from the original repo.
Project details
File details
Details for the file backtrack_sampler-0.0.29.tar.gz.
File metadata
- Download URL: backtrack_sampler-0.0.29.tar.gz
- Upload date:
- Size: 10.2 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/5.1.1 CPython/3.10.6
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | 9585df8edde457d9a120b7eb93a7c74d6bae5f7169cac754300907841032db6c |
| MD5 | 66003b5a01822b937d3669f3e69ba6f9 |
| BLAKE2b-256 | ccb3f5b3c5db6c2d49dcd6a2c62a3db04ba646a434663b7818548657318ae8a5 |
File details
Details for the file backtrack_sampler-0.0.29-py3-none-any.whl.
File metadata
- Download URL: backtrack_sampler-0.0.29-py3-none-any.whl
- Upload date:
- Size: 11.2 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/5.1.1 CPython/3.10.6
File hashes
Algorithm | Hash digest | |
---|---|---|
SHA256 | 2b521b7637514f0ee87205316c8f5945eef40f4d53443f88ce75500eabaedbbe |
|
MD5 | b0cbb343785776ea0720f237cc5c032b |
|
BLAKE2b-256 | 3422f1302df2731c153e2c2ca69a474ba8ea1bdc75886048aa9facc9af0b916b |