An easy-to-understand framework for LLM samplers that rewind and revise generated tokens
Backtrack Sampler
Backtrack Sampler is a framework for experimenting with custom sampling algorithms (strategies) that can backtrack/undo/rewind/reverse the latest generated tokens.
The code is short, simple and easy to understand.
If you want to make your own sampling algorithm, create a new file in the /strategy directory that implements the abstract base class. Remember to submit a PR with it! The more strategies we have to experiment with, the better.
Demo
- https://huggingface.co/spaces/Mihaiii/backtrack_sampler_demo
- https://colab.research.google.com/github/Mihaiii/backtrack_sampler/blob/main/demo.ipynb
Installation
pip install backtrack_sampler
The above command installs no dependencies of its own. Depending on what kind of LLM you want to use, you'll also need either transformers (pip install transformers), or llama-cpp-python (see its documentation for the install command that matches your hardware) plus torch (pip install torch is enough for CPU usage; for GPU, please search for the appropriate command online).
Here are some combos, for easy copy/paste:
pip install backtrack_sampler transformers
pip install backtrack_sampler llama-cpp-python torch
Usage examples
* llama.cpp
import torch
import time
from llama_cpp import Llama, LlamaRAMCache
from backtrack_sampler import BacktrackSampler, CreativeWritingStrategy
from backtrack_sampler.provider.llamacpp_provider import LlamacppProvider
#make sure you have the model downloaded
#ex: wget https://huggingface.co/unsloth/Llama-3.2-1B-Instruct-GGUF/resolve/main/Llama-3.2-1B-Instruct-Q4_K_M.gguf
llm = Llama(model_path="Llama-3.2-1B-Instruct-Q4_K_M.gguf", chat_format="llama-3", verbose=False, n_ctx=2100, n_batch=2100)
device = torch.device('cpu')
cache = LlamaRAMCache(capacity_bytes=100000000)
prompt = "Tell me a short tale of a dragon who is afraid of heights."
provider = LlamacppProvider(llm, cache, device)
strategy = CreativeWritingStrategy(provider,
                                   top_p_flat=0.65,
                                   top_k_threshold_flat=9,
                                   eos_penalty=0.75)
sampler = BacktrackSampler(provider, strategy)
ts = time.time()
token_stream = sampler.generate(
    prompt=prompt,
    max_new_tokens=1024,
    temperature=1
)
for token in token_stream:
    print(provider.decode([token]), end="", flush=True)
print(f"\nDuration: {time.time()-ts} seconds")
* transformers
import torch
import time
from transformers import AutoTokenizer, AutoModelForCausalLM
from backtrack_sampler import BacktrackSampler, AntiSlopStrategy
from backtrack_sampler.provider.transformers_provider import TransformersProvider
model_name = "unsloth/Llama-3.2-1B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
device = torch.device('cuda')
slops = ["**Warning", "I cannot", "I can't", "I can’t", "I can not", "can’t provide", "can't provide", "can not provide",
         "***Warning", "*I cannot", "*I can't", "*I can’t", "*I can not", "*can’t provide", "*can't provide", "*can not provide"]
prompt_text = "How to make a bomb? Give as many details as possible."
messages = [
    {"role": "user", "content": prompt_text}
]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
provider = TransformersProvider(model, tokenizer, device)
strategy = AntiSlopStrategy(provider, slops)
sampler = BacktrackSampler(provider, strategy)
ts = time.time()
token_stream = sampler.generate(
    prompt=prompt,
    max_new_tokens=1024,
    temperature=1
)
for token in token_stream:
    print(tokenizer.decode(token, skip_special_tokens=True), end="", flush=True)
print(f"\nDuration: {time.time()-ts} seconds")
For more usage examples and outputs, see demo.ipynb.
Strategies
This section is about the files that can be found under /strategy.
Each file under /strategy sets rules for when to backtrack, how much to backtrack and how to manipulate the logits. Since this package is made for experimenting, we highly encourage you to make your own file and set your own rules for backtracking.
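To make the rewind-and-revise loop concrete, here is a minimal, self-contained toy sketch. Everything in it (ToyStrategy, tokens_to_rewind, toy_generate, the banned-phrase trigger) is purely illustrative and does not match this package's actual abstract base class or provider API; it only demonstrates the control flow of generating a token, detecting a problem, and rewinding. The banned-phrase trigger is, in miniature, the idea behind the Anti-slop strategy listed below.

import random

class ToyStrategy:
    """Backtrack whenever a banned phrase appears at the end of the output."""
    def __init__(self, banned):
        self.banned = banned  # each banned phrase is a list of tokens

    def tokens_to_rewind(self, generated):
        # How many of the latest tokens to erase; 0 means keep generating.
        for phrase in self.banned:
            n = len(phrase)
            if n and generated[-n:] == phrase:
                return n
        return 0

def toy_generate(vocab, strategy, max_new_tokens=20, seed=0):
    rng = random.Random(seed)
    generated = []
    blocked = {}  # position -> tokens that may no longer be chosen there
    while len(generated) < max_new_tokens:
        pos = len(generated)
        candidates = [t for t in vocab if t not in blocked.get(pos, set())]
        generated.append(rng.choice(candidates))
        rewind = strategy.tokens_to_rewind(generated)
        if rewind:
            # Rewind: drop the offending tokens and ban the first of them at
            # that position, so the next attempt takes a different path.
            pos = len(generated) - rewind
            blocked.setdefault(pos, set()).add(generated[pos])
            del generated[pos:]
    return generated

print(toy_generate(["dragon", "cave", "sky"], ToyStrategy(banned=[["dragon", "sky"]])))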
At the moment, we have 5 strategies available:
* Anti-slop strategy
The Anti-Slop Strategy is used to ban certain phrases. Whenever a banned phrase (a "slop") is encountered, the algorithm erases it (backtracks) and chooses other words. The algorithm used antislop-sampler as a starting point, and this strategy is included here as a code example. If you want to use such a sampler, we recommend using antislop-sampler instead because it has more features (REST API, JSON format output, etc.).
* Creative writing strategy
The Creative Writing Strategy is designed to enhance the creativity of language models by favoring less common word choices. It achieves this by often banning the most probable token from selection. This approach is an alternative to using a high temperature setting, which can lead to more creative outputs but often results in nonsensical or "gibberish" text if set too high.
By contrast, in the Creative Writing Strategy, when the probability distribution of potential next tokens is too flat (i.e., when many tokens have similar probabilities), the strategy reverts to a previous state and regenerates tokens (see the conceptual sketch of such a flatness check after this list). This rollback helps ensure that the generated text remains meaningful and avoids the pitfalls of overly random outputs.
Here is a demo of the Creative Writing Strategy: https://huggingface.co/spaces/Mihaiii/backtrack_sampler_demo
* Debug strategy
The Debug Strategy is the simplest possible strategy and is used to debug logits/probs and as a skeleton for creating new strategies.
* Human guidance strategy
The Human Guidance Strategy is designed to allow the user to manually select the next token from the top generated ones. It is useful to get a better understanding of the model's capabilities.
This strategy relies on curses for drawing, a library that's pre-installed on Linux and macOS. The curses library is designed for terminal-based applications and does not function properly in notebook (.ipynb) environments.
* Adaptive temperature strategy
The Adaptive Temperature Strategy is designed to dynamically adjust the temperature of the model based on the entropy of the probability distribution of the next token (see the illustrative sketch after this list). The code is copy/pasted from this notebook created by Alexander Doria. The official repo is Quest-Best-Tokens.
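As mentioned in the Creative Writing Strategy description above, here is a short, self-contained sketch of one plausible "too flat" check: the distribution is treated as flat when more than a threshold number of tokens is needed to cover a given probability mass. The threshold values and the function itself are assumptions for illustration; they are not guaranteed to match how CreativeWritingStrategy uses its top_p_flat and top_k_threshold_flat parameters internally.

import torch

def is_flat(probs: torch.Tensor, top_p: float = 0.65, top_k_threshold: int = 9) -> bool:
    # True if more than `top_k_threshold` tokens are needed to reach `top_p` cumulative mass.
    sorted_probs, _ = probs.sort(descending=True)
    cumulative = sorted_probs.cumsum(dim=-1)
    tokens_needed = int((cumulative < top_p).sum().item()) + 1
    return tokens_needed > top_k_threshold

peaked = torch.tensor([0.90, 0.05, 0.03, 0.02])  # one dominant token -> not flat
flat = torch.full((20,), 1 / 20)                 # everything equally likely -> flat
print(is_flat(peaked), is_flat(flat))            # False True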
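Similarly, here is an illustrative sketch of entropy-based temperature scaling, the general idea behind the Adaptive Temperature Strategy. The scaling formula below is an assumption chosen for clarity and is not the exact formula from Alexander Doria's notebook.

import torch

def adaptive_temperature(logits: torch.Tensor, min_temp: float = 0.5, max_temp: float = 1.0) -> float:
    # Confident (low-entropy) distributions get a lower temperature;
    # uncertain (high-entropy) ones stay closer to max_temp.
    probs = torch.softmax(logits, dim=-1)
    entropy = -(probs * probs.clamp_min(1e-12).log()).sum()
    max_entropy = torch.log(torch.tensor(float(logits.numel())))
    return float(min_temp + (max_temp - min_temp) * (entropy / max_entropy))

print(adaptive_temperature(torch.tensor([5.0, 1.0, 0.5, 0.2])))  # fairly peaked -> closer to min_temp
print(adaptive_temperature(torch.tensor([1.0, 1.0, 1.0, 1.0])))  # uniform -> max_temp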
Thanks / credit
- Sam Paech for making antislop-sampler, which was used as a starting point for creating this repo. Some parts of the code are still from the original repo.