

⚓ Anchored Decoding ⚓
Provably Reducing Copyright Risk for Any Language Model

Paper  ·  TinyComma 1.8B  ·  Demo

Jacqueline He  ·   Jonathan Hayase  ·   Wen-tau Yih  ·   Sewoong Oh  ·   Luke Zettlemoyer  ·   Pang Wei Koh



    

Anchored Decoding is a decoding strategy for mitigating the reproduction of copyrighted material by language models. It combines a safe model trained exclusively on permissively licensed text with a risky model trained on mixed-license data. At each step, it computes a fused next-token distribution that stays within a user-specified global information budget relative to the safe model, while remaining as close as possible to the risky model. Anchored Decoding can also handle model pairs with mismatched tokenizers via byte-level decoding. Across token- and byte-level settings on six model pairs, Anchored Decoding achieves the strongest trade-off between copyright risk and generation utility.
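To make the per-step fusion concrete, here is a minimal NumPy sketch of one way such a step could work: geometrically interpolate between the safe and risky next-token distributions, choosing the interpolation weight by bisection so that the result stays within a KL budget of k nats from the safe model. This is an illustration of the general idea under our own assumptions, not necessarily the exact projection used in the paper.

```python
import numpy as np

def fuse(p_safe, p_risky, k, iters=40):
    """One illustrative fusion step: return a distribution as close to
    p_risky as possible along the geometric path from p_safe, subject to
    KL(p || p_safe) <= k. A hedged sketch, not the paper's exact method."""
    def mix(lam):
        # Geometric mixture p ∝ p_safe^(1-lam) * p_risky^lam
        logp = (1 - lam) * np.log(p_safe) + lam * np.log(p_risky)
        logp -= logp.max()            # stabilize before exponentiating
        p = np.exp(logp)
        return p / p.sum()
    def kl(p, q):
        return float(np.sum(p * (np.log(p) - np.log(q))))
    if kl(p_risky, p_safe) <= k:      # risky model already within budget
        return p_risky
    lo, hi = 0.0, 1.0                 # KL to p_safe grows with lam
    for _ in range(iters):
        lam = (lo + hi) / 2
        if kl(mix(lam), p_safe) <= k:
            lo = lam
        else:
            hi = lam
    return mix(lo)
```

With a small budget the fused distribution stays near the safe model; as k grows it converges to the risky model, matching the intuition that larger k means less anchoring.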

If you find our work helpful, please cite us as:

@article{he2026anchored,
  title={{Anchored Decoding: Provably Reducing Copyright Risk for Any Language Model}},
  author={Jacqueline He and Jonathan Hayase and Wen-tau Yih and Sewoong Oh and Luke Zettlemoyer and Pang Wei Koh},
  journal={arXiv preprint},
  year={2026}
}

Installation

Install uv and clone this repository. Then, from the repository root, run:

uv pip install -e .

Try out anchored decoding!

Demo
Interactive demo of (token-level) anchored decoding using TinyComma 1.8B and Llama 3.1 70B.
The prompt is the opening line of George Orwell's 1984 (1949).

We provide an interactive Python script where you can type in input and get responses via anchored decoding!

# k (float, > 0): per-step, user-controlled information budget.
# With T the maximum number of new tokens, K = kT is the global information budget for the entire generation.
# Larger k -> greater reliance on the risky model (less anchoring to the safe model).

# token-level decoding 
uv run python test.py --safe jacquelinehe/tinycomma-1.8b-llama3-tokenizer --risky meta-llama/Llama-3.1-70B --k_radius 1.5 

# byte-level decoding; good for LM pairs with incompatible tokenizers
uv run python test_byte.py --safe common-pile/comma-v0.1-2t --risky meta-llama/Llama-3.1-70B --k_radius 0.5
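The budget arithmetic above is simple enough to check by hand. A worked example, using the k from the token-level command and a hypothetical maximum-new-tokens setting:

```python
# Worked example of K = kT (T here is a hypothetical value, not a default):
k = 1.5            # per-step budget, as in --k_radius 1.5
T = 256            # assumed maximum number of new tokens
K = k * T          # global information budget for the whole generation
print(K)           # 384.0
```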

Run python test.py --help (or python test_byte.py --help) for the full list of arguments. For best throughput, we recommend using two GPUs so the safe and risky models can run in parallel, which reduces the overhead of two-model decoding. In our experiments, we use NVIDIA 140GiB H200s to fit our 70B-109B risky models. We also provide a Google Colab notebook demo (using Llama 3.1 8B and Qwen 2.5 72B as risky models) that runs on a single A100 GPU.
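Byte-level decoding matters because two tokenizers rarely share a vocabulary, but every tokenizer ultimately emits bytes. A toy way to see the core idea is to collapse a next-token distribution into a next-byte distribution by summing probability over tokens that begin with the same byte. This sketch is our own simplification: real byte-level decoders such as ByteSampler also track how much of a token has already been emitted, which is ignored here.

```python
from collections import defaultdict

def next_byte_dist(token_probs):
    """Collapse a next-token distribution (token bytes -> probability)
    into a next-byte distribution by summing mass over all tokens that
    start with the same byte. Toy illustration only."""
    byte_probs = defaultdict(float)
    for tok, p in token_probs.items():
        if tok:                      # skip empty tokens
            byte_probs[tok[0]] += p  # tok[0] is the first byte (an int)
    total = sum(byte_probs.values())
    return {b: p / total for b, p in byte_probs.items()}
```

Because both models' token distributions can be projected onto the same byte alphabet this way, a safe and a risky model with incompatible tokenizers can still be fused one byte at a time.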

Code Acknowledgments

Our byte-level decoding infrastructure relies extensively on ByteSampler (Hayase et al., 2025). The original repository can be found here.

Our Python code is formatted with black.

License

The repository software is licensed under the Apache 2.0 License. See the LICENSE file for details.

Troubleshooting or Questions?

If you have any questions relating to either the code or paper, feel free to contact Jacqueline at jyyh@cs.washington.edu or open an issue in this repo!

