
GenLM Backend is a high-performance backend for language model probabilistic programs, built for the GenLM ecosystem. It provides an asynchronous and autobatched interface to vllm and transformers language models, enabling scalable and efficient inference.

See our documentation.

🚀 Key Features

  • Automatic batching of concurrent log-probability requests, enabling efficient large-scale inference without having to write batching logic yourself
  • Byte-level decoding of transformers tokenizers, enabling advanced token-level control
  • Support for arbitrary Hugging Face models (e.g., LLaMA, DeepSeek, etc.) with fast inference and automatic KV caching using vllm
  • NEW: Support for the MLX-LM library, enabling faster inference on Apple silicon devices.
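To give a feel for the autobatching pattern the first bullet describes, here is a toy sketch (entirely hypothetical, not genlm-backend's actual scheduler): concurrent awaiting requests are collected and then served in a single batched call, so callers write simple per-request code while the backend amortizes the work.

```python
import asyncio

# Toy autobatcher: gathers requests made concurrently and serves them in
# one batched call. This only sketches the pattern; genlm-backend's real
# scheduler is more sophisticated.
class ToyAutobatcher:
    def __init__(self, batch_fn):
        self.batch_fn = batch_fn   # processes a list of inputs at once
        self.pending = []          # (input, Future) pairs awaiting a batch
        self.flush_scheduled = False

    async def request(self, x):
        fut = asyncio.get_running_loop().create_future()
        self.pending.append((x, fut))
        if not self.flush_scheduled:
            self.flush_scheduled = True
            # Defer the flush so other concurrent requests can join the batch.
            asyncio.get_running_loop().call_soon(self._flush)
        return await fut

    def _flush(self):
        batch, self.pending = self.pending, []
        self.flush_scheduled = False
        outputs = self.batch_fn([x for x, _ in batch])
        for (_, fut), out in zip(batch, outputs):
            fut.set_result(out)

async def main():
    batch_sizes = []
    def square_batch(xs):
        batch_sizes.append(len(xs))  # record how many inputs each batch saw
        return [x * x for x in xs]

    ab = ToyAutobatcher(square_batch)
    results = await asyncio.gather(*[ab.request(i) for i in range(4)])
    return results, batch_sizes

results, batch_sizes = asyncio.run(main())
print(results, batch_sizes)  # → [0, 1, 4, 9] [4]: four requests, one batch
```

All four requests are issued independently via `asyncio.gather`, yet the underlying batch function is invoked once with all four inputs.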

⚡ Quick Start

This library supports installation via pip:

pip install genlm-backend

Or to install with MLX support, run:

pip install genlm-backend[mlx]

Or to install with LoRA support, run:

pip install genlm-backend[lora]

🧪 Example: Autobatched Sequential Importance Sampling with LLMs

This example demonstrates how genlm-backend enables concise, scalable probabilistic inference with language models. It implements a Sequential Importance Sampling (SIS) algorithm that makes asynchronous log-probability requests, which the language model automatically batches.

import torch
import asyncio
from genlm.backend import load_model_by_name

# --- Token-level masking using the byte-level vocabulary --- #
def make_masking_function(llm, max_token_length, max_tokens):
    eos_id = llm.tokenizer.eos_token_id
    # log-mask over the byte vocabulary: 0.0 for allowed tokens, -inf otherwise
    valid_ids = torch.tensor([
        token_id == eos_id or len(token) <= max_token_length
        for token_id, token in enumerate(llm.byte_vocab)
    ], dtype=torch.float).log()
    eos_one_hot = torch.nn.functional.one_hot(
        torch.tensor(eos_id), len(llm.byte_vocab)
    ).float().log()  # log one-hot: 0.0 at eos_id, -inf elsewhere

    def masking_function(context):
        return eos_one_hot if len(context) >= max_tokens else valid_ids

    return masking_function

# --- Particle class for SIS --- #
class Particle:
    def __init__(self, llm, mask_function, prompt_ids):
        self.context = []
        self.prompt_ids = prompt_ids
        self.log_weight = 0.0
        self.active = True
        self.llm = llm
        self.mask_function = mask_function

    async def extend(self):
        logps = await self.llm.next_token_logprobs(self.prompt_ids + self.context)
        masked_logps = logps + self.mask_function(self.context).to(logps.device)
        logZ = masked_logps.logsumexp(dim=-1)
        self.log_weight += logZ
        next_token_id = torch.multinomial((masked_logps - logZ).exp(), 1).item()
        if next_token_id == self.llm.tokenizer.eos_token_id:
            self.active = False
        else:
            self.context.append(next_token_id)

# --- Autobatched SIS loop --- #
async def autobatched_sis(n_particles, llm, masking_function, prompt_ids):
    particles = [Particle(llm, masking_function, prompt_ids) for _ in range(n_particles)]
    while any(p.active for p in particles):
        await asyncio.gather(*[p.extend() for p in particles if p.active])
    return particles

# --- Run the example --- #
llm = load_model_by_name("gpt2") # or e.g., "meta-llama/Llama-3.2-1B" if you have access
mask_function = make_masking_function(llm, max_token_length=10, max_tokens=10)
prompt_ids = llm.tokenizer.encode("Montreal is")
particles = await autobatched_sis( # use asyncio.run(autobatched_sis(...)) if you are not in an async context
    n_particles=10, llm=llm, masking_function=mask_function, prompt_ids=prompt_ids
)

strings = [llm.tokenizer.decode(p.context) for p in particles]
log_weights = torch.tensor([p.log_weight for p in particles])
probs = torch.exp(log_weights - log_weights.logsumexp(dim=-1))

for s, p in sorted(zip(strings, probs), key=lambda x: -x[1]):
    print(f"{repr(s)} (probability: {p:.4f})")

This example highlights the following features:

  • 🌀 Asynchronous Inference Loop. Each particle runs independently, but all LLM calls are scheduled concurrently via asyncio.gather. The backend batches them automatically, so we get the efficiency of large batched inference without having to write the batching logic.
  • 🔁 Byte-level Tokenization Support. Token filtering is done using the model’s byte-level vocabulary, which genlm-backend exposes. This enables low-level control over generation in ways not possible with most high-level APIs.
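The byte-level filtering idea can be illustrated without a real model. The sketch below uses a made-up byte vocabulary (a list of `bytes` objects, the shape in which `llm.byte_vocab` exposes tokens in the example above) and keeps only tokens whose raw byte length is within a limit; the vocabulary and limit are invented for illustration.

```python
# Toy byte-level token filtering over a made-up vocabulary. Each entry is
# a token's raw bytes; we keep EOS plus every token at most 5 bytes long.
toy_byte_vocab = [b"<eos>", b"a", b"ab", b"hello", b"hello world!!!"]
eos_id = 0
max_token_length = 5

valid_ids = [
    token_id
    for token_id, token in enumerate(toy_byte_vocab)
    if token_id == eos_id or len(token) <= max_token_length
]
print(valid_ids)  # → [0, 1, 2, 3]: the 14-byte token is excluded
```

Operating on raw bytes rather than tokenizer-specific string forms is what lets constraints like this transfer across models with different tokenizers.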

Development

See the DEVELOPING.md file for information on how to install the project for local development.
