BCLM-PyTorch Documentation

Load and run BCLM language models with PyTorch.

bclm-pytorch is the official Python library for loading and running inference with BCLM language models.

Installation

pip install bclm-pytorch

Dependencies installed automatically: torch, safetensors, tokenmonster, huggingface_hub.

Quick Start

import bclm

# Load a model from Hugging Face
model = bclm.load("bclm-1-small-preview")

# Multi-turn chat
chat = model.chat(system_prompt="You are a helpful assistant.")
print(chat.send("What is 2+2?"))
print(chat.send("Why is that the case?"))

# Streaming chat (real-time token output)
for chunk in chat.send_stream("Tell me a short story about a cat."):
    print(chunk, end="", flush=True)
print()

Loading Models

bclm.load() is the single entry point for all model sources.

From Hugging Face

# Short form — resolves to huggingface.co/bclm/bclm-1-small-preview
model = bclm.load("bclm-1-small-preview")

# Explicit repo ID
model = bclm.load("bclm/bclm-1-small-preview")

# Any HF repo
model = bclm.load("your-org/your-model")

From a Local Directory

Point to a directory containing config.json and model.safetensors:

model = bclm.load("/path/to/model/directory")

From a URL

Point to a directory served over HTTPS that contains config.json and model.safetensors:

model = bclm.load("https://example.com/models/bclm-1-small/")

Options

model = bclm.load(
    "bclm-1-small-preview",
    device="cuda",          # "cpu", "cuda", "cuda:0", etc. (default: auto)
    dtype=torch.bfloat16,   # torch.float16, torch.float32 (default: bf16 on GPU, f32 on CPU)
    compile=False,          # Enable torch.compile for inference (default: False)
)
  • device (default: auto): "cpu", "cuda", or a specific device string. Auto-selects CUDA if available.
  • dtype (default: auto): torch.bfloat16 on CUDA, torch.float32 on CPU. Weights ship as float16 but were trained in bfloat16.
  • compile (default: False): wrap the model with torch.compile. Off by default to avoid warmup latency.
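
The auto defaults above boil down to a simple preference for CUDA. A minimal sketch of the documented selection behavior (not the library's actual source):

import bclm
import torch

# Prefer CUDA with bfloat16 when available; otherwise fall back to CPU
# with float32, mirroring the documented defaults.
device = "cuda" if torch.cuda.is_available() else "cpu"
dtype = torch.bfloat16 if device == "cuda" else torch.float32

model = bclm.load("bclm-1-small-preview", device=device, dtype=dtype)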

Chat Interface

The chat interface supports multi-turn conversations with automatic history management.

Basic Usage

chat = model.chat(
    system_prompt="You are a helpful assistant.",  # optional
    max_new_tokens=512,
    temperature=1.0,
    top_k=50,
    top_p=0.9,
    repetition_penalty=1.15,
    frequency_penalty=0.1,
)

# Each call appends to the conversation history
response = chat.send("Hello, who are you?")
print(response)

follow_up = chat.send("Can you elaborate?")
print(follow_up)

Streaming

for chunk in chat.send_stream("Write a poem about the ocean."):
    print(chunk, end="", flush=True)
print()

Per-Message Overrides

Override generation parameters for a single message without changing the session defaults:

response = chat.send(
    "Give me a one-word answer: is the sky blue?",
    max_new_tokens=10,
    temperature=0.1,
    repetition_penalty=1.0,   # disable for this message
    frequency_penalty=0.0,
)

History Management

# View conversation history
for msg in chat.messages:
    print(f"{msg['role']}: {msg['content'][:80]}")

# Clear history (keeps system prompt)
chat.clear()
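
Because chat.messages is a plain list of role/content dicts, history can be persisted with the standard library. This is a usage sketch built on the documented attribute, not a built-in feature:

import json

# chat.messages is a list of {"role": ..., "content": ...} dicts,
# so it serializes directly to JSON.
with open("conversation.json", "w") as f:
    json.dump(chat.messages, f, indent=2)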

Interactive CLI

Launch a blocking interactive chat loop in the terminal:

chat.interactive()

Commands inside the interactive loop:

  • /clear — reset conversation
  • /history — print message list
  • /help — show commands
  • Ctrl-C — exit
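
If the built-in loop does not fit, a minimal replacement can be assembled from the chat API shown above. A sketch using only chat.send_stream() and chat.clear():

while True:
    try:
        user_input = input("you> ").strip()
    except (KeyboardInterrupt, EOFError):
        break                       # Ctrl-C exits, mirroring chat.interactive()
    if user_input == "/clear":
        chat.clear()                # reset conversation, keep system prompt
        continue
    for chunk in chat.send_stream(user_input):
        print(chunk, end="", flush=True)
    print()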

Text Completion

For non-chat use cases (continuing a text prompt):

text = model.complete("Once upon a time", max_new_tokens=200, temperature=0.8)
print(text)

Streaming Completion

for chunk in model.complete_stream("The quick brown fox", max_new_tokens=100):
    print(chunk, end="", flush=True)
print()

Generation Parameters

All generation methods accept these parameters:

  • max_new_tokens (default: 512): maximum number of tokens to generate.
  • temperature (default: 1.0): sampling temperature. 0 = greedy; higher = more random.
  • top_k (default: 50): restrict sampling to the top-k most likely tokens. None to disable.
  • top_p (default: 0.9): nucleus sampling threshold. None to disable.
  • repetition_penalty (default: 1.15): multiplicative penalty applied to every token already present in the context. Positive logits are divided by the penalty; negative logits are multiplied, so both directions make the token less likely. 1.0 disables. Defaults are tuned for small language models, which are more prone to degenerate repetition.
  • frequency_penalty (default: 0.1): additive penalty proportional to how many times each token has appeared. Logits are reduced by frequency_penalty × count, discouraging high-frequency tokens more strongly than low-frequency ones. 0.0 disables.
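
To make the two penalty definitions concrete, here is a sketch of the logit transforms exactly as described above; it illustrates the math and is not the library's internal implementation:

import torch

def apply_penalties(logits, context_ids, repetition_penalty=1.15, frequency_penalty=0.1):
    # logits: (vocab_size,) scores for the next token
    # context_ids: 1-D tensor of token ids already in the context
    ids, counts = torch.unique(context_ids, return_counts=True)
    seen = logits[ids]
    # Repetition penalty: divide positive logits, multiply negative ones,
    # so previously seen tokens become less likely either way.
    seen = torch.where(seen > 0, seen / repetition_penalty, seen * repetition_penalty)
    # Frequency penalty: subtract penalty * occurrence count.
    seen = seen - frequency_penalty * counts.to(seen.dtype)
    out = logits.clone()
    out[ids] = seen
    return out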

Model Information

model = bclm.load("bclm-1-small-preview")

# Architecture name (e.g., "BCLM1Model")
print(model.architecture)

# Model config object
print(model.config)

# Parameter count
print(f"{model.num_parameters / 1e6:.1f}M parameters")

# Device and dtype
print(model.device, model.dtype)

Advanced: Direct Access

For advanced use cases, you can access the underlying PyTorch modules:

model = bclm.load("bclm-1-small-preview")

# Raw nn.Module (e.g., BCLM1Model)
raw = model.raw_model

# Inference wrapper (e.g., BCLM1ForGeneration)
gen = model.generator

# Tokenizer
tok = model.tokenizer
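
For example, a manual forward pass might look like the following. The forward signature of the raw module is not documented here, so the call below is an assumption; check the model class for the real interface:

import torch

ids = list(model.tokenizer.encode("Hello world"))
batch = torch.tensor([ids], dtype=torch.long, device=model.device)

with torch.no_grad():
    # ASSUMPTION: the raw module takes a (batch, seq) tensor of token ids
    # and returns logits; verify against BCLM1Model before relying on this.
    logits = model.raw_model(batch)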

Tokenizer

The current tokenizer backend is TokenMonster. The tokenizer spec is embedded in each model's config.json (e.g., "tokenmonster:english-32000-consistent-v1"), so the correct tokenizer is loaded automatically.

tok = model.tokenizer

ids = tok.encode("Hello world")
text = tok.decode(ids)
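
A quick round-trip check (assuming, as is typical for TokenMonster vocabularies, that this string encodes and decodes losslessly):

assert tok.decode(tok.encode("Hello world")) == "Hello world"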

Environment Variables

  • BCLM_TOKENIZER: override the tokenizer spec (e.g., tokenmonster:english-32000-consistent-v1).
  • BCLM_TOKENMONSTER_DIR: custom cache directory for TokenMonster vocab files.
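
Both variables come from the process environment, so they must be set before the model (and therefore the tokenizer) is loaded. A usage sketch; the cache path is a placeholder:

import os

# Set before bclm.load() so the overrides take effect.
os.environ["BCLM_TOKENIZER"] = "tokenmonster:english-32000-consistent-v1"
os.environ["BCLM_TOKENMONSTER_DIR"] = "/tmp/tokenmonster-cache"

model = bclm.load("bclm-1-small-preview")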

Config Format

Model directories must contain a config.json:

{
  "architecture": "BCLM1Model",
  "model": {
    "vocab_size": 32768,
    "tokenizer": "tokenmonster:english-32000-consistent-v1",
    "embed_dim": 384,
    "n_layers": 12,
    "max_seq_len": 16384,
    "dropout": 0.0,
    "attn_heads": 6,
    "attn_kv_heads": 2,
    "local_attn_layers": [1, 5, 7, 11],
    "global_attn_layers": [3, 9],
    "attn_window_size": 1024,
    "conv_kernel_size": 4,
    "osc_n_pairs": 1,
    "osc_n_real": 16,
    "osc_clamp_min_decay": 1e-05,
    "bigram_table_factor": 5
  }
}

The "architecture" field determines which model class is instantiated. Weights should be in model.safetensors (safetensors format).

Error Handling

import bclm

try:
    model = bclm.load("nonexistent-model")
except FileNotFoundError:
    print("Model not found")
except ValueError as e:
    print(f"Invalid model: {e}")
except ImportError as e:
    print(f"Missing dependency: {e}")

Requirements

  • Python ≥ 3.9
  • PyTorch ≥ 2.1
  • safetensors
  • tokenmonster
  • huggingface_hub

Optional:

  • xformers — enables optimized attention kernels on CUDA
