ProgramAsWeights
Compile natural language specifications into neural programs (.paw files) that run locally via llama.cpp.
Programs are stored as weight blobs (a KV cache prefix plus optional LoRA adapters) interpreted by a small fixed model. No API calls are needed at runtime: execution is fully deterministic and local.
Installation
pip install programasweights
Quick Start
Run a Program
import programasweights as paw
# Load and run a compiled program
fn = paw.function("program_id_or_path.paw")
result = fn("Contact alice@company.com or bob@example.org")
print(result) # ["alice@company.com", "bob@example.org"]
Compile a Program
import programasweights as paw
# Compile from a natural language specification
paw.compile(
"output.paw",
spec="Extract all email addresses from text and return as JSON list",
checkpoint_dir="path/to/trained/compiler",
)
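For reference, the behavior this particular spec describes could also be written by hand in plain Python. This is only an illustration of what the compiled program is meant to compute, not how paw implements it:

```python
import json
import re

def extract_emails(text: str) -> str:
    """Hand-written equivalent of the spec above: return all
    email addresses found in `text` as a JSON-encoded list."""
    # A simplified email pattern; real-world address matching is messier.
    pattern = r"[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}"
    return json.dumps(re.findall(pattern, text))

print(extract_emails("Contact alice@company.com or bob@example.org"))
# ["alice@company.com", "bob@example.org"]
```

The point of paw.compile is that the specification replaces this kind of hand-written logic.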
LoRA Support (PEFT Compatible)
Already using PEFT for LoRA training? Convert to .paw in one line:
import programasweights as paw
# Standard PEFT workflow:
# model = get_peft_model(base_model, LoraConfig(r=16, target_modules=["q_proj", "v_proj"]))
# trainer.train()
# model.save_pretrained("my_adapter/")
# Convert to .paw:
paw.from_peft(
"my_adapter/", # Your PEFT checkpoint
"sentiment.paw", # Output .paw file
spec="Classify sentiment as positive or negative",
tags=["sentiment", "classification"],
examples=[
{"input": "Great movie!", "output": "positive"},
{"input": "Terrible film.", "output": "negative"},
],
)
# Use it:
fn = paw.function("sentiment.paw")
print(fn("This is amazing!")) # → "positive"
Load LoRA from a .paw file:
lora_weights, lora_config = paw.load_paw_lora("sentiment.paw")
print(lora_config) # {"rank": 16, "alpha": 32, ...}
Or use save_lora_to_paw() directly if you have raw tensors instead of a PEFT checkpoint.
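A note on the returned config: in standard LoRA, the adapter update is scaled by alpha / rank, so a config like {"rank": 16, "alpha": 32} implies a scaling factor of 2.0. A minimal NumPy sketch of how such an update is conventionally applied (illustrative only, with hypothetical shapes; not part of the paw API):

```python
import numpy as np

# Hypothetical dimensions; real adapter shapes depend on the base model.
d, k, rank, alpha = 64, 64, 16, 32
rng = np.random.default_rng(0)

W = rng.standard_normal((d, k))     # frozen base weight
A = rng.standard_normal((rank, k))  # LoRA "A" matrix
B = np.zeros((d, rank))             # LoRA "B" matrix (zero-initialized)

# Standard LoRA update: W' = W + (alpha / rank) * B @ A
scaling = alpha / rank
W_adapted = W + scaling * (B @ A)

print(scaling)  # 2.0
```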
.paw File Format v2
A .paw file is a self-contained neural program that includes:
| Component | Description | Required |
|---|---|---|
| KV cache prefix | Continuous program (prefix weights) | Optional |
| Pseudo-program | Discrete text instructions | Optional |
| LoRA adapter | Fine-tuned adapter weights | Optional |
| Generation config | Temperature, top_p, max_tokens | Optional |
| Metadata | Interpreter model, spec, author, tags | Required |
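The on-disk layout is not specified here; purely as an illustration of the component inventory above, a .paw v2 program could be modeled in memory as follows. Only the component names come from the table; the class and field names are hypothetical, not part of the paw API:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class PawProgram:
    # Metadata is the only required component.
    metadata: dict                                 # interpreter model, spec, author, tags
    kv_cache_prefix: Optional[bytes] = None        # continuous program (prefix weights)
    pseudo_program: Optional[str] = None           # discrete text instructions
    lora_adapter: Optional[bytes] = None           # fine-tuned adapter weights
    generation_config: dict = field(default_factory=dict)  # temperature, top_p, max_tokens

prog = PawProgram(metadata={"spec": "Extract emails", "tags": ["extraction"]})
print(prog.kv_cache_prefix is None)  # True
```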
Program Hub
Browse and share programs at hub.programasweights.com
Links
- Website: programasweights.com
- Documentation: programasweights.readthedocs.io
- GitHub: github.com/programasweights/programasweights
- Program Hub: hub.programasweights.com
License
MIT
File details
Details for the file programasweights-0.1.0.dev6.tar.gz.
File metadata
- Download URL: programasweights-0.1.0.dev6.tar.gz
- Upload date:
- Size: 7.4 MB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.12.3
File hashes

| Algorithm | Hash digest |
|---|---|
| SHA256 | f6f91a7dcf07ede0a597335a3cbbc4b4eb373336c91d3cb889d924a4c30120d3 |
| MD5 | 304735557a6be8fd98d2fc4bd5a79073 |
| BLAKE2b-256 | a8cbafa0012122f56ccdc4c3adf142d960d736110aae2d528ea4e36f349c99d8 |
File details
Details for the file programasweights-0.1.0.dev6-py3-none-any.whl.
File metadata
- Download URL: programasweights-0.1.0.dev6-py3-none-any.whl
- Upload date:
- Size: 33.2 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.12.3
File hashes

| Algorithm | Hash digest |
|---|---|
| SHA256 | afbad309d5e09fc751bf5c1bf24f84c0d79874e8fcdc312dc5c24ee92e00292b |
| MD5 | fe1bdeafbd0b9552c99261b13ad12a19 |
| BLAKE2b-256 | 6211ae012e652f690b6e89ca659a871278bf56e49a0d7703f61a9684424a4dcf |