# ProgramAsWeights

Compile natural language specifications into tiny neural programs that run locally via llama.cpp.

Define what a function should do in plain English. PAW compiles it into a small neural program that runs on your machine: no API keys at runtime, no internet needed after setup, fully deterministic.
## Install

```shell
pip install programasweights --extra-index-url https://pypi.programasweights.com/simple/
```
## Quick Start

```python
import programasweights as paw

# Use a pre-compiled function (downloads once, runs locally forever)
fn = paw.function("email-triage")
fn("Urgent: the server is down!")  # "immediate"
fn("Newsletter: spring picnic")    # "wait"

# Compile your own from a description
program = paw.compile(
    "Fix malformed JSON: repair missing quotes and trailing commas",
    slug="json-fixer",  # optional: creates username/json-fixer handle
)
fn = paw.function(program.slug)  # or paw.function(program.id)
fn("{name: 'Alice',}")  # '{"name":"Alice"}'

# Or compile and load in one step
fn = paw.compile_and_load("Classify sentiment as positive or negative")
fn("I love this!")  # "positive"
```
If you specifically want the smaller browser-compatible runtime, pass `compiler="paw-4b-gpt2"`. Otherwise, omit `compiler` and let the server default decide.
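For example, selecting the compact runtime at compile time might look like this (the spec text and slug are illustrative, and the exact output is not guaranteed):

```python
import programasweights as paw

# Explicitly target the compact, browser-compatible GPT-2 runtime
program = paw.compile(
    "Normalize dates to ISO 8601 (YYYY-MM-DD)",  # illustrative spec
    slug="date-normalizer",                      # illustrative slug
    compiler="paw-4b-gpt2",
)
fn = paw.function(program.slug)
fn("March 5, 2024")
```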
## Current Public Compilers

|  | Standard (Qwen3 0.6B) | Compact (GPT-2 124M) |
|---|---|---|
| Compiler name | `paw-4b-qwen3-0.6b` | `paw-4b-gpt2` |
| Accuracy | Higher | Lower |
| Base model size | 594 MB | 134 MB |
| Program size | ~22 MB | ~5 MB |
| Local inference | ~0.05-0.5s per call | ~0.03-0.3s per call |
| Runs in browser | No | Yes (WebAssembly) |

The current server default is Standard (`paw-4b-qwen3-0.6b`). Use Compact (`paw-4b-gpt2`) when you need smaller files or browser deployment.
If you need to inspect available compiler aliases programmatically, use `paw.list_compilers()`.
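For example (the return type of `paw.list_compilers()` is not documented on this page; this sketch assumes it yields compiler name strings):

```python
import programasweights as paw

# Inspect which compiler aliases the server currently offers
for name in paw.list_compilers():
    print(name)
```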
GPU acceleration is enabled by default (Metal on macOS, CUDA on Linux, with CPU fallback). Set `PAW_GPU_LAYERS=0` to force CPU if GPU causes issues.
## Browser SDK

Programs compiled with GPT-2 also run in the browser via WebAssembly. The initial model and program assets download automatically; inference then runs client-side.

```shell
npm install @programasweights/web
```

```javascript
import paw from '@programasweights/web';

const fn = await paw.function('email-triage-browser');
const result = await fn('Urgent: the server is down!');
// result: "immediate"
```
If you load by program ID, browser inference depends only on Hugging Face-hosted assets. Slugs still require one PAW API lookup.
New browser-compatible programs are uploaded to Hugging Face asynchronously after compile. They are usually ready within a minute or two, but under load can take a few minutes, so a freshly compiled browser program may need a short wait before the JS SDK can load it.
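Because of that delay, a client loading a freshly compiled program may want to retry. A minimal sketch in TypeScript, assuming `paw.function` rejects while assets are still being uploaded (the retry policy below is our own, not part of the SDK, and the handle is illustrative):

```typescript
import paw from '@programasweights/web';

// Retry loading a freshly compiled program a few times before giving up
async function loadWithRetry(handle: string, attempts = 5, delayMs = 30_000) {
  for (let i = 0; i < attempts; i++) {
    try {
      return await paw.function(handle);
    } catch (err) {
      if (i === attempts - 1) throw err; // out of retries
      await new Promise((resolve) => setTimeout(resolve, delayMs));
    }
  }
}

const fn = await loadWithRetry('my-user/json-fixer'); // illustrative handle
```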
See the browser SDK repo for full documentation.
## Use with AI Agents

PAW works with Cursor, Claude, Codex, and other AI coding assistants. Paste this into your agent's chat:

```
I want to use ProgramAsWeights (PAW) to create fuzzy text functions that run locally. Read the instructions at https://programasweights.com/AGENTS.md and help me integrate it.
```
Or save [AGENTS.md](https://programasweights.com/agents) to your project root — agents read it automatically.
## When to Use PAW
- Fuzzy search — typo-tolerant matching, semantic search, near-duplicate detection
- Format repair — fix broken JSON, normalize dates, repair malformed inputs
- Classification — sentiment, urgency, categories defined in your own words
- Extraction — emails, names, dates from messy unstructured text
- Log triage — extract errors from verbose output, filter noise
- Intent routing — map user descriptions to the closest URL, menu item, or setting
- Agent preprocessing — parse tool calls, validate outputs, route tasks
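As a sketch of the classification and routing cases above (the spec text, labels, and routing targets are illustrative; actual outputs depend on the compiled program):

```python
import programasweights as paw

# Compile a small urgency classifier and use it to route support tickets
triage = paw.compile_and_load(
    "Classify a support message as 'urgent' or 'routine'"
)

def route(message: str) -> str:
    label = triage(message)
    return "pager" if label == "urgent" else "inbox-queue"

route("Checkout is returning 500s for every customer")
```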
## Authentication

```shell
# Option 1: environment variable (recommended)
export PAW_API_KEY=paw_sk_...

# Option 2: CLI login (opens browser to generate key)
paw login
```
Generate API keys at programasweights.com/settings. Authenticated users get higher rate limits.
## CLI

```shell
paw compile --spec "Extract error lines from logs" --json
paw run --program <program_id> --input "[ERROR] timeout" --json
paw login
```

`--json` gives structured output for programmatic use.
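Since `--json` emits machine-readable output, it composes with standard tooling; for example (the output's field names are not specified on this page):

```shell
# Pretty-print the structured compile result
paw compile --spec "Extract error lines from logs" --json | python -m json.tool
```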
## Links
- Website: programasweights.com
- Documentation: programasweights.readthedocs.io
- Python SDK: github.com/programasweights/programasweights-python
- Browser SDK: github.com/programasweights/programasweights-js
- Program Hub: programasweights.com/hub
## License
MIT
## Download files

Download the file for your platform.

### Source Distribution

Details for the file `programasweights-0.4.1.tar.gz`.

- Download URL: programasweights-0.4.1.tar.gz
- Upload date:
- Size: 80.6 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.12.3

| Algorithm | Hash digest |
|---|---|
| SHA256 | `6be96548e7755d3a735b146172126196f927fa943c6f6deef23fb5e3e98cea44` |
| MD5 | `6ad0c3c42ff781c5698c32c044dd51c2` |
| BLAKE2b-256 | `e893478b0187ba58a3a2529a6346fe9574d1acde10a98acef9bcd4e83f6e2dec` |

### Built Distribution

Details for the file `programasweights-0.4.1-py3-none-any.whl`.

- Download URL: programasweights-0.4.1-py3-none-any.whl
- Upload date:
- Size: 42.1 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.12.3

| Algorithm | Hash digest |
|---|---|
| SHA256 | `8a1edf07fe84c9397d3c56aecc98ed5ee4a4bf54d97a07699e936ae2044b559b` |
| MD5 | `a54f316120e1b7ec66c708c53ff5d6e8` |
| BLAKE2b-256 | `d81d096e99747cac22bdc34013bbd3bc81ed2d9387bb9f512ccb62b3aebee64a` |