
Compile natural language specifications into neural programs that run locally via llama.cpp.

Project description

ProgramAsWeights

Compile natural language specs into tiny neural functions that run locally.

Define what a function should do in plain English. PAW compiles it into a small neural program that runs on your machine — no API keys at runtime, no internet needed after setup, fully deterministic.

Install

pip install programasweights --extra-index-url https://pypi.programasweights.com/simple/

Quick Start

import programasweights as paw

# Use a pre-compiled function (downloads once, runs locally forever)
fn = paw.function("email-triage")
fn("Urgent: the server is down!")        # "immediate"
fn("Newsletter: spring picnic")          # "wait"

# Compile your own from a description
program = paw.compile(
    "Fix malformed JSON: repair missing quotes and trailing commas",
    slug="json-fixer"              # optional: creates username/json-fixer handle
)
fn = paw.function(program.slug)    # or paw.function(program.id)
fn("{name: 'Alice',}")  # '{"name":"Alice"}'

# Or compile and load in one step
fn = paw.compile_and_load("Classify sentiment as positive or negative")
fn("I love this!")  # "positive"

If you specifically want the smaller browser-compatible runtime, pass compiler="paw-4b-gpt2". Otherwise, omit compiler and let the server default decide.
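As a sketch of that choice, a small helper can pick the alias from a deployment target. The helper itself is hypothetical (not part of the SDK); the two alias strings are the public compilers listed in the table below.

```python
def pick_compiler(target: str) -> str:
    """Choose a PAW compiler alias for a deployment target.

    Hypothetical helper, not part of the SDK; the alias strings are
    the two public compilers documented by PAW.
    """
    if target == "browser":
        return "paw-4b-gpt2"       # smaller files, WebAssembly-compatible
    return "paw-4b-qwen3-0.6b"     # server default, higher accuracy

# e.g. program = paw.compile(spec, compiler=pick_compiler("browser"))
```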

Current Public Compilers

|                 | Standard (Qwen3 0.6B) | Compact (GPT-2 124M) |
|-----------------|-----------------------|----------------------|
| Compiler name   | paw-4b-qwen3-0.6b     | paw-4b-gpt2          |
| Accuracy        | Higher                | Lower                |
| Base model size | 594 MB                | 134 MB               |
| Program size    | ~22 MB                | ~5 MB                |
| Local inference | ~0.05-0.5 s per call  | ~0.03-0.3 s per call |
| Runs in browser | No                    | Yes (WebAssembly)    |

The current server default is Standard (paw-4b-qwen3-0.6b). Use Compact (paw-4b-gpt2) when you need smaller files or browser deployment.

If you need to inspect available compiler aliases programmatically, use paw.list_compilers().

GPU acceleration is enabled by default (Metal on Mac, CUDA on Linux, falls back to CPU). Set PAW_GPU_LAYERS=0 to force CPU if GPU causes issues.
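If you prefer to set that from Python rather than the shell, one way is to export it before the first function load (this assumes the setting is read when the model is loaded, so it must be set early):

```python
import os

# Force CPU inference. Assumption: PAW reads PAW_GPU_LAYERS when the
# first model loads, so set it before any paw.function() call.
os.environ["PAW_GPU_LAYERS"] = "0"
```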

Browser SDK

Programs compiled with GPT-2 also run in the browser via WebAssembly. The initial model and program assets download automatically; inference then runs client-side.

npm install @programasweights/web
import paw from '@programasweights/web';

const fn = await paw.function('email-triage-browser');
const result = await fn('Urgent: the server is down!');
// result: "immediate"

If you load by program ID, browser inference depends only on Hugging Face-hosted assets; loading by slug still requires one PAW API lookup to resolve it.

New browser-compatible programs are uploaded to Hugging Face asynchronously after compilation. They are usually ready within a minute or two, though under load this can stretch to a few minutes, so a freshly compiled browser program may need a short wait before the JS SDK can load it.

See the browser SDK repo for full documentation.

Use with AI Agents

PAW works with Cursor, Claude, Codex, and other AI coding assistants. Paste this into your agent's chat:

I want to use ProgramAsWeights (PAW) to create fuzzy text functions that run locally. Read the instructions at https://programasweights.com/AGENTS.md and help me integrate it.

Or save [AGENTS.md](https://programasweights.com/agents) to your project root — agents read it automatically.

When to Use PAW

  • Fuzzy search — typo-tolerant matching, semantic search, near-duplicate detection
  • Format repair — fix broken JSON, normalize dates, repair malformed inputs
  • Classification — sentiment, urgency, categories defined in your own words
  • Extraction — emails, names, dates from messy unstructured text
  • Log triage — extract errors from verbose output, filter noise
  • Intent routing — map user descriptions to the closest URL, menu item, or setting
  • Agent preprocessing — parse tool calls, validate outputs, route tasks
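As one sketch of the intent-routing pattern: the labels below are the ones the email-triage example in Quick Start returns, while the mapped actions are invented for illustration.

```python
def route_email(label: str) -> str:
    """Map a triage label to a downstream action.

    The labels ("immediate" / "wait") match the email-triage example;
    the action names are hypothetical.
    """
    actions = {
        "immediate": "page_oncall",
        "wait": "daily_digest",
    }
    return actions.get(label, "human_review")  # unknown labels go to a person

# label = fn("Urgent: the server is down!")   # "immediate"
# route_email(label)                          # "page_oncall"
```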

Authentication

# Option 1: environment variable (recommended)
export PAW_API_KEY=paw_sk_...

# Option 2: CLI login (opens browser to generate key)
paw login

Generate API keys at programasweights.com/settings. Authenticated users get higher rate limits.

CLI

paw compile --spec "Extract error lines from logs" --json
paw run --program <program_id> --input "[ERROR] timeout" --json
paw login

--json gives structured output for programmatic use.


License

MIT



Download files

Download the file for your platform.

Source Distribution

programasweights-0.4.2.tar.gz (79.8 kB)


Built Distribution


programasweights-0.4.2-py3-none-any.whl (42.1 kB)


File details

Details for the file programasweights-0.4.2.tar.gz.

File metadata

  • Download URL: programasweights-0.4.2.tar.gz
  • Size: 79.8 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.12.3

File hashes

Hashes for programasweights-0.4.2.tar.gz

| Algorithm | Hash digest |
|---|---|
| SHA256 | 81f62161470c219e01834fcadbf2759cc775168f1f27b5d66cde5a0ba708df66 |
| MD5 | b2507cdccd388282f042db9ffae249ca |
| BLAKE2b-256 | bd5486d43a525db8dca61fc245bc8e7fe47575859cd1dcbbf524144bd35193e6 |


File details

Details for the file programasweights-0.4.2-py3-none-any.whl.


File hashes

Hashes for programasweights-0.4.2-py3-none-any.whl

| Algorithm | Hash digest |
|---|---|
| SHA256 | 139237bc2187ea6db53572c6189a499adf01b5e98ec8e0d400453f3368578a07 |
| MD5 | 3bb0a7ecd0325160064b785fc53f8db8 |
| BLAKE2b-256 | 945a582068bf931337004fcd852281fd5b008596970f3762e15e964ca129bcfe |

