# conjure

Zero-dependency programming — replace library imports with LLM-generated, verified code.

Every library you import is attack surface you didn't write. Conjure replaces thousands of transitive dependencies with one model binary and human-readable YAML specs.
```text
Traditional app:
  App → pip install → 200 packages → 1500 transitive deps → 8M LOC of stranger code

Conjure app:
  App → .yaml spec files → embedded LLM → verified code → cached

Trust chain: one model file (checksummed)
```
## Install

```shell
pip install conjure-llm
```
## Quick Start

```python
import conjure

# First call: generates, verifies, and caches (~10s)
result = conjure.invoke("base64_encode", data="hello")
print(result)  # "aGVsbG8="

# Second call: cache hit (0.3ms)
result = conjure.invoke("base64_encode", data="world")
```
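The sub-millisecond repeat call works because results are content-addressed by spec, model, and seed. A minimal sketch of that idea — the exact key derivation below is an assumption, not Conjure's actual scheme:

```python
import hashlib
import json

def cache_key(spec_text: str, model_id: str, seed: int) -> str:
    # Hypothetical key derivation: hash spec, model, and seed together,
    # so any change to the spec or the model invalidates cached code.
    payload = json.dumps(
        {"spec": spec_text, "model": model_id, "seed": seed},
        sort_keys=True,
    )
    # Deterministic 64-character hex digest, usable as a cache filename.
    return hashlib.sha256(payload.encode("utf-8")).hexdigest()

key = cache_key("spec: base64_encode", "Qwen3.5-9B-OptiQ-4bit", seed=0)
```

Hashing the full spec text (rather than just its name) means editing an example or constraint transparently forces regeneration.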
## How It Works

- Write a YAML spec describing what you need:

```yaml
spec: edit_distance
version: 1.0.0
description: |
  Compute the Levenshtein edit distance between two strings.
function: levenshtein
input:
  s1: str
  s2: str
output: int
examples:
  - input: { s1: "kitten", s2: "sitting" }
    output: 3
  - input: { s1: "", s2: "hello" }
    output: 5
constraints:
  no_imports: true
  max_lines: 400
```
- Conjure generates a self-contained Python implementation (no imports), verifies it against your examples, and caches it:

```python
result = conjure.invoke("levenshtein", s1="kitten", s2="sitting")
# Returns: 3
```
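For illustration, here is the kind of self-contained, import-free implementation such a spec asks for — a hand-written sketch, not actual model output:

```python
def levenshtein(s1: str, s2: str) -> int:
    # Classic dynamic-programming edit distance; no imports required.
    # prev[j] holds the distance between s1[:i-1] and s2[:j] (previous row).
    prev = list(range(len(s2) + 1))
    for i, c1 in enumerate(s1, start=1):
        curr = [i]
        for j, c2 in enumerate(s2, start=1):
            cost = 0 if c1 == c2 else 1
            curr.append(min(prev[j] + 1,          # deletion
                            curr[j - 1] + 1,      # insertion
                            prev[j - 1] + cost))  # substitution
        prev = curr
    return prev[-1]

print(levenshtein("kitten", "sitting"))  # 3, matching the spec example
print(levenshtein("", "hello"))          # 5
```

The spec's `examples` double as acceptance tests: both calls above must return the listed outputs before the code is cached.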
The generated code is:
- Import-free — enforced by AST analysis, not heuristics
- Verified — must pass all spec examples before caching
- Sandboxed — runs with restricted builtins and timeouts
- Cached — content-addressed by spec + model + seed (sub-ms on repeat calls)
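The import-free guarantee can be enforced mechanically with the standard `ast` module. A minimal sketch — the function name `assert_import_free` is illustrative, not Conjure's actual API:

```python
import ast

def assert_import_free(source: str) -> None:
    # Walk the full syntax tree and reject any import statement or dynamic
    # __import__ call. Unlike a regex, this catches imports nested inside
    # functions, conditionals, or try blocks.
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, (ast.Import, ast.ImportFrom)):
            raise ValueError(f"import found at line {node.lineno}")
        if (isinstance(node, ast.Call)
                and isinstance(node.func, ast.Name)
                and node.func.id == "__import__"):
            raise ValueError(f"__import__ call at line {node.lineno}")

assert_import_free("def f(x):\n    return x * 2")  # passes silently
try:
    assert_import_free("def f():\n    import os\n    return os.getpid()")
except ValueError as e:
    print(e)  # import found at line 2
```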
## Results
On ConjureEval-100 (100 specs across 20 categories):
| Metric | Rate |
|---|---|
| pass@1 (first attempt) | 70.0% |
| pass@3 (best of 3 attempts) | 87.9% |
The retry pipeline recovers ~18% more specs by giving the model its own error messages.
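That retry pipeline amounts to a generate-verify loop that feeds each failure back into the next prompt. A sketch of the control flow, where `generate_code` and `run_examples` are hypothetical stand-ins for the model call and the sandboxed verifier:

```python
def generate_with_retries(spec, generate_code, run_examples, max_attempts=3):
    # Hypothetical retry loop: each failed attempt's error message is
    # passed back so the model can correct its own mistake.
    feedback = ""
    for _ in range(max_attempts):
        code = generate_code(spec, feedback)
        ok, error = run_examples(code, spec["examples"])
        if ok:
            return code
        feedback = f"Previous attempt failed: {error}"
    raise RuntimeError(f"no passing code after {max_attempts} attempts")
```

The key design point is that feedback comes from concrete verifier errors (a failing example, a traceback), not from the model re-reading its own output.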
## Attack Surface Reduction
Across 5 real Python applications:
| Application | Transitive deps | Conjure deps | LOC reduction |
|---|---|---|---|
| Flask blog | 13 | 0 | 15× |
| FastAPI service | 15 | 0 | 17× |
| CLI tool | 5 | 0 | 6× |
| Web scraper | 17 | 0 | 20× |
| File sync | 8 | 0 | 9× |
## Translation Feasibility
52% of a typical Flask application's imports can be automatically replaced by Conjure specs.
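Estimating that fraction comes down to collecting a file's imported modules and checking them against the available specs. A rough sketch — the `REPLACEABLE` set below is an invented example, not Conjure's actual coverage list:

```python
import ast

# Invented example set of modules for which pure-function specs exist.
REPLACEABLE = {"base64", "json", "csv", "textwrap"}

def replaceable_fraction(source: str) -> float:
    # Collect top-level module names from every import in the file.
    modules = set()
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Import):
            modules.update(alias.name.split(".")[0] for alias in node.names)
        elif isinstance(node, ast.ImportFrom) and node.module:
            modules.add(node.module.split(".")[0])
    if not modules:
        return 0.0
    return len(modules & REPLACEABLE) / len(modules)

src = "import base64\nimport socket\nfrom json import loads\n"
print(replaceable_fraction(src))  # 2 of 3 modules -> 0.666...
```

Modules doing I/O or holding state (like `socket` above) stay out of scope, consistent with the pure-functions-only limitation below.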
## CLI

```shell
# Invoke a function
conjure invoke base64_encode -k data=hello

# Pre-generate all specs
conjure build --spec-dir specs/stdlib

# Evaluate specs
conjure eval --spec-dir specs/stdlib --output results.md

# Analyze a project for translation
conjure translate my_project/ --output conjure_specs/

# Compare attack surface
conjure audit --packages "flask,requests,pyyaml" --specs specs/stdlib
```
## Model
Conjure uses Qwen3.5-9B-OptiQ-4bit via MLX on Apple Silicon:
- 5 GB model memory (4-bit mixed-precision quantization)
- Instruct mode with recommended sampling (temp=0.6, top_p=0.95)
- Runs entirely on-device — no API keys, no cloud, no network
## Included Specs
30 curated stdlib specs covering common library patterns:
| Category | Specs |
|---|---|
| Encoding | base64_encode, base64_decode, hex_encode, hex_decode, rot13 |
| String | slugify, camel_to_snake, snake_to_camel, reverse_words |
| Collections | group_by, flatten, flatten_dict, chunk, deduplicate |
| Data parsing | csv_parse, json_parse, url_parse |
| Math | gcd, fibonacci, statistics, levenshtein |
| Algorithms | binary_search, run_length_encode, matrix_multiply |
| Other | is_palindrome, word_count, truncate, frequency_count, deep_get |
Plus 970 additional specs across 20 categories available in the full distribution.
## Limitations
- Complex algorithms: SHA-256, recursive-descent parsers, and similar are too complex for a 9B model to generate reliably
- Apple Silicon only: Requires MLX (macOS with M-series chip)
- Cold start: First generation takes 3-100s (cached permanently after)
- Pure functions only: Stateful protocols, FFI, OS access are out of scope
## License
MIT
## File details

Details for the file conjure_llm-0.0.2.tar.gz.

- Download URL: conjure_llm-0.0.2.tar.gz
- Upload date:
- Size: 41.4 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.13.7

File hashes:

| Algorithm | Hash digest |
|---|---|
| SHA256 | c0b3a831ecce5959ed149b6ba301004c029375c0a1c772aba9ecd5782a0b007f |
| MD5 | b16017fb876d32f6d3fa3b01e71591f1 |
| BLAKE2b-256 | c8c80799e69ec2e60cec1b4dbeb888de86f906ffa1999e54a0b3aa0c9b744ab9 |
## File details

Details for the file conjure_llm-0.0.2-py3-none-any.whl.

- Download URL: conjure_llm-0.0.2-py3-none-any.whl
- Upload date:
- Size: 49.8 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.13.7

File hashes:

| Algorithm | Hash digest |
|---|---|
| SHA256 | 1d8e5376dbdde40db28239f887e36ec2fa9e605bcd5fff2e5ac6fe59afc9a922 |
| MD5 | d39e0b925ce0528528c189da22170727 |
| BLAKE2b-256 | 3c23f6d070c3fea3903d11a824c4370ab5e856597821231fda70dac0ef5e34c0 |