
Project description

conjure

Zero-dependency programming — replace library imports with LLM-generated, verified code.

Every library you import is attack surface you didn't write. Conjure replaces thousands of transitive dependencies with one model binary and human-readable YAML specs.

Traditional app:
  App → pip install → 200 packages → 1500 transitive deps → 8M LOC of stranger code

Conjure app:
  App → .yaml spec files → embedded LLM → verified code → cached
  Trust chain: one model file (checksummed)

Install

pip install conjure-llm

Quick Start

import conjure

# First call: generates, verifies, and caches (~10s)
result = conjure.invoke("base64_encode", data="hello")
print(result)  # "aGVsbG8="

# Second call: cache hit (0.3ms)
result = conjure.invoke("base64_encode", data="world")
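The sub-millisecond second call works because results are content-addressed in a cache. A hypothetical sketch of such a cache key, assuming it is derived from the spec text, model checksum, and sampling seed (names here are illustrative, not Conjure's actual internals):

```python
import hashlib

def cache_key(spec_text: str, model_checksum: str, seed: int) -> str:
    # Hash spec + model + seed together: changing any one of them
    # produces a different key and forces regeneration.
    h = hashlib.sha256()
    for part in (spec_text, model_checksum, str(seed)):
        h.update(part.encode("utf-8"))
        h.update(b"\x00")  # separator avoids ambiguous concatenations
    return h.hexdigest()
```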

How It Works

  1. Write a YAML spec describing what you need:
spec: edit_distance
version: 1.0.0
description: |
  Compute the Levenshtein edit distance between two strings.
function: levenshtein
input:
  s1: str
  s2: str
output: int
examples:
  - input: { s1: "kitten", s2: "sitting" }
    output: 3
  - input: { s1: "", s2: "hello" }
    output: 5
constraints:
  no_imports: true
  max_lines: 400
  2. Conjure generates a self-contained Python implementation (no imports), verifies it against your examples, and caches it:
result = conjure.invoke("levenshtein", s1="kitten", s2="sitting")
# Returns: 3
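For illustration, the generated implementation might look like the following import-free dynamic-programming function. This is a sketch of the kind of code Conjure produces for this spec, not its actual output:

```python
def levenshtein(s1: str, s2: str) -> int:
    # Classic two-row dynamic-programming edit distance; no imports needed.
    if len(s1) < len(s2):
        s1, s2 = s2, s1  # ensure the inner loop runs over the shorter string
    previous = list(range(len(s2) + 1))
    for i, c1 in enumerate(s1):
        current = [i + 1]
        for j, c2 in enumerate(s2):
            insert_cost = previous[j + 1] + 1
            delete_cost = current[j] + 1
            replace_cost = previous[j] + (c1 != c2)
            current.append(min(insert_cost, delete_cost, replace_cost))
        previous = current
    return previous[-1]

# Matches the spec examples:
# levenshtein("kitten", "sitting") -> 3
# levenshtein("", "hello") -> 5
```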

The generated code is:

  • Import-free — enforced by AST analysis, not heuristics
  • Verified — must pass all spec examples before caching
  • Sandboxed — runs with restricted builtins and timeouts
  • Cached — content-addressed by spec + model + seed (sub-ms on repeat calls)
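An AST-based import check of the kind described above can be sketched as follows (illustrative only; Conjure's actual checker may cover more cases):

```python
import ast

def is_import_free(source: str) -> bool:
    # Walk the parsed AST rather than grepping source text, so
    # imports hidden in comments or strings don't cause false positives
    # and real imports can't slip through.
    tree = ast.parse(source)
    for node in ast.walk(tree):
        if isinstance(node, (ast.Import, ast.ImportFrom)):
            return False
        if isinstance(node, ast.Call):
            func = node.func
            if isinstance(func, ast.Name) and func.id == "__import__":
                return False
    return True
```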

Results

On ConjureEval-100 (100 specs across 20 categories):

Metric Rate
pass@1 (first attempt) 70.0%
pass@3 (best of 3 attempts) 87.9%

The retry pipeline recovers an additional ~18 percentage points of specs (70.0% to 87.9%) by feeding the model its own error messages.
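A hypothetical sketch of that retry loop, where `generate` stands in for the real model call (the real pipeline also sandboxes execution, which this sketch omits):

```python
def generate_with_retries(generate, spec, examples, max_attempts=3):
    feedback = ""
    for _ in range(max_attempts):
        code = generate(spec, feedback)
        namespace = {}
        try:
            exec(code, namespace)  # real pipeline: restricted builtins + timeout
            fn = namespace[spec["function"]]
            for ex in examples:
                got = fn(**ex["input"])
                if got != ex["output"]:
                    raise AssertionError(
                        f"{ex['input']} -> {got!r}, expected {ex['output']!r}"
                    )
            return code  # all spec examples pass: safe to cache
        except Exception as err:
            # Feed the error back so the next attempt can correct it.
            feedback = f"Previous attempt failed: {err}"
    raise RuntimeError(f"no passing candidate in {max_attempts} attempts")
```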

Attack Surface Reduction

Across 5 real Python applications:

Application      Transitive deps   With Conjure   LOC reduction
Flask blog       13                0              15×
FastAPI service  15                0              17×
CLI tool         5                 0              —
Web scraper      17                0              20×
File sync        8                 0              —

Translation Feasibility

52% of a typical Flask application's imports can be automatically replaced by Conjure specs.

CLI

# Invoke a function
conjure invoke base64_encode -k data=hello

# Pre-generate all specs
conjure build --spec-dir specs/stdlib

# Evaluate specs
conjure eval --spec-dir specs/stdlib --output results.md

# Analyze a project for translation
conjure translate my_project/ --output conjure_specs/

# Compare attack surface
conjure audit --packages "flask,requests,pyyaml" --specs specs/stdlib

Model

Conjure uses Qwen3.5-9B-OptiQ-4bit via MLX on Apple Silicon:

  • 5 GB model memory (4-bit mixed-precision quantization)
  • Instruct mode with recommended sampling (temp=0.6, top_p=0.95)
  • Runs entirely on-device — no API keys, no cloud, no network

Included Specs

30 curated stdlib specs covering common library patterns:

Category Specs
Encoding base64_encode, base64_decode, hex_encode, hex_decode, rot13
String slugify, camel_to_snake, snake_to_camel, reverse_words
Collections group_by, flatten, flatten_dict, chunk, deduplicate
Data parsing csv_parse, json_parse, url_parse
Math gcd, fibonacci, statistics, levenshtein
Algorithms binary_search, run_length_encode, matrix_multiply
Other is_palindrome, word_count, truncate, frequency_count, deep_get

Plus 970 additional specs across 20 categories available in the full distribution.

Limitations

  • Complex algorithms: SHA-256, recursive-descent parsers — too complex for reliable generation from a 9B model
  • Apple Silicon only: Requires MLX (macOS with M-series chip)
  • Cold start: First generation takes 3-100s (cached permanently after)
  • Pure functions only: Stateful protocols, FFI, OS access are out of scope

License

MIT

Download files

Download the file for your platform.

Source Distribution

conjure_llm-0.0.1.tar.gz (41.4 kB)


Built Distribution


conjure_llm-0.0.1-py3-none-any.whl (49.8 kB)


File details

Details for the file conjure_llm-0.0.1.tar.gz.

File metadata

  • Download URL: conjure_llm-0.0.1.tar.gz
  • Upload date:
  • Size: 41.4 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.13.7

File hashes

Hashes for conjure_llm-0.0.1.tar.gz
Algorithm Hash digest
SHA256 6f6ee9b72aacb40848402299bea5c6f22b9e056108345fb958878a69f771ab3a
MD5 e31300a700d565158388d1efc081c9e2
BLAKE2b-256 69a75c3d6b80a609dd0c0cf351c9ce9fa1900f88ea94c0b854a9416ccc8e3893


File details

Details for the file conjure_llm-0.0.1-py3-none-any.whl.

File metadata

  • Download URL: conjure_llm-0.0.1-py3-none-any.whl
  • Upload date:
  • Size: 49.8 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.13.7

File hashes

Hashes for conjure_llm-0.0.1-py3-none-any.whl
Algorithm Hash digest
SHA256 1b072d642d66f1787c1d43c97bffb867a36726360f978099f503a542eeb207c0
MD5 f800292fc8a40b7bc8d601d8ace299fe
BLAKE2b-256 7c92cf2adb78b5a0b6baf6e996ff7fdd1b11ca6e56745f9b432f6f51724eb1d5

