woyn
The open source AI capability substrate.
Status: v0.1.0a6 — early alpha. Seven capability verbs (chat, retrieve, transcribe, crawl, generate_image, synthesize, extract), a five-store memory layer, a policy-driven router, a frugality engine (cache + cascade + optional compression), and seven default adapters. See
CHANGELOG.md for what's in each release.
What's new in v0.1.0a6 (Plan 3)
The frugality engine and policy-driven router are live. Two new capabilities:
# Text-to-speech
woyn synthesize "Welcome to woyn" -o welcome.wav
# Document extraction (PDF, docx, html, …)
woyn extract paper.pdf
# Inspect or clear the frugality cache
woyn cache clear
The router now scores adapters per call against your active policy
(balanced, frugal, quality, or regulated). With policy = "frugal"
in woyn.toml, woyn picks the cheapest adapter that satisfies hard
constraints; with quality, it ignores cost and picks the strongest.
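For example, a minimal woyn.toml sketch of that switch, assuming the key sits at the top level of the file (the four policy names come from the list above; the exact section the key lives under may differ in your install):
# woyn.toml (sketch only)
policy = "frugal"   # or "balanced", "quality", "regulated"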
Optional prompt compression via pip install woyn[frugal] (LLMLingua).
What's new in v0.1.0a4 (Plan 2.1)
woyn install now bootstraps everything woyn needs in one shot:
# Lightweight default — chat, retrieve, transcribe, crawl all working
woyn install # installs Ollama, pulls phi3:mini, installs Chromium for crawl
# Heavy opt-ins (explicit consent each)
woyn install --with image # adds torch + diffusers for woyn image (~3GB)
# See what's set up
woyn doctor # reports adapter health AND bootstrap component status
Friendlier errors:
- woyn crawl without a browser → clear message pointing to woyn install
- woyn image without the [image] extra → clear message pointing to woyn install --with image
- The diffusers adapter no longer appears in woyn list adapters until torch + diffusers are actually installed.
What's new in v0.1.0a3
- Three new capability verbs:
  - transcribe — audio → text via faster-whisper (CTranslate2-backed Whisper).
  - crawl — URL → Markdown via crawl4ai.
  - generate_image — prompt → PNG bytes via Hugging Face diffusers (optional, install with pip install woyn[image]).
- Memory layer (woyn.memory) with five stores behind one MemoryAPI façade:
  - episodic (SQLite, time-ordered events)
  - documental (Chroma + blob storage, semantic search over docs)
  - semantic (SQLite entity/relation graph)
  - procedural (versioned prompt templates on disk)
  - personal (LoRA adapter registry stub)
- Plus a memory_aware decorator for transparent context injection / persistence.
- CLI: woyn transcribe FILE, woyn crawl URL, woyn image PROMPT, and woyn memory inspect|forget|export|import.
- 132 unit tests passing, 3 gated integration tests skipped by default.
What is this?
woyn is the layer between an application or agent's intent ("summarize this document") and the open-source AI tools that fulfill it. One unified Python SDK and CLI, fronting a growing set of pluggable adapters (Ollama, Chroma, and many more in upcoming plans). Zero required subscriptions, zero required API keys, designed to run on any laptop with 16 GB RAM or more.
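As a sketch of that flow, the snippet below drives the "summarize this document" intent through the SDK, using only the calls shown in the Python SDK section further down; the file path is a placeholder:
import asyncio
from woyn.capabilities import chat
from woyn.adapters import register_default_adapters

# Register the bundled adapters (Ollama, Chroma, ...) before making calls.
register_default_adapters()

async def main():
    # "report.md" is a placeholder path for whatever document you want summarized.
    text = open("report.md", encoding="utf-8").read()
    out = await chat.complete(f"Summarize this document:\n\n{text}")
    print(out.text)

asyncio.run(main())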
Install
pip install woyn
For development:
git clone https://github.com/your-fork/woyn
cd woyn
python -m venv .venv && source .venv/bin/activate
pip install -e ".[dev]"
Quickstart
# 1. Install woyn
pip install woyn
# 2. Bootstrap Ollama and pull a default model (one-time, ~3 min on first run)
woyn install # macOS: requires Homebrew. Linux: runs the official Ollama installer.
# 3. Use it
woyn chat "Why is the sky blue?"
woyn chat "Write a haiku about winter" --stream
Discovery
woyn version
woyn list capabilities
woyn list adapters
woyn doctor # health-check every registered adapter
Retrieve
# Ingest a directory of .md / .txt files into a named collection:
woyn retrieve ingest ./my-docs --collection notes
# Search:
woyn retrieve search "what did I write about elasticity?" --collection notes --k 5
Python SDK
import asyncio
from woyn.capabilities import chat, retrieve
from woyn.adapters import register_default_adapters

# Make the default adapters (Ollama, Chroma, ...) available to the capability layer.
register_default_adapters()

async def main():
    # Chat: prompt in, completion out.
    out = await chat.complete("Why is the sky blue?")
    print(out.text)

    # Retrieve: ingest (id, text) pairs into a collection, then search it.
    await retrieve.ingest("notes", [("d1", "rayleigh scattering")])
    found = await retrieve.search("why blue", "notes", k=3)
    print(found.documents)

asyncio.run(main())
Manual setup (alternative to woyn install)
If you'd rather install Ollama yourself:
# macOS
brew install ollama
# Linux
curl -fsSL https://ollama.com/install.sh | sh
# Then
ollama serve &
ollama pull phi3:mini
woyn chat "hello"
Tests
pytest -q # unit tests
pytest --integration -q # also run real-service tests (needs Ollama)
License
Apache 2.0
Download files
File details
Details for the file woyn-0.1.0a6.tar.gz.
File metadata
- Download URL: woyn-0.1.0a6.tar.gz
- Upload date:
- Size: 40.9 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.12.4
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | b45e4ea8aec8648c7782a59dae3167bdf591af0bb48a387dfb0d1277b4a44560 |
| MD5 | efeb4a9f365fe849dc32d53962e58c70 |
| BLAKE2b-256 | 866cc7054914cf84e85ef2e61d67fbf82a2087b06095e0065ac92fc66e809ba9 |
File details
Details for the file woyn-0.1.0a6-py3-none-any.whl.
File metadata
- Download URL: woyn-0.1.0a6-py3-none-any.whl
- Upload date:
- Size: 62.7 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.12.4
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | 8cbab740d7992a41cb84596953adb0cd5a048fa62ca6d15037f653a60cc4fbe1 |
| MD5 | be1ef707ad6c763982cd62ee7fa993c3 |
| BLAKE2b-256 | dc5581b7e9d6394b6ba3dc8ae139852ee4da3ee3274fa86282fb077e33a8aa2d |