
The token-saving proxy and context-compression engine for AI coding agents. Reduce LLM API costs by 80% while providing full codebase context to Cursor, Claude Code, and Copilot.

Entroly Daemon

Your AI is blind. Fix it in 30 seconds — then watch it teach itself.

Claude, Cursor, Copilot, Codex, and MiniMax only see 5% of your codebase. Entroly gives them a 2M-token brain for 90% less — a daemon that continuously self-evolves, compressing your context and dreaming up new skills with one obsession: saving more of your tokens and sharpening every answer. The first AI runtime whose learning is provably token-negative.

npm install entroly-wasm && npx entroly-wasm  |  Live demo →


What you actually get

                           Without Entroly          With Entroly
Files the AI sees          5–10                     Your entire repo
Tokens per request         ~186,000                 9,300–55,000
Cost per 1K requests       ~$560                    $28–$168
Effective context window   200K                     ~2M (via variable-resolution compression)
Learning cost over time    Grows (tokens)           $0 (provably token-negative)
Setup                      Hours of prompt hacks    30 seconds

Critical files go in full. Supporting files as signatures. The rest as references. Your AI gets the whole picture. You pay for almost none of it.
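The three tiers can be sketched in a few lines of Python. Everything below — the function name, the relevance scores, the tier budgets — is illustrative, not Entroly's actual API:

```python
# Hypothetical sketch of variable-resolution packing: the most relevant
# files go in full, the next tier as signatures, the rest as references.

def pack_context(files, relevance, full_budget=3, sig_budget=10):
    """files: {path: source}; relevance: {path: score in [0, 1]}."""
    ranked = sorted(files, key=lambda p: relevance.get(p, 0.0), reverse=True)
    tiers = {}
    for i, path in enumerate(ranked):
        if i < full_budget:
            tiers[path] = ("full", files[path])             # verbatim source
        elif i < full_budget + sig_budget:
            sig = [l for l in files[path].splitlines()
                   if l.lstrip().startswith(("def ", "class "))]
            tiers[path] = ("signature", "\n".join(sig))     # API surface only
        else:
            tiers[path] = ("reference", path)               # just the path
    return tiers

tiers = pack_context(
    {"auth.py": "def login(u):\n    pass\n",
     "db.py": "class DB:\n    pass\n",
     "util.py": "def helper():\n    pass\n"},
    {"auth.py": 0.9, "db.py": 0.5, "util.py": 0.1},
    full_budget=1, sig_budget=1)
```

The point of the sketch: token spend falls with relevance, but every file still appears somewhere in the context.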


Live proof: the self-evolution loop just ran

Not a roadmap. This trace is from this repo's vault, right now:

[detect]     gap observed → entity="auth", miss_count=3
[synthesize] StructuralSynthesizer ($0, deterministic, no LLM)
[benchmark]  skill=ddb2e2969bb0 → fitness 1.0 (1 pass / 0 fail, 338 ms)
[promote]    status: draft → promoted
[registry]   .entroly/vault/evolution/registry.md updated
[spend]      $0.0000 — invariant C_spent ≤ τ·S(t) holds

Every other self-improving agent burns tokens to learn. Entroly's evolution ledger stays at $0 because the synthesizer reads your code graph, not an LLM.


Why it costs $0 to get smarter — the 3 Pillars

1. Token Economy — A ValueTracker measures lifetime savings S(t). The evolution budget is strictly capped:

C_spent(t)  ≤  τ · S(t)       (τ = 5%)

The runtime is mathematically incapable of costing more to improve than it saves.
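Assuming dollar-denominated savings and spend, the invariant reduces to a one-line gate before any evolution spend. This is a sketch of the idea, not Entroly's actual ValueTracker:

```python
# Hedged sketch of the budget invariant C_spent(t) <= tau * S(t).

class ValueTracker:
    def __init__(self, tau=0.05):
        self.tau = tau
        self.saved = 0.0    # S(t): lifetime $ saved by compression
        self.spent = 0.0    # C_spent(t): $ spent on evolution

    def record_savings(self, dollars):
        self.saved += dollars

    def try_spend(self, dollars):
        # Refuse any spend that would break C_spent <= tau * S(t).
        if self.spent + dollars > self.tau * self.saved:
            return False
        self.spent += dollars
        return True

vt = ValueTracker()
vt.record_savings(100.0)      # $100 saved so far -> evolution budget is $5
assert vt.try_spend(4.0)      # within budget
assert not vt.try_spend(2.0)  # 4 + 2 > 5 -> rejected, invariant holds
```

Because the gate runs before the spend, the inequality can never be violated, only approached.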

2. Structural Induction ($0) — Before any token is touched, a deterministic synthesizer reads the AST, dependency edges, and entropy gradient of your code and emits a working tool. No LLM. No embeddings. No cloud.
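A minimal illustration of what $0 synthesis can look like, using only Python's stdlib ast module. The emitted "skill" and its shape are hypothetical, not Entroly's synthesizer output:

```python
# Hypothetical zero-token structural induction: derive a symbol-lookup
# skill purely from the AST of a module. No LLM call, no embeddings.

import ast

SOURCE = '''
def login(user, password): ...
def logout(session): ...
class AuthStore: ...
'''

def synthesize_symbol_skill(source):
    tree = ast.parse(source)
    symbols = {node.name: type(node).__name__
               for node in ast.walk(tree)
               if isinstance(node, (ast.FunctionDef, ast.ClassDef))}
    # The emitted skill is a deterministic closure over the code graph.
    def skill(query):
        return sorted(n for n in symbols if query.lower() in n.lower())
    return skill

skill = synthesize_symbol_skill(SOURCE)
```

Everything the skill knows was read from structure, so producing it costs zero tokens by construction.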

3. Dreaming Loop — When idle for >60 s, the system generates synthetic queries, perturbs its scoring weights, and self-plays against benchmarks. Strict improvements are kept; regressions are discarded. You open your laptop in the morning to a smarter runtime.
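The keep-strict-improvements, discard-regressions rule can be sketched as hill-climbing a single toy weight. The fitness function and step size below are stand-ins for Entroly's real benchmarks and scoring weights:

```python
# Toy dream cycle: perturb a weight, benchmark it, keep only strict wins.

import random

def fitness(w):
    return -(w - 0.7) ** 2   # toy benchmark: best score at w = 0.7

def dream(weight, cycles=200, seed=42):
    rng = random.Random(seed)
    best = fitness(weight)
    for _ in range(cycles):
        candidate = weight + rng.uniform(-0.05, 0.05)  # perturb
        score = fitness(candidate)
        if score > best:          # keep strict improvements only;
            weight, best = candidate, score
        # regressions are silently discarded
    return weight

w = dream(0.2)
```

Because a candidate is only adopted on a strict benchmark win, the returned weight can never score worse than the starting one — the same monotonicity the dreaming loop relies on.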


Install

npm install entroly-wasm && npx entroly-wasm
# or
pip install entroly && entroly go

That's it. It detects your IDE, wires itself into Claude/Cursor/Copilot/Codex/MiniMax, and starts compressing. Both runtimes have full parity — budget invariant, agentskills.io export, the three chat gateways, and a shared on-disk vault so skills promoted by one runtime are visible to the other.

Node:

const { VaultObserver, TelegramGateway, ValueTracker, exportAgentSkills } = require('entroly-wasm');

const obs = new VaultObserver('.entroly/vault');
const token = process.env.ENTROLY_TG_TOKEN;      // from @BotFather
const chatId = process.env.ENTROLY_TG_CHAT_ID;
new TelegramGateway({ token, chatId }).attach(obs).start();

Python:

import os

from entroly.evolution_daemon import EvolutionDaemon
from entroly.integrations.telegram_gateway import TelegramGateway

daemon = EvolutionDaemon()  # watches the shared .entroly/vault
daemon.start()

token = os.environ["ENTROLY_TG_TOKEN"]        # from @BotFather
chat_id = os.environ["ENTROLY_TG_CHAT_ID"]
TelegramGateway(token, chat_id).attach(daemon).start()

Watch the autonomy live

The daemon is useful even when silent, but seeing it move is what makes it real. Three chat gateways ship in the box — Telegram, Discord, Slack — with zero extra dependencies on either runtime.

# 1. Set one (or all three) of these
export ENTROLY_TG_TOKEN=...          # from @BotFather
export ENTROLY_TG_CHAT_ID=...
export ENTROLY_DISCORD_WEBHOOK=...   # Discord channel → Integrations → Webhooks
export ENTROLY_SLACK_WEBHOOK=...     # Slack app → Incoming Webhooks

Node (native fetch, no deps):

node node_modules/entroly-wasm/js/gateways.js

Python (stdlib urllib, no deps):

python -m entroly.integrations.telegram_gateway
python -m entroly.integrations.discord_gateway
python -m entroly.integrations.slack_gateway

Every gap detection, synthesis, promotion, and dream-cycle win streams to your chat. Telegram is 2-way — /status, /skills, /gaps, /dream.
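A sketch of what 2-way dispatch looks like for those commands. Only the command names come from above; the handler bodies, vault shape, and transport are hypothetical (the real gateway talks to api.telegram.org over stdlib urllib):

```python
# Minimal command dispatcher in the style of the Telegram gateway.
# Vault shape and replies are illustrative stand-ins.

def make_dispatcher(vault):
    handlers = {
        "/status": lambda: f"skills={len(vault['skills'])} gaps={len(vault['gaps'])}",
        "/skills": lambda: "\n".join(vault["skills"]) or "(none)",
        "/gaps":   lambda: "\n".join(vault["gaps"]) or "(none)",
        "/dream":  lambda: "dream cycle queued",
    }
    def dispatch(text):
        cmd = text.split()[0] if text.strip() else ""
        handler = handlers.get(cmd)
        return handler() if handler else "unknown command"
    return dispatch

dispatch = make_dispatcher({"skills": ["ddb2e2969bb0"], "gaps": []})
```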


Portable skills (agentskills.io)

Promoted skills aren't locked in Entroly. Export to the open agentskills.io v0.1 spec and any compatible runtime can consume them:

# Node
node node_modules/entroly-wasm/js/agentskills_export.js ./dist/agentskills

# Python
python -m entroly.integrations.agentskills ./dist/agentskills

Every exported skill carries origin.token_cost: 0.0 — the zero-token provenance is portable too.


Works with your stack

Claude Code • Cursor • Copilot • Codex CLI • MiniMax • Windsurf • Cody • OpenAI API • Anthropic API • LangChain • LlamaIndex • MCP-native


Deep dive

Architecture, benchmarks, PRISM RL internals, 3-resolution compression, provenance guarantees, RAG comparison, full API → docs/DETAILS.md


Stop paying for tokens your AI wastes. Start running an AI that teaches itself.
npm install entroly-wasm && npx entroly-wasm

Discussions • Issues • MIT License
