
The token-saving proxy and context-compression engine for AI coding agents. Reduce LLM API costs by 80% while providing full codebase context to Cursor, Claude Code, and Copilot.

Project description


Entroly

Token Savings: up to 95% • Learning Cost: $0 • Rust + WASM • Python 3.10+ • GitHub Action • #1 on MCP Market

Entroly — Cut AI Token Costs by 70–95%

Your AI coding tools only see 5% of your codebase.
Entroly gives them the full picture — for a fraction of the cost.

npm install entroly-wasm && npx entroly-wasm  |  pip install entroly && entroly go  |  Live demo →


The Problem — and the Bottom-Line Impact

Every AI coding tool — Claude, Cursor, Copilot, Codex — has the same blind spot: it only sees 5–10 files at a time. The other 95% of your codebase is invisible. This causes hallucinated APIs, broken imports, missed dependencies, and wasted developer hours fixing AI-generated mistakes.

Models keep getting bigger — Claude Opus 4.7 just dropped with even more capability and even higher per-token costs. Larger context windows don't solve the problem; they make it worse. You're paying for 186,000 tokens per request — most of which is duplicated boilerplate.

Entroly fixes both problems in 30 seconds. It compresses your entire codebase into the AI context window at variable resolution, so your AI sees everything — and you pay for almost none of it.


What Changes on Day 1

Metric | Before Entroly | After Entroly
Files visible to AI | 5–10 | Your entire codebase
Tokens per request | ~186,000 | 9,300–55,800
Monthly AI spend (at 1K req/day) | ~$16,800 | $840–$5,040
AI answer accuracy | Incomplete, often hallucinated | Dependency-aware, correct
Developer time fixing AI mistakes | Hours/week | Near zero
Setup | Days of prompt engineering | 30 seconds

ROI example: A 10-person team spending $15K/month on AI API calls saves $10K–$14K/month on day 1. Entroly pays for itself in the first hour. (It's free and open-source, so it actually pays for itself instantly.)


What Your Competitors Already Know

The teams adopting Entroly today aren't just saving money — they're compounding an advantage your team can't catch up to.

  • Week 1: Their AI sees 100% of their codebase. Yours sees 5%. They ship faster.
  • Month 1: Their runtime has learned their codebase patterns. Yours is still hallucinating imports.
  • Month 3: Their installation is plugged into the federation — absorbing optimization strategies from thousands of other teams worldwide. Yours doesn't know this exists.
  • Month 6: They've saved $80K+ in API costs. That budget went into hiring. You're still explaining to finance why the AI bill keeps growing.

Every day you wait, the gap widens. The federation effect means early adopters get smarter faster — and that advantage compounds.


How It Works (30 Seconds)

npm install entroly-wasm && npx entroly-wasm
# or
pip install entroly && entroly go

That's it. Entroly auto-detects your IDE, connects to Claude/Cursor/Copilot/Codex/MiniMax, and starts optimizing. No configuration. No YAML. No embeddings.

What happens under the hood:

  1. Index — Maps your entire codebase in <2 seconds
  2. Score — Ranks every file by information density
  3. Select — Picks the mathematically optimal subset for your token budget
  4. Deliver — Critical files go in full, supporting files as signatures, everything else as references
  5. Learn — Tracks what works, gets smarter over time

Your AI now sees 100% of your codebase. You pay for 5–30% of the tokens.
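The score-and-select steps above can be sketched as a greedy pass over files ranked by information density. This is an illustrative sketch only, not Entroly's actual algorithm: the file names, the three tiers, and the assumption that a signature costs ~10% of a full file are all hypothetical.

```python
# Sketch: greedy tiered selection under a token budget.
# Tiers mirror the "Deliver" step: full text, signatures, references.

def select_context(files, budget):
    """files: list of (name, tokens, density). Returns the three tiers."""
    ranked = sorted(files, key=lambda f: f[2], reverse=True)  # densest first
    full, signatures, references = [], [], []
    spent = 0
    for name, tokens, density in ranked:
        if spent + tokens <= budget:            # critical: include in full
            full.append(name)
            spent += tokens
        elif spent + tokens // 10 <= budget:    # supporting: signatures (~10% cost)
            signatures.append(name)
            spent += tokens // 10
        else:                                   # everything else: a bare reference
            references.append(name)
    return full, signatures, references

# Illustrative inputs: (file, token count, density score)
files = [("auth.py", 4000, 0.9), ("db.py", 6000, 0.7), ("util.py", 8000, 0.3)]
full, sigs, refs = select_context(files, budget=8000)
```

With an 8,000-token budget, the densest file ships in full and the rest degrade gracefully to signatures rather than being dropped entirely.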


The Competitive Edge — What Sets Entroly Apart

🧠 It Gets Smarter Without Costing You More

Most "self-improving" AI tools burn tokens to learn — your bill grows with their intelligence. Entroly's learning loop is provably token-negative: it cannot spend more on learning than it saves you.

The math is simple and auditable:

Learning budget ≤ 5% × Lifetime savings

Day 1: 70% token savings. Day 30: 85%+. Day 90: 90%+. The improvement costs you $0.
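One way such an invariant can be enforced is simple bookkeeping: never allow learning spend to exceed the fixed fraction of savings. This is a minimal sketch; only the 5% ratio and the invariant C_spent ≤ τ·S(t) come from the text, and the class and method names are hypothetical.

```python
TAU = 0.05  # learning budget as a fraction of lifetime savings

class LearningBudget:
    """Toy enforcement of C_spent <= TAU * S(t)."""

    def __init__(self):
        self.lifetime_savings = 0.0  # S(t): tokens saved so far
        self.spent = 0.0             # C_spent: tokens spent on learning

    def record_savings(self, tokens):
        self.lifetime_savings += tokens

    def can_spend(self, tokens):
        # The invariant: spending never exceeds 5% of what was saved.
        return self.spent + tokens <= TAU * self.lifetime_savings

    def spend(self, tokens):
        if not self.can_spend(tokens):
            raise RuntimeError("invariant C_spent <= tau*S(t) would be violated")
        self.spent += tokens

b = LearningBudget()
b.record_savings(100_000)  # compression saved 100k tokens
b.spend(4_000)             # within the 5,000-token (5%) budget
```

Because spending is gated on savings already banked, the loop is token-negative by construction: with zero savings, the learning budget is zero.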

🌐 Federated Swarm Learning — The Part That Sounds Like Science Fiction

Now take the Dreaming Loop (the overnight self-optimization cycle) and multiply it by every developer on Earth who runs Entroly.

While you sleep, your daemon dreams — and so do 10,000 others. Each one discovers slightly different tricks for compressing code. Each one shares what it learned — anonymously, privately, no code ever leaves your machine. Each one absorbs what the others found.

You wake up. Your AI is smarter than when you left it. Not because of anything you did — because of what the swarm dreamed.

Your daemon dreams → discovers a better strategy → shares it (anonymously)
     ↓
10,000 other daemons did the same thing last night
     ↓
You open your laptop → your AI already absorbed all of it

Network effect:

  • Every new user makes everyone else's AI better — that installed base can't be forked
  • Your code never moves. Only optimization weights — noise-protected and anonymous
  • Infrastructure cost: $0. It runs on GitHub. No servers. No GPUs. No cloud
# Opt-in — your choice, always
export ENTROLY_FEDERATION=1
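The constraint that only noise-protected optimization weights leave the machine can be sketched as below. This is an assumption-laden illustration, not Entroly's protocol: the additive-noise scheme, the averaging rule, and all names are hypothetical.

```python
import random

def share_weights(local_weights, noise_scale=0.01):
    """Return a noise-protected copy of strategy weights — no code, no paths."""
    return {k: v + random.gauss(0, noise_scale) for k, v in local_weights.items()}

def absorb(local_weights, peer_updates):
    """Average anonymized updates from other daemons into the local weights."""
    merged = dict(local_weights)
    for k in merged:
        peer_vals = [u[k] for u in peer_updates if k in u]
        if peer_vals:
            merged[k] = (merged[k] + sum(peer_vals)) / (1 + len(peer_vals))
    return merged

local = {"dedup_weight": 0.4}
merged = absorb(local, [{"dedup_weight": 0.8}])  # one peer found a better setting
```

The key property the bullets above claim is visible in the shape of the data: the payload is a flat dict of floats, so no source code can travel through it.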

✂️ Response Distillation — Save Tokens on Output Too

LLM responses contain ~40% filler — "Sure, I'd be happy to help!", hedging, meta-commentary. Entroly strips it. Code blocks are never touched.

Before: "Sure! I'd be happy to help. Let me take a look at your code.
         The issue is in the auth module. Hope this helps!"

After:  "The issue is in the auth module."
         → 70% fewer output tokens

Three intensity levels: lite, full, ultra. Enable with one env var.
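A minimal sketch of the distillation idea: strip conversational filler from a reply while leaving fenced code blocks untouched. The filler patterns here are illustrative, not Entroly's actual rule set.

```python
import re

# A tiny, illustrative filler lexicon anchored at line start.
FILLER = re.compile(
    r"^(Sure[!,.]?|I'd be happy to help[.!]?|Let me take a look.*?\.)\s*",
    re.IGNORECASE,
)

def distill(text):
    out, in_code = [], False
    for line in text.splitlines():
        if line.strip().startswith("```"):
            in_code = not in_code     # track fenced code blocks
            out.append(line)
            continue
        if in_code:
            out.append(line)          # code is never touched
            continue
        stripped = line
        while True:                   # peel filler phrases one by one
            new = FILLER.sub("", stripped)
            if new == stripped:
                break
            stripped = new
        if stripped.strip():
            out.append(stripped)
    return "\n".join(out)
```

Running `distill` on the "Before" example above removes the greeting and hedging but leaves the substantive sentence intact.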

🔒 Runs Locally. Your Code Never Leaves Your Machine.

Zero cloud dependencies. Zero data exfiltration risk. Everything runs on your CPU in <10ms. Works in air-gapped and regulated environments — nothing ever phones home.


Works With Your Stack

Tool | Setup
Cursor | entroly init → MCP server
Claude Code | claude mcp add entroly -- entroly
GitHub Copilot | entroly init → MCP server
Codex CLI | entroly wrap codex
Windsurf / Cline / Cody | entroly init
Any LLM API | entroly proxy → HTTP proxy on localhost:9377

Also: OpenAI API • Anthropic API • LangChain • LlamaIndex • MCP-native


Benchmarks

Live Evolution Trace

This is from this repo's vault, not a roadmap:

[detect]     gap observed → entity="auth", miss_count=3
[synthesize] StructuralSynthesizer ($0, deterministic, no LLM)
[benchmark]  skill=ddb2e2969bb0 → fitness 1.0 (1 pass / 0 fail, 338 ms)
[promote]    status: draft → promoted
[spend]      $0.0000 — invariant C_spent ≤ τ·S(t) holds

Accuracy Retention

Compression doesn't hurt accuracy — we measured it (n=100, gpt-4o-mini, Wilson 95% CIs):

Benchmark | Baseline (95% CI) | With Entroly (95% CI) | Retention
NeedleInAHaystack | 100% [83.9–100%] | 100% [83.9–100%] | 100.0%
GSM8K | 85.0% [76.7–90.7%] | 86.0% [77.9–91.5%] | 101.2%
SQuAD 2.0 | 84.0% [75.6–89.9%] | 83.0% [74.5–89.1%] | 98.8%

Confidence intervals overlap on every benchmark — accuracy is statistically indistinguishable from baseline. Reproduce with:

python -m bench.accuracy --benchmark all --model gpt-4o-mini --samples 100
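The intervals in the table are standard Wilson score intervals, which can be checked directly. This sketch reproduces the GSM8K baseline row (85 correct out of n=100); it assumes nothing beyond the published counts.

```python
import math

def wilson_ci(successes, n, z=1.96):
    """Wilson score interval for a binomial proportion at ~95% confidence."""
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return center - half, center + half

lo, hi = wilson_ci(85, 100)  # GSM8K baseline: 85/100 correct
```

Rounding to one decimal recovers the [76.7–90.7%] interval shown in the table.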

CI/CD Integration

Run token cost checks in every PR — catch regressions before they ship:

- uses: juyterman1000/entroly-cost-check@v1

entroly-cost-check GitHub Action


Watch It Run — Live Notifications

Three chat integrations ship in the box. See every gap detection, skill synthesis, and dream-cycle win in real time:

export ENTROLY_TG_TOKEN=...          # Telegram (2-way: /status /skills /gaps /dream)
export ENTROLY_DISCORD_WEBHOOK=...   # Discord
export ENTROLY_SLACK_WEBHOOK=...     # Slack

Portable Skills (agentskills.io)

Skills Entroly creates aren't locked in. Export to the open agentskills.io v0.1 spec:

node node_modules/entroly-wasm/js/agentskills_export.js ./dist/agentskills
python -m entroly.integrations.agentskills ./dist/agentskills

Every exported skill carries origin.token_cost: 0.0 — the zero-cost provenance travels with it.
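An illustrative check (not part of the Entroly CLI) for the zero-cost provenance claim: every exported skill should carry origin.token_cost of 0.0. The skill records and field names beyond origin.token_cost are assumptions.

```python
def verify_zero_cost(skills):
    """Return the names of skills whose provenance is not zero-cost."""
    return [
        s["name"]
        for s in skills
        if s.get("origin", {}).get("token_cost") != 0.0
    ]

# Hypothetical exported skill records
skills = [
    {"name": "dedup-imports", "origin": {"token_cost": 0.0}},
    {"name": "ext-summarize", "origin": {"token_cost": 0.02}},
]
bad = verify_zero_cost(skills)
```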


Full Parity: Python & Node.js

Both runtimes are feature-complete. Same engine, same vault, same learning loop:

Capability | Python | Node.js (WASM)
Context compression | ✅ | ✅
Self-evolution | ✅ | ✅
Dreaming loop | ✅ | ✅
Federation | ✅ | ✅
Response distillation | ✅ | ✅
Chat gateways | ✅ | ✅
agentskills.io export | ✅ | ✅

Deep Dive

Architecture, 21 Rust modules, 3-resolution compression, provenance guarantees, RAG comparison, full CLI reference, Python SDK, LangChain integration → docs/DETAILS.md


Stop paying for tokens your AI wastes. Start running an AI that teaches itself.
npm install entroly-wasm && npx entroly-wasm  |  pip install entroly && entroly go

Discussions • Issues • Apache-2.0 License


Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

entroly-0.8.5.tar.gz (3.4 MB)

Uploaded Source

Built Distribution

If you're not sure about the file name format, learn more about wheel file names.

entroly-0.8.5-py3-none-any.whl (345.4 kB)

Uploaded Python 3

File details

Details for the file entroly-0.8.5.tar.gz.

File metadata

  • Download URL: entroly-0.8.5.tar.gz
  • Upload date:
  • Size: 3.4 MB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.13.8

File hashes

Hashes for entroly-0.8.5.tar.gz
Algorithm Hash digest
SHA256 e44c116e7f6c83a61619930b9f9e1f667882b509d83160f3305fc17c94cb552d
MD5 65074484fc07bcd3b8bfeb7a37f034de
BLAKE2b-256 00572582f174ac6f36673d068528454c5ebe8eeda0d7ec750ca24cdfe069fce0

See more details on using hashes here.

File details

Details for the file entroly-0.8.5-py3-none-any.whl.

File metadata

  • Download URL: entroly-0.8.5-py3-none-any.whl
  • Upload date:
  • Size: 345.4 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.13.8

File hashes

Hashes for entroly-0.8.5-py3-none-any.whl
Algorithm Hash digest
SHA256 824cdd6b28bd50bfa698a49c477ff389fa9ad0251159ad6e90261e9e386cd6eb
MD5 33a3e469d6e3b4374133dd2e93873f0b
BLAKE2b-256 da10c57ed94d5fc0f8ced6e91dee4097698fd8a2fc535de44312def9904e5b19

See more details on using hashes here.
