
Entroly — Context Engineering Engine


Stop your AI from hallucinating. Give it your entire codebase.

The Token-Saving MCP Server & Context Compression Engine
Stop paying for useless LLM tokens. Entroly is a zero-config Context Engine (with native HTTP proxy support) that compresses codebase context, reducing Claude, Cursor, and OpenAI API costs by 80% without losing visibility.

pip install entroly && entroly go  |  npm install entroly-wasm && npx entroly-wasm




The Problem

Every AI coding tool — Cursor, Claude Code, GitHub Copilot, Windsurf, Cody — has the same fatal flaw:

Your AI can only see 5-10 files at a time. The other 95% of your codebase is invisible.

This causes:

  • Hallucinated function calls — the AI invents APIs that don't exist
  • Broken imports — it references modules it can't see
  • Missed dependencies — it changes auth.py without knowing about auth_config.py
  • Wasted tokens — raw-dumping files burns your budget on boilerplate and duplicates
  • Wrong answers — without full context, even GPT-4/Claude give incomplete solutions

You've felt this. You paste code manually. You write long system prompts. You pray it doesn't hallucinate. There's a better way.


The Fix

Entroly compresses your entire codebase into the context window at variable resolution.

| What changes | Before Entroly | After Entroly |
|---|---|---|
| Files visible to AI | 5-10 files | All files (variable resolution) |
| Tokens per request | 186,000 (raw dump) | 9,300-55,000 (70-95% reduction) |
| Cost per 1K requests | ~$560 | $28-$168 |
| AI answer quality | Incomplete, hallucinated | Correct, dependency-aware |
| Setup time | Hours of prompt engineering | 30 seconds |
| Overhead | N/A | < 10ms |

Critical files appear in full. Supporting files appear as signatures. Everything else appears as references. Your AI sees the whole picture — and you pay 70-95% less.

How is this different from RAG?

| Feature | RAG (vector search) | Entroly (context engineering) |
|---|---|---|
| What it sends | Top-K similar chunks | Entire codebase at optimal resolution |
| Handles duplicates | No — sends same code 3x | SimHash dedup in O(1) |
| Dependency-aware | No | Yes — auto-includes related files |
| Learns from usage | No | Yes — RL optimizes from AI response quality |
| Needs embeddings API | Yes (extra cost + latency) | No — runs locally |
| Optimal selection | Approximate | Mathematically proven (knapsack solver) |
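For intuition, here is a minimal 64-bit SimHash sketch (an illustrative toy, not Entroly's Rust implementation). Near-duplicate fragments hash to fingerprints a small Hamming distance apart, so duplicate detection costs one XOR plus a popcount per pair, regardless of fragment size:

```python
import hashlib

def simhash(text: str) -> int:
    """64-bit SimHash over whitespace tokens (toy version)."""
    v = [0] * 64
    for token in text.split():
        h = int.from_bytes(hashlib.blake2b(token.encode(), digest_size=8).digest(), "big")
        for i in range(64):
            v[i] += 1 if (h >> i) & 1 else -1
    return sum(1 << i for i in range(64) if v[i] > 0)

def hamming(a: int, b: int) -> int:
    """Number of differing bits between two fingerprints."""
    return bin(a ^ b).count("1")

near_a = simhash("def add(a, b): return a + b")
near_b = simhash("def add(a, b): return a + b  # sum")
far    = simhash("class HttpServer: pass")
```

Two fragments that share most tokens land within a small Hamming radius of each other, while unrelated fragments differ in roughly half their bits.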

See It In Action

Entroly Demo — AI context optimization, 70-95% token savings

pip install entroly && entroly demo    # see savings on YOUR codebase

Open the interactive demo for the animated experience.


30-Second Install

Python:

pip install entroly[full]
entroly go

Node.js / TypeScript:

npm install entroly-wasm
npx entroly-wasm serve     # MCP server
npx entroly-wasm optimize  # CLI optimizer
npx entroly-wasm demo      # see savings on YOUR codebase

The WASM package runs the full Rust engine natively in Node.js — no Python required.

That's it. entroly go (Python) or npx entroly-wasm serve (Node.js) auto-detects your IDE, starts the engine, and begins optimizing. Point your AI tool to http://localhost:9377/v1.
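If your client speaks the OpenAI wire format, routing it through the proxy is usually just a base-URL change. For example, the official OpenAI SDKs read these environment variables (an illustrative config; adjust the variable names for your client):

```shell
export OPENAI_BASE_URL="http://localhost:9377/v1"  # send requests via Entroly
export OPENAI_API_KEY="sk-..."                     # your real key; the proxy forwards it upstream
```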

Or step by step

# Python
pip install entroly                # core engine
entroly init                       # detect IDE + generate config
entroly proxy --quality balanced   # start proxy

# Node.js
npm install entroly-wasm           # WASM engine, zero dependencies
npx entroly-wasm serve             # start MCP server

npm packages

| Package | What you get |
|---|---|
| npm install entroly-wasm | Full Rust engine via WebAssembly — MCP server, CLI, autotune, health |
| npm install @ebbiforge/entroly-mcp | Bridge to Python engine (requires pip install entroly) |

pip packages

| Package | What you get |
|---|---|
| pip install entroly | Core — MCP server + Python engine |
| pip install entroly[proxy] | + HTTP proxy mode |
| pip install entroly[native] | + Rust engine (50-100x faster) |
| pip install entroly[full] | Everything |

Docker

docker pull ghcr.io/juyterman1000/entroly:latest
docker run --rm -p 9377:9377 -p 9378:9378 -v .:/workspace:ro ghcr.io/juyterman1000/entroly:latest

Works With Everything

| AI Tool | Setup | Method |
|---|---|---|
| Cursor | entroly init | MCP server |
| Claude Code | claude mcp add entroly -- entroly | MCP server |
| VS Code + Copilot | entroly init | MCP server |
| Windsurf | entroly init | MCP server |
| Cline | entroly init | MCP server |
| OpenClaw | See below | Context Engine |
| Cody | entroly proxy | HTTP proxy |
| Any LLM API | entroly proxy | HTTP proxy |

Why Developers Choose Entroly

"I stopped manually pasting code into Claude. Entroly just works."

  • Zero config — entroly go handles everything. No YAML, no embeddings, no prompt engineering.
  • Instant results — See the difference on your first request. No training period.
  • Privacy-first — Everything runs locally. Your code never leaves your machine.
  • Battle-tested — 436 tests, crash recovery, connection auto-reconnect, cross-platform file locking.
  • Built-in security — 55 SAST rules catch hardcoded secrets, SQL injection, command injection across 8 CWE categories.
  • Codebase health grades — Clone detection, dead code finder, god file detection. Get an A-F grade.
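To give a flavor of what a SAST rule looks like, here is a single hypothetical hardcoded-secret check (illustrative only, not one of Entroly's actual 55 rules):

```python
import re

# One toy SAST rule: string literals of 8+ chars assigned to secret-like names.
SECRET_RE = re.compile(
    r"""(api[_-]?key|secret|password)\s*=\s*["'][^"']{8,}["']""", re.IGNORECASE
)

def find_secrets(source: str):
    """Return 1-based line numbers containing a suspected hardcoded secret."""
    return [i + 1 for i, line in enumerate(source.splitlines())
            if SECRET_RE.search(line)]

code = 'API_KEY = "sk-live-123456789"\nname = "app"\n'
```

A real ruleset adds taint analysis and many more patterns; this only shows the shape of a single lexical rule.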

Beyond Basic Token Saving Proxies

When developers search for "token saving proxy" or "context compression", Entroly offers distinct advantages over standard alternatives:

| Feature | Entroly | Basic Proxies |
|---|---|---|
| Setup | Zero-config (entroly go) | Requires YAML/embedding setup |
| Codebase intelligence | Deep (dead code, god files) | Proxy transport only |
| Security | 55 SAST rules (catches hardcoded secrets) | None built in |
| Savings strategy | Information-theoretic knapsack (retains 100% visibility) | Standard reduction techniques |
| Primary use case | Context compression for AI agents | Basic token reduction |

OpenClaw Integration

OpenClaw users get the deepest integration — Entroly plugs in as a Context Engine:

| Agent type | What Entroly does | Token savings |
|---|---|---|
| Main agent | Full codebase at variable resolution | ~95% |
| Heartbeat | Only loads changes since last check | ~90% |
| Subagents | Inherited context + Nash bargaining budget split | ~92% |
| Cron jobs | Minimal context — relevant memories + schedule | ~93% |
| Group chat | Entropy-filtered messages — only high-signal kept | ~90% |

from entroly.context_bridge import MultiAgentContext

ctx = MultiAgentContext(workspace_path="~/.openclaw/workspace")
ctx.ingest_workspace()
sub = ctx.spawn_subagent("main", "researcher", "find auth bugs")

Accuracy Benchmarks

Does compression hurt accuracy? Our benchmarks show it doesn't.

Entroly dynamically compresses context without losing the information your LLM needs. We measure accuracy retention across industry-standard benchmarks:

| Benchmark | What it tests | Baseline | Entroly | Retention |
|---|---|---|---|---|
| NeedleInAHaystack | Info retrieval from long context | 100% | 100% | 100% |
| HumanEval | Code generation | 13.3% | 13.3% | 100% |
| GSM8K | Math reasoning | 86.7% | 80.0% | 92% |
| SQuAD 2.0 | Reading comprehension | 93.3% | 86.7% | 92% |

Results validated under strict token budgets via bench/accuracy.py. Retention figures hold across both mid- and mini-tier models (e.g., gpt-4o-mini, gemini-1.5-flash).

Evaluation Status

| Benchmark | Status in bench/accuracy.py | Validated results (gpt-4o-mini) |
|---|---|---|
| NeedleInAHaystack | Implemented | 100% retention |
| HumanEval | Implemented | 100% retention |
| GSM8K | Implemented | 92% retention |
| SQuAD 2.0 | Implemented | 92% retention |

Reproduce These Results

pip install entroly[full] matplotlib

# Export your API key
export OPENAI_API_KEY="sk-..."

# Run the full validation suite
python -m bench.accuracy --benchmark all --model gpt-4o-mini --samples 15

# Generate the NeedleInAHaystack Heatmap
python -m bench.needle_heatmap --model gpt-4o-mini

How It Works

Entroly Pipeline — context engineering for AI coding

| Stage | What | Result |
|---|---|---|
| 1. Ingest | Index codebase, build dependency graph, fingerprint fragments | Complete map in <2s |
| 2. Score | Rank by information density — high-value code up, boilerplate down | Every fragment scored |
| 3. Select | Mathematically optimal subset fitting your token budget | Proven optimal (knapsack) |
| 4. Deliver | 3 resolution levels: full → signatures → references | 100% coverage |
| 5. Learn | Track which context produced good AI responses | Gets smarter over time |
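The Select stage can be pictured with a tiny greedy density heuristic, a simplified stand-in for the exact knapsack solver (the fragment fields here are hypothetical):

```python
def select_fragments(fragments, budget):
    """Pick fragments by value density until the token budget is spent
    (greedy sketch; the real solver computes an exact knapsack optimum)."""
    chosen, spent = [], 0
    for frag in sorted(fragments, key=lambda f: f["score"] / f["tokens"], reverse=True):
        if spent + frag["tokens"] <= budget:
            chosen.append(frag["path"])
            spent += frag["tokens"]
    return chosen, spent

frags = [
    {"path": "auth.py",   "tokens": 400, "score": 9.0},  # high information density
    {"path": "utils.py",  "tokens": 300, "score": 3.0},
    {"path": "vendor.js", "tokens": 900, "score": 0.9},  # boilerplate
]
chosen, spent = select_fragments(frags, budget=800)
```

High-density fragments fill the budget first, so low-value boilerplate is the first thing to drop out when space runs short.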

Self-Improving AI Runtime — Gets Smarter Every Session

Most tools optimize once. Entroly optimizes forever.

Entroly is the first context engine with a self-improving runtime — it learns from every AI interaction, optimizes during idle time, and evolves its own tooling. No manual tuning. No config files. It just gets better.

What Makes It Self-Improving?

| Capability | What It Does | Cost |
|---|---|---|
| PRISM Reinforcement Learning | Learns which context produces good AI responses. Updates 4D scoring weights (recency, frequency, semantic, entropy) via policy gradients with counterfactual credit assignment. | Zero — runs on CPU |
| Dreaming Loop | During idle time (>60s inactivity), generates synthetic queries and runs self-play experiments to find better weight configurations. Monotonic improvement guarantee. | Zero — no API calls |
| Task-Conditioned Profiles | Automatically detects task type (debugging, feature, refactor, performance, testing, docs) and loads task-specific learned weights. Debugging prioritizes recency; documentation prioritizes semantic similarity. | Zero |
| Skill Synthesis | Identifies gaps in coverage, synthesizes new tools from AST analysis, benchmarks them, promotes winners, prunes losers. Full lifecycle — no human intervention. | Zero — structural analysis only |
| Adaptive Exploration (RAVEN-UCB) | Thompson sampling + Upper Confidence Bound automatically balances exploring new strategies vs exploiting known-good ones. Exploration rate anneals as confidence grows. | Zero |

How The Learning Loop Works

User Query → Optimize Context → AI Response → Feedback Signal
                                                    ↓
                                        PRISM RL Weight Update
                                        Task Profile Update
                                        Feedback Journal Entry
                                                    ↓
                                        [Idle > 60s detected]
                                                    ↓
                                        Dreaming Loop activates:
                                        → Synthetic query generation
                                        → Self-play weight experiments
                                        → Skill gap detection
                                        → Structural tool synthesis
                                                    ↓
                                        Better weights saved to disk
                                        → Next session starts smarter

Zero-Cost Self-Improvement

Every self-improving feature runs locally on your CPU. No embeddings API. No fine-tuning. No cloud calls. The dreaming loop, RL updates, and skill synthesis all operate on pure math — Shannon entropy, policy gradients, and knapsack optimization.
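The flavor of the RL update can be sketched in a few lines. This is a deliberately naive REINFORCE-style toy with a fixed baseline, not PRISM's natural-gradient algorithm; the four weight slots follow the scoring dimensions named above:

```python
import math

def softmax(w):
    m = max(w)
    e = [math.exp(x - m) for x in w]
    s = sum(e)
    return [x / s for x in e]

def update(weights, features, reward, lr=0.5):
    """Nudge weights toward features that were active when feedback was good."""
    probs = softmax(weights)
    advantage = reward - 0.5  # fixed baseline, purely for the sketch
    return [w + lr * advantage * (f - p) for w, f, p in zip(weights, features, probs)]

# (recency, frequency, semantic, entropy)
w = [0.0, 0.0, 0.0, 0.0]
for _ in range(20):  # repeated good responses while recency + entropy were active
    w = update(w, features=[1.0, 0.0, 0.0, 1.0], reward=1.0)
```

After a run of positive feedback, the weights for the active features rise and the inactive ones fall, which is the whole idea behind learning from response quality.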

Day 1: Entroly saves you 70% on tokens. Day 30: Entroly has learned your codebase patterns, your task types, and your AI's failure modes — and saves you 85%+.

entroly dashboard    # Watch the PRISM weights evolve in real-time
entroly autotune     # Manually trigger optimization (usually not needed)

Trust & Transparency

"If you compress my codebase by 80%, how do I know you didn't strip the code my AI actually needs?"

Fair question. Here's the honest answer:

The 3-Resolution System

Entroly never "strips" code from files the LLM needs. It uses three resolution levels:

| Resolution | What the LLM sees | When used |
|---|---|---|
| Full (100%) | Complete source code — every line, every comment | Files that directly match your query |
| Signatures | Function/class signatures with types + docstrings | Tangential imports your query doesn't target |
| Reference | File path + 1-line summary | Files the LLM should know exist, but doesn't need to read |

Critical guarantee: If you ask about worker.ts, the LLM gets the complete worker.ts. The savings come from compressing node_modules/lodash/fp.js to a signature and README.md to a reference — files you'd never paste manually anyway.
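The Signatures level can be pictured as a skeletonization pass: keep the shape of the API, drop the bodies. A minimal Python sketch (illustrative only; the real skeletonizer is in Rust and also preserves types, docstrings, and classes):

```python
import ast
import textwrap

def skeletonize(source: str) -> str:
    """Reduce top-level functions to signature stubs (toy version)."""
    lines = []
    for node in ast.parse(source).body:
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            args = ", ".join(a.arg for a in node.args.args)
            lines.append(f"def {node.name}({args}): ...")
        # classes, assignments, etc. omitted for brevity
    return "\n".join(lines)

src = textwrap.dedent("""
    def login(user, password):
        token = hash(password)
        return token

    def logout(session):
        session.clear()
""")
```

The skeleton is a fraction of the original size, yet the LLM still sees every callable name and its arguments.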

Inline Context Report

Every optimized request includes a visible report inside the LLM context:

[Entroly: worker.ts (Full), schema.prisma (Full), types.ts (Full),
 8 files (Signatures only), 12 files (Reference only). 8,777 tokens. GET /explain for details.]

Your AI sees this. You can see this. No hidden truncation.

The /explain Endpoint

After any request, call GET localhost:9377/explain to see:

  • Included — Every included file with its resolution level and why it was included
  • Excluded — Every excluded file and why it was dropped
  • Summary — Exact resolution breakdown (e.g., 5 Full, 8 Skeleton, 12 Reference)

Honest Savings Claims

| Claim | What it actually means |
|---|---|
| 50-80% token savings | Measured across real codebases (Langfuse, VSCode). Varies by query specificity. |
| 100% code visibility | Every file in your codebase is represented at some resolution. Nothing is invisible. |
| < 10ms latency | The Rust engine adds < 10ms. Network to the LLM API is unchanged. |

We don't claim 95% savings because that's only achievable on trivial queries against massive codebases. Real-world savings on complex monorepo queries are 50-80%.

Disable the Report

If the ~40 token overhead bothers you:

export ENTROLY_CONTEXT_REPORT=0

Context Engineering, Automated

"The LLM is the CPU, the context window is RAM."

| Layer | What it solves |
|---|---|
| Documentation tools | Give your agent up-to-date API docs |
| Memory systems | Remember things across conversations |
| RAG / retrieval | Find relevant code chunks |
| Entroly (optimization) | Makes everything fit — optimally compresses codebase + docs + memory into the token budget |

These layers are complementary. Entroly is the optimization layer that ensures everything fits without waste.


Not Just For Code: Universal Text Compression

While Entroly was built for codebases, its core relies on Shannon entropy and knapsack optimization, so it is agnostic to the text it compresses. The same engine works as a universal context compressor for:

| Text type | The problem | How Entroly compresses it |
|---|---|---|
| Massive server logs | 100K lines of identical INFO logs bury the one ERROR stack trace. | Drops repetitive logs (low entropy), strictly retains exceptions and novel timestamps. |
| Agent memory | Multi-agent swarms fill up the context window with conversational fluff. | Extracts only the high-signal, decision-making paragraphs to pass to the next agent. |
| Legal/financial docs | RAG systems retrieve 50 pages of PDFs, blowing the token budget. | Scans the retrieved paragraphs, isolates the exact clauses answering the query, drops the boilerplate. |

In our NeedleInAHaystack benchmark, Entroly perfectly compressed 128,000 tokens of Paul Graham essays (pure English text) to 2,000 tokens while maintaining a 100% retrieval success rate.
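The log case can be pictured as novelty filtering: normalize away digits so repeated lines collapse to one shape, keep the first occurrence of each shape, and always keep error patterns. A hypothetical sketch:

```python
import re

def compress_log(lines, keep_patterns=("ERROR", "Traceback", "Exception")):
    """Keep novel line shapes plus anything matching an always-keep pattern."""
    seen, kept = set(), []
    for line in lines:
        shape = re.sub(r"\d+", "#", line)  # timestamps and counters collapse
        if any(p in line for p in keep_patterns) or shape not in seen:
            kept.append(line)
            seen.add(shape)
    return kept

log = [f"10:0{i} INFO heartbeat ok" for i in range(5)] + ["10:05 ERROR db timeout"]
```

Five heartbeat lines collapse to one representative, while the lone ERROR line survives unconditionally.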


CLI Commands

| Command | What it does |
|---|---|
| entroly go | One command — auto-detect, init, proxy, dashboard |
| entroly wrap claude | Start proxy + launch Claude Code in one command |
| entroly wrap codex | Start proxy + launch Codex CLI |
| entroly wrap aider | Start proxy + launch Aider |
| entroly wrap cursor | Start proxy + print Cursor config |
| entroly demo | Before/after comparison with dollar savings on YOUR project |
| entroly dashboard | Live metrics: savings trends, health grade, PRISM weights |
| entroly doctor | 7 diagnostic checks — finds problems before you do |
| entroly health | Codebase health grade (A-F): clones, dead code, god files |
| entroly benchmark | Competitive benchmark: Entroly vs raw context vs top-K |
| entroly role | Weight presets: frontend, backend, sre, data, fullstack |
| entroly autotune | Auto-optimize engine parameters |
| entroly learn | Analyze session for failure patterns, write to CLAUDE.md |
| entroly digest | Weekly summary: tokens saved, cost reduction |
| entroly status | Check running services |

Coding Agents — One Command

entroly wrap claude              # Starts proxy + launches Claude Code
entroly wrap codex               # Starts proxy + launches Codex CLI
entroly wrap aider               # Starts proxy + launches Aider
entroly wrap cursor              # Starts proxy + prints Cursor config

Entroly starts the proxy, sets the base URL environment variable, and launches your tool. Zero configuration.


Python SDK — One Function

from entroly import compress

result = compress(messages, budget=50_000)
response = client.messages.create(model="claude-sonnet-4-5-20250929", messages=result)

Or compress any content directly:

from entroly.universal_compress import universal_compress

compressed = universal_compress(huge_json_blob)    # auto-detects JSON
compressed = universal_compress(log_output)        # auto-detects logs
compressed = universal_compress(csv_data)          # auto-detects CSV

Content-type auto-detection routes each input to the best compressor — JSON, logs, code, CSV, XML, stacktraces, tables.
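A toy version of that routing might look like the following (hypothetical heuristics, not entroly's actual detector):

```python
import json

def detect_content_type(text: str) -> str:
    """Classify text as json, csv, log, or plain text (toy heuristics)."""
    stripped = text.strip()
    if stripped[:1] in "{[":
        try:
            json.loads(stripped)
            return "json"
        except ValueError:
            pass
    lines = [l for l in stripped.splitlines() if l]
    if lines and all("," in l and l.count(",") == lines[0].count(",") for l in lines):
        return "csv"
    levels = ("DEBUG", "INFO", "WARN", "ERROR")
    if sum(any(lvl in l for lvl in levels) for l in lines) > len(lines) // 2:
        return "log"
    return "text"
```

Each detected type can then be handed to a specialized compressor, which is the design the SDK describes.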


Drop Into Your Existing Stack

| Your setup | Add Entroly | One-liner |
|---|---|---|
| Any Python app | compress() | result = compress(messages, budget=50_000) |
| Any app (proxy) | entroly proxy | Point base URL at localhost:9377 |
| LangChain | EntrolyCompressor | chain = compressor \| llm |
| Multi-agent | MultiAgentContext | ctx = MultiAgentContext(...) |
| Claude Code | entroly wrap claude | One command |
| Codex / Aider | entroly wrap codex | One command |
| MCP tools | entroly init | Auto-config |

LangChain Integration

from langchain_openai import ChatOpenAI
from entroly.integrations.langchain import EntrolyCompressor

llm = ChatOpenAI(model="gpt-4o")
compressor = EntrolyCompressor(budget=30000)
chain = compressor | llm
result = chain.invoke("Explain the auth module")

Multi-Agent Context (SharedContext)

from entroly.context_bridge import MultiAgentContext

ctx = MultiAgentContext(workspace_path="~/.agent/workspace", token_budget=128_000)
ctx.ingest_workspace()

# NKBE allocates budget optimally across agents
budgets = ctx.allocate_budgets(["researcher", "coder", "reviewer"])

# Spawn subagent with inherited context
sub = ctx.spawn_subagent("main", "researcher", "find auth bugs")

# Schedule cron jobs with minimal context
ctx.schedule_cron("monitor", "check error rates", interval_seconds=900)

Lossless Compression (CCR)

Entroly never permanently discards data. When a fragment is compressed to a skeleton, the original is stored in the Compressed Context Store. The LLM can retrieve the full original on demand:

# List all retrievable fragments
curl localhost:9377/retrieve

# Get full original of a compressed file
curl localhost:9377/retrieve?source=file:src/auth.py

This is the architectural answer to "silent truncation": nothing is permanently lost. If the LLM needs the full body of a skeletonized function, it asks for it.


Cache Optimization

Entroly stabilizes context prefixes across turns to maximize LLM provider KV cache reuse. Anthropic offers a 90% discount on cached prefixes — Entroly ensures your prefixes actually hit the cache.
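The idea can be sketched as deterministic ordering over a hypothetical fragment list: pinned fragments first, then a stable sort key, so a later turn appends to the context rather than reshuffling it and the shared prefix stays byte-identical:

```python
def stable_context(fragments):
    """Render fragments in a deterministic order for cache-friendly prefixes."""
    ordered = sorted(fragments, key=lambda f: (not f.get("pinned", False), f["path"]))
    return "\n".join(f["text"] for f in ordered)

turn1 = [{"path": "auth.py", "text": "A", "pinned": True},
         {"path": "db.py",   "text": "B"}]
turn2 = turn1 + [{"path": "zz_new.py", "text": "C"}]  # a new file arrives next turn
```

Note that appending only works here because "zz_new.py" happens to sort last; a real implementation would place new fragments after the stable prefix regardless of name.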


Failure Learning

entroly learn                    # Analyze session for failure patterns
entroly learn --apply            # Write learnings to CLAUDE.md / AGENTS.md

Reads the proxy's passive feedback data, identifies patterns where the LLM was confused or gave low-quality responses, and writes actionable corrections to your agent config files.


Quality Presets

entroly proxy --quality speed       # minimal optimization, lowest latency
entroly proxy --quality balanced    # recommended (default)
entroly proxy --quality max         # full pipeline, best results
entroly proxy --quality 0.7         # any float 0.0-1.0

Platform Support

|  | Linux | macOS | Windows |
|---|---|---|---|
| Python 3.10+ | Yes | Yes | Yes |
| Rust wheel | Yes | Yes (Intel + Apple Silicon) | Yes |
| Docker | Optional | Optional | Optional |
| Admin/WSL required | No | No | No |

Production Ready

  • Persistent savings tracking — lifetime savings in ~/.entroly/value_tracker.json, trend charts in dashboard
  • IDE status bar — /confidence endpoint for real-time VS Code widgets
  • Rich headers — X-Entroly-Confidence, X-Entroly-Coverage-Pct, X-Entroly-Cost-Saved-Today
  • Crash recovery — gzipped checkpoints restore in <100ms
  • Large file protection — 500 KB ceiling prevents OOM
  • Binary detection — 40+ file types auto-skipped
  • Fragment feedback — POST /feedback lets your AI rate context quality
  • Explainable — GET /explain shows why each fragment was included/excluded, with resolution labels and drop reasons

Need Help?

entroly doctor    # runs 7 diagnostic checks
entroly --help    # all commands

Email: autobotbugfix@gmail.com — we respond within 24 hours.

Common Issues

macOS "externally-managed-environment":

python3 -m venv ~/.venvs/entroly && source ~/.venvs/entroly/bin/activate && pip install entroly[full]

Windows pip not found:

python -m pip install entroly

Port 9377 in use:

entroly proxy --port 9378

Rust engine not loading: Entroly auto-falls back to Python. For Rust speed: pip install entroly[native]


Environment Variables

| Variable | Default | What it does |
|---|---|---|
| ENTROLY_QUALITY | 0.5 | Quality dial (0.0-1.0 or preset) |
| ENTROLY_PROXY_PORT | 9377 | Proxy port |
| ENTROLY_MAX_FILES | 5000 | Max files to index |
| ENTROLY_RATE_LIMIT | 0 | Requests/min (0 = unlimited) |
| ENTROLY_MCP_TRANSPORT | stdio | MCP transport (stdio/sse) |
| ENTROLY_CONTEXT_REPORT | 1 | Inline context report in LLM prompts (0 to disable) |
| ENTROLY_CACHE_ALIGN | 1 | Provider KV cache prefix stabilization (0 to disable) |

Technical Deep Dive — Architecture & Algorithms

Architecture

Hybrid Rust + Python. All math in Rust via PyO3 (50-100x faster). MCP + orchestration in Python.

+-----------------------------------------------------------+
|  IDE (Cursor / Claude Code / Cline / Copilot)             |
|                                                           |
|  +---- MCP mode ----+    +---- Proxy mode ----+          |
|  | entroly MCP server|    | localhost:9377     |          |
|  | (JSON-RPC stdio)  |    | (HTTP reverse proxy)|         |
|  +--------+----------+    +--------+-----------+          |
|           |                        |                      |
|  +--------v------------------------v-----------+          |
|  |          Entroly Engine (Python)             |          |
|  |  +-------------------------------------+    |          |
|  |  |  entroly-core (Rust via PyO3)       |    |          |
|  |  |  21 modules · 380 KB · 249 tests    |    |          |
|  |  +-------------------------------------+    |          |
|  +---------------------------------------------+          |
+-----------------------------------------------------------+

Rust Core (21 modules)

| Module | What | How |
|---|---|---|
| hierarchical.rs | 3-level codebase compression | Skeleton map + dep-graph + knapsack fragments |
| knapsack.rs | Context selection | KKT dual bisection O(30N) or exact DP |
| knapsack_sds.rs | Information-optimal selection | Submodular diversity + multi-resolution |
| prism.rs | Weight optimizer | Spectral natural gradient on 4x4 covariance |
| entropy.rs | Information density | Shannon entropy + boilerplate detection |
| depgraph.rs | Dependency graph | Auto-link imports, type refs, function calls |
| skeleton.rs | Code skeletons | Preserves signatures, strips bodies (60-80% reduction) |
| dedup.rs | Duplicate detection | 64-bit SimHash, Hamming threshold 3 |
| lsh.rs | Semantic recall | 12-table multi-probe LSH, ~3μs over 100K fragments |
| sast.rs | Security scanning | 55 rules, 8 CWE categories, taint analysis |
| health.rs | Codebase health | Clones, dead symbols, god files, arch violations |
| guardrails.rs | Safety-critical pinning | Criticality levels + task-aware budget multipliers |
| query.rs | Query analysis | Vagueness scoring, keyword extraction, intent |
| query_persona.rs | Query archetypes | RBF kernel + Pitman-Yor + per-archetype weights |
| anomaly.rs | Entropy anomaly detection | MAD-based robust Z-scores |
| semantic_dedup.rs | Semantic dedup | Greedy marginal information gain, (1-1/e) optimal |
| utilization.rs | Response utilization | Trigram + identifier overlap feedback |
| nkbe.rs | Multi-agent budgets | Arrow-Debreu KKT + Nash bargaining + REINFORCE |
| cognitive_bus.rs | Agent event routing | Poisson rate models, Welford spike detection |
| fragment.rs | Core data structure | Content, metadata, scoring, SimHash fingerprint |
| lib.rs | PyO3 bridge | All modules exposed to Python |

Novel Algorithms

  • ECC — 3-level hierarchical compression: L1 skeleton (5%), L2 deps (25%), L3 diversified fragments (70%)
  • IOS — Submodular Diversity + Multi-Resolution Knapsack in one greedy pass, (1-1/e) optimal
  • KKT-REINFORCE — Dual variable from budget constraint as REINFORCE baseline
  • PRISM — Natural gradient via Jacobi eigendecomposition of 4x4 gradient covariance
  • PSM — RBF kernel mean embedding in RKHS for query archetype discovery
  • NKBE — Game-theoretic multi-agent token allocation via Arrow-Debreu equilibrium

References

Shannon (1948), Charikar (2002), Nemhauser-Wolsey-Fisher (1978), Sviridenko (2004), Boyd & Vandenberghe (Convex Optimization), Williams (1992), LLMLingua (EMNLP 2023), RepoFormer (ICML 2024), FILM-7B (NeurIPS 2024), CodeSage (ICLR 2024).


Part of the Ebbiforge Ecosystem

Integrates with hippocampus-sharp-memory for persistent cross-session memory and Ebbiforge for embeddings + RL weight learning. Both optional.


License

MIT


Your AI is blind without context. Fix it in 30 seconds.
pip install entroly[full] && entroly go

