
Condensa — hyper-efficient AI-to-AI communication language. 71.7% token reduction, 95.8% zero-shot interpretability.

# Condensa

A hyper-efficient language designed exclusively for AI-to-AI communication, optimized for minimal token usage while maximizing semantic density.

## Three Editions

| Edition | Code | Focus | Best For |
|---|---|---|---|
| condensa (core) | `!:cdn` | Max performance (71.7% compression, 95.8% interpretability) | Agent swarms, pipelines, batch ops |
| condensa (expressive) | `~:cdn` | Tone + negotiation (soft/firm/tentative intent) | Collaborative AI teams |
| condensa (secure) | `@:cdn` | Enterprise security (classification, encryption, audit) | Healthcare, finance, defense |

## What It Does

Condensa replaces verbose natural language and bloated JSON in AI-to-AI communication with a dense, position-encoded notation that current LLMs already understand zero-shot. Multi-turn agent conversations waste 50-80% of their tokens on structural overhead, context re-transmission, and politeness filler. Condensa eliminates all three, achieving a 71.7% token reduction in live agent benchmarks across 149 tested scenarios.

**Before (101 tokens):**

```text
AgentC, I need you to perform a thorough code review of the file that AgentB
just wrote at /workspace/src/transaction_processor.py. Please check the code
against the following criteria: code style and PEP 8 compliance, potential bugs
or logic errors, performance issues, security vulnerabilities, and type safety.
Format your review as a structured report with severity and line numbers.
```

**After (10 tokens):**

```text
>:@C review $_.path checks:(style,bugs,perf,security,types) /fmt:report
```
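The arithmetic behind the compression figures is simple: compare token counts before and after encoding. A minimal sketch (note that the headline 71.7% is an average across the live-agent benchmark; this particular message compresses further):

```python
def token_reduction(before_tokens: int, after_tokens: int) -> float:
    """Percentage of tokens saved by encoding a message in Condensa."""
    return (1 - after_tokens / before_tokens) * 100

# The code-review example above: 101 tokens of natural language
# become 10 tokens of Condensa notation.
print(f"{token_reduction(101, 10):.1f}% reduction")  # 90.1% reduction
```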

## Results

| Metric | Value |
|---|---|
| Compression vs NL | 66.9% (static), 71.7% (live agent) |
| Compression vs JSON | 71.8% |
| Zero-shot interpretability | 95.8% avg across 5 LLMs |
| Cross-model execution | 93.8% (Claude to Gemini Flash, 8 turns) |
| Cost savings at 1M conversations | $18,261 (at GPT-4o pricing) |
| Prompt overhead break-even | 2 messages (ultra) / 5 messages (minimal) |
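The break-even row can be reasoned about directly: teaching a model Condensa costs a one-time prompt overhead, and every encoded message then saves tokens. A hedged sketch of that arithmetic, using hypothetical numbers (the overhead and per-message savings below are illustrative, not the project's measured values):

```python
import math

def break_even_messages(prompt_overhead: int, saved_per_message: int) -> int:
    """Messages needed before cumulative per-message savings cover
    the one-time cost of including a Condensa primer in the prompt."""
    return math.ceil(prompt_overhead / saved_per_message)

# Hypothetical figures: a 150-token ultra-compact primer with ~80 tokens
# saved per message breaks even on the second message.
print(break_even_messages(150, 80))  # 2
```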

## Quick Start

```bash
python3 -m venv .venv && source .venv/bin/activate
pip install tiktoken pyyaml
python -c "from src.encoder import encode; print(encode('Search the web for SpaceX news and return the top 5 results'))"
# Output: !:srch SpaceX /n:5
```

Full setup, benchmarks, and LLM encoder usage: Quick Start Guide
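To make the notation concrete, here is a minimal illustrative parser for simple commands like the one above. This is a sketch of the surface syntax only, not the project's actual decoder, and the field names (`verb`, `args`, `mods`) are invented for illustration:

```python
import re

def parse_condensa(msg: str) -> dict:
    """Split a simple Condensa command into a verb, positional args,
    and /key:value modifiers. Illustrative sketch only."""
    # Strip a leading edition/routing prefix such as "!:" or ">:@C ".
    msg = re.sub(r"^[!~@>]:(@\w+\s+)?", "", msg)
    parts = msg.split()
    verb, rest = parts[0], parts[1:]
    args, mods = [], {}
    for tok in rest:
        m = re.match(r"^/(\w+):(.+)$", tok)
        if m:
            mods[m.group(1)] = m.group(2)
        else:
            args.append(tok)
    return {"verb": verb, "args": args, "mods": mods}

print(parse_condensa("!:srch SpaceX /n:5"))
# {'verb': 'srch', 'args': ['SpaceX'], 'mods': {'n': '5'}}
```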


## Documentation

| Document | Description |
|---|---|
| Quick Start | Setup, encode/decode, run benchmarks, LLM encoder |
| Language Reference | Syntax, quick reference card, 6 worked examples |
| Features | All 11 features (v0.2 + v0.3) + v0.4 tone research |
| Benchmarks | 149 scenarios, live agent data, cost analysis |
| Architecture | Project structure, design, version history, branches |
| Research Summary | Full audit trail |
| Interpretability Tests | 5-model zero-shot testing |
| Transparency | Honest limitations |
| Multilingual | Cross-lingual analysis |
| Prompt Overhead | Break-even analysis |

## Multilingual

Condensa's structure is fully language-neutral: verbs (`srch`, `filt`, `grp`) are code patterns, not English words. Non-English agents benefit even more, because their natural-language instructions are more expensive under BPE tokenization (Thai: 37.1% savings, Japanese: 37.5%, Arabic: 31.7%). A Japanese agent and a Chinese agent can collaborate without understanding each other's language, with Condensa serving as the lingua franca.

Full analysis: research/multilingual_analysis.md


## Transparency

Honest documentation of where Condensa does NOT work well. Dense human prose saves only 4.4% (it is already near the information-theoretic minimum). Chinese NL shows a 5.6% token increase, because Chinese is already extremely dense. The regex encoder has 35% fidelity (prototype only; the LLM encoder achieves 82%). Condensa wins where machines talk to machines verbosely: agent frameworks, JSON exchanges, multi-turn workflows.

Full notes: research/transparency_notes.md


## Roadmap

| Phase | Status | Description |
|---|---|---|
| Phase 1: Analysis & Theory | Complete | Token economics audit, compression survey, 8 design principles |
| Phase 2: Language Specification | Complete | v0.1, v0.2, v0.3 specs, EBNF grammar, primitives registry |
| Phase 3: Implementation | Complete | Encoder, decoder, 149-scenario benchmarks, validation suite |
| Phase 3.5: Interpretability Testing | Complete | 5-model zero-shot test (95.8%), v0.3 token redesign |
| Phase 3.6: Cross-Model Execution | Complete | Claude to Gemini Flash, 100% task execution, 93.8% overall |
| Phase 4: Optimization | In progress | LLM encoder (done), fine-tuning dataset (done), agent framework integration (planned), `pip install condensa` (planned) |

## License

Research project.


## Download files

Source distribution: `cdn_ai-0.3.0b1.tar.gz` (41.8 kB, uploaded via twine/6.2.0 on CPython/3.13.7, not via Trusted Publishing)

| Algorithm | Hash digest |
|---|---|
| SHA256 | 5c2d7137ba15176f8d0815fd18067885cf67210901fa3387421e4181077b2e43 |
| MD5 | cb9f57f7c58b0c3a219263dd4963448c |
| BLAKE2b-256 | 9d686312ae39ed647e8abdf898ca373b9cbe6a211b637b6e1094f98883e77f41 |

Built distribution: `cdn_ai-0.3.0b1-py3-none-any.whl` (40.0 kB, Python 3, uploaded via twine/6.2.0 on CPython/3.13.7, not via Trusted Publishing)

| Algorithm | Hash digest |
|---|---|
| SHA256 | bf254afd89170110988f48fe8e1cb993776aaaf7916ac6e8ac2b95650064c901 |
| MD5 | 4112be77db7ffc8bd54d4aeee07b2de4 |
| BLAKE2b-256 | 6cd7b01f965ebf53d42e32df22d622cbcfc64acaf0f80f9fb59e56c9b63375c7 |
