
Condensa — hyper-efficient AI-to-AI communication language. 71.7% token reduction, 95.8% zero-shot interpretability.

Project description

Condensa

A hyper-efficient language designed exclusively for AI-to-AI communication, optimized for minimal token usage while maximizing semantic density.

Requires Python 3.11+ · MIT License

Install

pip install cdn-ai

Three Editions

| Edition | Code | Focus | Best For |
|---|---|---|---|
| còndensa | !:cdn | Max performance (71.7% compression, 95.8% interpretability) | Agent swarms, pipelines, batch ops |
| cóndensa | ~:cdn | Tone + negotiation (soft/firm/tentative intent) | Collaborative AI teams |
| cōndensa | @:cdn | Enterprise security (classification, encryption, audit) | Healthcare, finance, defense |

What It Does

Condensa replaces verbose natural language and bloated JSON in AI-to-AI communication with a dense, position-encoded notation that current LLMs already understand zero-shot.

Before (101 tokens):

AgentC, I need you to perform a thorough code review of the file that AgentB
just wrote at /workspace/src/transaction_processor.py. Please check the code
against the following criteria: code style and PEP 8 compliance, potential bugs
or logic errors, performance issues, security vulnerabilities, and type safety.
Format your review as a structured report with severity and line numbers.

After (10 tokens):

>:@C review $_.path checks:(style,bugs,perf,security,types) /fmt:report
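The reduction for this specific example follows directly from the token counts quoted above:

```python
# Token counts taken from the before/after example above.
before_tokens = 101  # verbose natural-language request
after_tokens = 10    # equivalent Condensa message

reduction_pct = (before_tokens - after_tokens) / before_tokens * 100
print(f"{reduction_pct:.1f}% fewer tokens")  # → 90.1% fewer tokens
```

This single example beats the 71.7% headline figure because it is an unusually verbose request; the benchmark averages are over 149 scenarios.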

Condensa Code — agents share architecture, not implementations:

!:fn DashboardPage /props:(programs:Program[] onSelect:fn(id:n)->void) /renders:(stats-grid,cards)
!:wire dashboard.onSelect -> programs.highlight
!:wire programs.onStart -> workout.load

Three lines replace 800 lines of code passing between agents. 100% zero-shot comprehension across 3 LLMs.
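Part of why assembly becomes mechanical is that these lines are trivially machine-parseable. A minimal sketch of extracting !:wire bindings (this is not the cdn-ai API; the regex and output shape are illustrative assumptions):

```python
import re

# Hypothetical parser: turn "!:wire source -> target" lines into
# (source, target) event bindings an assembler could apply mechanically.
WIRE = re.compile(r"^!:wire\s+(\S+)\s*->\s*(\S+)$")

lines = [
    "!:wire dashboard.onSelect -> programs.highlight",
    "!:wire programs.onStart -> workout.load",
]

bindings = [WIRE.match(line).groups() for line in lines]
print(bindings)
# → [('dashboard.onSelect', 'programs.highlight'), ('programs.onStart', 'workout.load')]
```

Because the notation is this regular, a downstream agent (or plain deterministic code) can wire components together without an LLM call.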


Production Case Study — Stratophic.dev

Real production data from integrating Condensa into a multi-agent code generation platform:

| Metric | Before | After | Change |
|---|---|---|---|
| Cost per generation | $0.135 | $0.019 | -86% |
| Assembly failure rate | ~50% | ~10% | -80% |
| Assembler AI calls | 1 (15K tokens) | 0 (mechanical) | Eliminated |

The biggest win was NOT token compression — it was Condensa Code (!:fn, !:wire). Agents sharing contracts instead of code made assembly mechanical and reliable.

Full case study: STRATOPHIC-CASE-STUDY.md


Results

| Metric | Value |
|---|---|
| Compression vs NL | 66.9% (static), 71.7% (live agent) |
| Compression vs JSON | 71.8% |
| Zero-shot interpretability | 95.8% avg across 5 LLMs |
| Cross-model execution | 93.8% (Claude → Gemini Flash, 8 turns, 100% task completion) |
| Cost savings at 1M conversations | $18,261 (at $3/M tokens) |
| Prompt overhead break-even | 2 messages (ultra) / 5 messages (minimal) |
| Package audit | 47/50 inputs handled correctly, 0 crashes, 43/43 tests pass |
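The break-even row is simple amortization: teaching a model Condensa adds a one-time system-prompt overhead, and each subsequent message saves tokens. A sketch of that arithmetic (the overhead and per-message figures below are illustrative assumptions, not the package's measured values -- see the Prompt Overhead doc for those):

```python
import math

def break_even_messages(prompt_overhead_tokens: int, saved_per_message: int) -> int:
    """Messages needed before per-message savings repay the prompt overhead."""
    return math.ceil(prompt_overhead_tokens / saved_per_message)

# Illustrative numbers only: a compact prompt repays itself faster.
print(break_even_messages(120, 70))  # → 2 (an "ultra"-style compact prompt)
print(break_even_messages(300, 70))  # → 5 (a fuller "minimal" prompt)
```

The practical takeaway is the same as the table's: any conversation longer than a handful of messages comes out ahead.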

Quick Start

pip install cdn-ai

from cdn import encode, decode, encode_with_stats

# Encode natural language to Condensa
encode("Search for SpaceX news, top 5 results")
# → "!:srch 'SpaceX' /n:5"

# Decode Condensa to natural language
decode("!:srch 'SpaceX' /n:5 | sumz /fmt:bullets")
# → 'Pipeline: Search SpaceX. Limit to 5 results. → Summarize. Format as bullets.'

# Get compression stats
stats = encode_with_stats("Filter active users, group by region, sort descending")
print(f"Saved {stats['reduction_pct']}% tokens")

CLI

cdn encode "Search for SpaceX news, top 5"
cdn decode "!:srch 'SpaceX' /n:5 | sumz /fmt:bullets"
cdn stats "Filter active users, group by region"
cdn tokenize "hello world"
cdn version

Full guide: docs/QUICK-START.md


Documentation

| Document | Description |
|---|---|
| Quick Start | Setup, encode/decode, benchmarks, LLM encoder |
| Language Reference | Syntax, quick reference card, 6 worked examples |
| Features | All 11 features (v0.2 + v0.3) + v0.4 tone research |
| Benchmarks | 149 scenarios, live agent data, cost analysis |
| Architecture | Project structure, design, version history, branches |
| Research Summary | Full audit trail of research and testing |
| Interpretability Tests | 5-model zero-shot testing + cross-model execution |
| Transparency | Honest documentation of limitations |
| Multilingual | Cross-lingual analysis |
| Prompt Overhead | Break-even analysis with 4 example cases |

Multilingual

Condensa's structure is 100% language-neutral -- verbs (srch, filt, grp) are code patterns, not English words. Non-English agents benefit MORE because their NL instructions are more expensive under BPE tokenization (Thai: 37.1%, Japanese: 37.5%, Arabic: 31.7%). Cross-lingual agents communicate via Condensa without mutual NL translation -- the protocol is the lingua franca.
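One rough way to see why non-English NL is more expensive: BPE vocabularies trained mostly on English tend to fall back toward byte-level pieces on non-Latin scripts, so UTF-8 bytes per character are a crude proxy for token cost. A stdlib sketch of that effect (this is not a real tokenizer, only a directional illustration):

```python
# Crude proxy: UTF-8 bytes per character. Actual token counts depend on the
# model's BPE vocabulary; this only shows the direction of the effect.
samples = {
    "English": "search results",
    "Japanese": "検索結果",
    "Arabic": "نتائج البحث",
}
for lang, text in samples.items():
    nbytes = len(text.encode("utf-8"))
    print(f"{lang}: {len(text)} chars -> {nbytes} UTF-8 bytes")
```

An ASCII letter is one byte, an Arabic letter two, a CJK character three; a Condensa verb like srch costs the same few tokens regardless of the agent's native language.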


Transparency

Condensa does NOT compress dense human prose (4.4% savings). It does NOT outperform Chinese NL (-5.6%). The regex encoder handles 94% of inputs correctly but is not perfect. The !?: sync command is understood by 80% of models (3/5). Condensa wins where machines talk to machines verbosely -- agent frameworks, JSON exchanges, multi-turn workflows. Full notes: research/transparency_notes.md


Roadmap

| Phase | Status |
|---|---|
| Analysis & Theory | Complete -- token economics, compression survey, 8 design principles |
| Language Specification | Complete -- v0.1 → v0.2 → v0.3 specs, EBNF grammar, primitives |
| Implementation | Complete -- encoder, decoder, 149-scenario benchmarks, validation suite |
| Interpretability Testing | Complete -- 5-model zero-shot (95.8%), v0.3 redesign, cross-model execution (93.8%) |
| PyPI Package | Complete -- pip install cdn-ai v0.3.0b1 |
| Fine-tuning Dataset | Complete -- 522 NL/Condensa pairs for pre-training |
| Tone Research | Complete -- v0.4 experimental (è soft works at 83%, firm/tentative don't) |
| Security Edition | Complete -- classification, encryption, ACL, audit, DLP (on branch) |
| Agent Framework Integration | Planned -- LangChain/CrewAI adapter |
| Production Pilot | Complete -- stratophic.dev (86% cost reduction, mechanical assembly) |

License

MIT

Project details


Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

cdn_ai-0.4.0.tar.gz (47.7 kB)

Uploaded Source

Built Distribution

If you're not sure about the file name format, learn more about wheel file names.

cdn_ai-0.4.0-py3-none-any.whl (41.8 kB)

Uploaded Python 3

File details

Details for the file cdn_ai-0.4.0.tar.gz.

File metadata

  • Download URL: cdn_ai-0.4.0.tar.gz
  • Upload date:
  • Size: 47.7 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.13.7

File hashes

Hashes for cdn_ai-0.4.0.tar.gz

| Algorithm | Hash digest |
|---|---|
| SHA256 | c47e672a0ee849d2a46c81fc30424d960d3b3d60fa42f44cc669c642af998014 |
| MD5 | 1b285a064054bb403f8135ebbf6f0638 |
| BLAKE2b-256 | 8612806a75eed6fde056577fcf23bded4784529a1204150b6382b0be6cc248d5 |


File details

Details for the file cdn_ai-0.4.0-py3-none-any.whl.

File metadata

  • Download URL: cdn_ai-0.4.0-py3-none-any.whl
  • Upload date:
  • Size: 41.8 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.13.7

File hashes

Hashes for cdn_ai-0.4.0-py3-none-any.whl

| Algorithm | Hash digest |
|---|---|
| SHA256 | 36171701e0b6b7fa443a922f5ccc95a4c7be448ce464343a537695cda42dca45 |
| MD5 | 0bc75a59dbb1270b00e1a9c2f4872994 |
| BLAKE2b-256 | d444afc1b0a6cb9918f53d13056849d931caf5e3347e6790432c0db46f47d113 |

