Ebbiforge

The high-performance multi-agent framework.
100M agents in Rust. Selective LLM reasoning. Enterprise compliance built-in.


Ebbiforge — Multi-agent swarm reacting to live data


Why Ebbiforge?

                     LangChain  CrewAI  Ebbiforge
  Agent scale        ~50        ~10     100,000,000
  Cost per step      $0.01+     $0.01+  $0.00 (Rust)
  Compliance         ✗          ✗       PII + GDPR + Audit
  Belief provenance  ✗          ✗       Full chain
  World model        ✗          ✗       Learned dynamics
  Evolution          ✗          ✗       Darwinian genetics

The key idea: 99.99% of agents run in pure Rust at zero API cost. An LLM is called only when the swarm detects something interesting. This is selective reasoning: the LLM is a consultant, not a worker.
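As a back-of-envelope illustration of why this matters (the prices and escalation rate below are hypothetical assumptions, not measured Ebbiforge costs):

```python
# Illustrative cost comparison: LLM on every agent-step vs. selective reasoning.
# All numbers are assumptions chosen for the arithmetic, not benchmarks.
AGENTS = 50_000
TICKS = 1_000
COST_PER_LLM_CALL = 0.01      # assumed $/call
LLM_CALL_FRACTION = 0.0001    # assumed: 0.01% of agent-steps escalate to the LLM

naive_cost = AGENTS * TICKS * COST_PER_LLM_CALL
selective_cost = AGENTS * TICKS * LLM_CALL_FRACTION * COST_PER_LLM_CALL

print(f"LLM on every step:   ${naive_cost:,.2f}")      # $500,000.00
print(f"Selective reasoning: ${selective_cost:,.2f}")  # $50.00
```

Under these assumptions, selective reasoning spends 1/10,000th of the naive budget; the exact ratio is just the escalation fraction.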


Installation

From PyPI

pip install ebbiforge

From Source (includes Rust engine)

git clone https://github.com/juyterman1000/ebbiforge.git
cd ebbiforge && pip install maturin && maturin develop --release

Requirements: Python 3.9+, Rust 1.75+ (for source builds)


Quick Start

Hello Swarm (5 lines)

from ebbiforge import Agent, Swarm, Task

swarm = Swarm()
swarm.add(Agent(name="Researcher"))
swarm.add(Agent(name="Analyst"))
result = swarm.run(Task("Analyze emerging AI trends", pipeline=True))

# Every claim tracked with source attribution
provenance = swarm.get_belief_provenance(result)
if provenance.has_unverified:
    print("⚠️ Pipeline contains unverified claims!")

50K Agents with Selective Reasoning

import ebbiforge_core as ebbi

swarm = ebbi.TensorSwarm(agent_count=50_000,
    world_config=ebbi.WorldModelConfig(ebbinghaus_decay_rate=0.1, grid_size=(200, 200)))

for tick in range(1000):
    swarm.tick()
    scores = swarm.surprise_scores  # Per-agent surprise (0.0–1.0)
    surprised = sum(1 for s in scores if s > 0.3)
    if surprised / 50_000 > 0.05:  # 5% threshold
        print(f"🧠 Tick {tick}: Anomaly! Call LLM for {surprised} agents")
    # The other 995+ ticks? FREE. No LLM. Pure Rust.

Connect Any Data Source

from ebbiforge.connectors import HTTPPoller, Signal

def parse(data: dict) -> list[Signal]:
    return [Signal(source="my-api", value=data["metric"] / 100.0)]

feed = HTTPPoller(url="https://my-api.com/data", transform=parse)

Enterprise Compliance

import ebbiforge_core as ebbi

compliance = ebbi.ComplianceEngine()
result = compliance.check_action("agent-1", "send_email", "SSN: 123-45-6789")
# → BLOCKED: PII detected
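For intuition, the kind of check shown above can be approximated with a simple pattern match. This is an illustration using Python's re module, not Ebbiforge's actual ComplianceEngine:

```python
import re

# Hypothetical stand-in for a PII gate: block any action whose payload
# matches a US Social Security Number pattern (illustration only).
SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def check_action(agent_id: str, action: str, payload: str) -> str:
    if SSN_PATTERN.search(payload):
        return f"BLOCKED: PII detected in {action} by {agent_id}"
    return "ALLOWED"

print(check_action("agent-1", "send_email", "SSN: 123-45-6789"))
# → BLOCKED: PII detected in send_email by agent-1
```

A production engine would also cover emails, phone numbers, GDPR data-subject rules, rate limits, and an audit trail, as listed under Safety below.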

World Model Training

Ebbiforge includes a learned dynamics model that predicts agent state transitions. The model learns causal structure from logged trajectories — no pre-trained weights needed.

Collect Trajectories & Train

import ebbiforge_core as ebbi

# 1. Create a global trajectory buffer (all agents log into ONE dataset)
buf = ebbi.TrajectoryBuffer(capacity=100_000)

# 2. Run the swarm, sample agent transitions each tick
swarm = ebbi.TensorSwarm(agent_count=1_000_000)
for tick in range(1000):
    swarm.tick()
    # Record (state_t, action_t, state_{t+1}) for sampled agents
    # buf.record(state_vec, action_vec, next_state_vec)

# 3. Train the dynamics model (AdamW, cosine LR, validation split)
predictor = ebbi.AutoregressivePredictor()
stats = predictor.train(
    buf.to_json(),
    epochs=100,
    learning_rate=0.01,
    batch_size=256,
    val_split=0.1
)

# 4. Persist learned weights
predictor.save_weights("world_model.safetensors")
print(f"Final loss: {stats[-1].train_loss:.6f}")

Architecture

state_t + action_t → [Linear + GELU] → [Residual + GELU] → [Linear] → state_{t+1}
                      ↑ AdamW optimizer, cosine LR decay, early stopping

The model learns the Markovian transition function: state_{t+1} = f(state_t, action_t). Neighbor interactions (surprise propagation, flocking, pheromone steering) are already encoded in each agent's state by the Rust engine.
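In spirit, the forward pass above looks like the following NumPy sketch. The dimensions, weight initialization, and exact residual placement are assumptions for illustration; the real model is a Candle MLP trained in Rust:

```python
import numpy as np

def gelu(x):
    # tanh approximation of GELU
    return 0.5 * x * (1.0 + np.tanh(np.sqrt(2.0 / np.pi) * (x + 0.044715 * x**3)))

STATE_DIM, ACTION_DIM, HIDDEN = 8, 4, 32  # assumed sizes
rng = np.random.default_rng(0)
W_in = rng.normal(0, 0.1, (STATE_DIM + ACTION_DIM, HIDDEN))
W_res = rng.normal(0, 0.1, (HIDDEN, HIDDEN))
W_out = rng.normal(0, 0.1, (HIDDEN, STATE_DIM))

def predict_next_state(state, action):
    # state_t + action_t -> [Linear + GELU] -> [Residual + GELU] -> [Linear]
    x = np.concatenate([state, action])
    h = gelu(x @ W_in)
    h = gelu(h + h @ W_res)   # residual block
    return h @ W_out          # predicted state_{t+1}

next_state = predict_next_state(np.zeros(STATE_DIM), np.ones(ACTION_DIM))
print(next_state.shape)  # (8,)
```

Training then minimizes the prediction error against the logged `state_{t+1}` from the trajectory buffer.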


CLI

ebbiforge demo             # Interactive 10K agent demo
ebbiforge benchmark        # Performance scaling 1K → 100K
ebbiforge example hello    # Run a built-in example
ebbiforge version          # System info

Architecture: Two-Tier Intelligence

┌────────────────────────────────────────────────┐
│          Tier 2: Conscious Brain               │
│  Python + LLM (Gemini)    EXPENSIVE, SMART     │
│  ReAct loop + tool use    Called on anomaly    │
├────────────────────────────────────────────────┤
│          Tier 1: Nervous System                │
│  Pure Rust (zero-cost)    FAST, FREE           │
│  Spatial hash + SIRS      100K-100M agents     │
│  Darwinian evolution      ~0.01ms per agent    │
│  Learned world model      AdamW dynamics MLP   │
└────────────────────────────────────────────────┘

Learn more: Architecture Guide | Quick Start


What's Inside

Core Engine (ebbiforge-core/)

  • Agent Orchestration: ReAct loop, handoffs, sequential/parallel/loop workflows
  • Memory: In-process (SharedMemory), Redis/Dragonfly (KV), Qdrant (vector)
  • Swarm: 100M agents via mmap, Fibonacci spatial hash, SIRS epidemiology, pheromones
  • World Model: Latent encoder (fastembed ONNX), autoregressive dynamics (Candle MLP), trajectory buffer
  • Safety: PredictiveSafetyShield, PII redaction, GDPR, rate limiting, policy engine
  • Intelligence: MetaCognition, IntrospectionEngine, CuriosityModule
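For a rough intuition of how a spatial hash keeps neighbor queries cheap at swarm scale, here is a generic grid-hash sketch in Python (Ebbiforge's Fibonacci variant and cell size are different; this is an illustration only):

```python
from collections import defaultdict

CELL = 10.0  # assumed cell size

def cell_of(x: float, y: float) -> tuple[int, int]:
    return (int(x // CELL), int(y // CELL))

def build_index(positions):
    # Bucket agent ids by grid cell so neighbor queries touch only 9 cells
    # instead of scanning all agents.
    grid = defaultdict(list)
    for agent_id, (x, y) in enumerate(positions):
        grid[cell_of(x, y)].append(agent_id)
    return grid

def neighbors(grid, x: float, y: float):
    cx, cy = cell_of(x, y)
    out = []
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            out.extend(grid.get((cx + dx, cy + dy), ()))
    return out

grid = build_index([(1.0, 1.0), (5.0, 5.0), (95.0, 95.0)])
print(neighbors(grid, 2.0, 2.0))  # → [0, 1]: agents 0 and 1 share cell (0, 0)
```

This turns each tick's neighbor lookup into O(1) per agent on average, which is what makes per-agent costs in the microsecond range plausible.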

Python Framework (ebbiforge/)

  • Primitives: Agent, Swarm, Runtime, Task, Memory
  • Connectors: DataSource, HTTPPoller, WebhookReceiver, RSSFeed
  • Outputs: OutputSink, ConsoleOutput, WebhookOutput, JSONFileOutput
  • CLI: demo, benchmark, example, version

Examples (examples/)

  #  Name                 What It Shows
  1  Hello Swarm          Multi-agent pipeline + belief provenance
  2  Evolution            10K agents with Darwinian natural selection
  3  Live Data            Generic HTTPPoller → swarm signals
  4  Selective Reasoning  Two-tier: 50K Rust agents + LLM on anomaly
  5  Compliance           PII detection + GDPR + audit trail

Performance

Verified on standard hardware (details in BENCHMARKS.md):

  Agents      Avg Tick  Throughput
  10,000      ~0.5ms    20M agents/sec
  100,000     ~5ms      20M agents/sec
  10,000,000  ~500ms    20M agents/sec
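The throughput column follows directly from the tick times in the table:

```python
# Throughput = agents / tick time, using the numbers from the table above.
for agents, tick_ms in [(10_000, 0.5), (100_000, 5.0), (10_000_000, 500.0)]:
    agents_per_sec = agents / (tick_ms / 1000.0)
    print(f"{agents:>12,} agents / {tick_ms}ms tick = {agents_per_sec / 1e6:.0f}M agents/sec")
```

Throughput staying flat at 20M agents/sec across three orders of magnitude indicates linear scaling in agent count over this range.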

Contributing

We welcome contributions! Especially around:

  • New DataSource connectors for different domains
  • New OutputSink adapters (Slack, Discord, PagerDuty)
  • SIMD optimizations for the Tier 1 engine
  • Example scripts for specific use cases

See CONTRIBUTING.md for details.


License

MIT License
