
Async multi-model conversation orchestration (fan_out, daisy_chain, room_all, room_synthesized, council, roleplay, MultiModelHub, ModelInstance)


Emergent — Multi-Model AI Hub

Live App · PyPI · License: Apache 2.0

Chat with GPT, Claude, Gemini, Grok, DeepSeek, and Perplexity in parallel. Compare, synthesize, and chain their responses — all in one interface.


Try it live

emergentapp.interdependentway.org

| Tier | Price | What you get |
| --- | --- | --- |
| Free | $0 | 5 instances, 10 runs/month, basic responses and chat |
| Supporter | $5/month | 15 instances, 30 runs/month, removes "Made with Emergent" badge, Hall of Makers eligibility, early feature voting, private thank-you channel |
| Coffee | $5 one-time | Supporter perks + Hall of Makers eligibility |
| Builder | $25 one-time | Supporter perks + Hall of Makers eligibility (Builder tier) |
| Patron | $50 one-time | Supporter perks + Hall of Makers eligibility (Patron tier) |
| Pro | $19/month or $149/year | Unlimited instances & runs, advanced synthesis, priority support, all supporter perks |
| Team | $49/month (3 seats) | Unlimited instances & runs, shared workspace, admin controls, all supporter perks |
| Team extra seat | $15/month | Adds 1 extra seat to an existing Team plan |

What is this?

Emergent is a multi-model AI hub that lets you send one prompt to multiple LLMs simultaneously and work with their responses together. Instead of copy-pasting between chat tabs, you get a single interface with color-coded, side-by-side or stacked responses from every model you care about.

Beyond simple fan-out, Emergent supports structured interaction patterns — synthesis (feed multiple responses into one model for analysis), shared rooms (models that see and respond to each other), daisy chains (A→B→C sequential pipelines), council mode, and roleplay scenarios. The EDCM engine analyzes conversation transcripts across six cognitive metrics and surfaces actionable insights.


Interaction Patterns

| Pattern | What it does |
| --- | --- |
| Fan-out | Send one prompt to N models in parallel |
| Synthesis | Select responses, send them to a synthesis model for analysis |
| Shared Room (All) | All models see each other's responses and reply in rounds |
| Shared Room (Synthesized) | Responses are synthesized first, then drive the next round |
| Daisy Chain | Model A → B → C sequentially, each seeing the previous response |
| Council | Each model synthesizes all responses, including its own |
| Roleplay | DM-driven roleplay with initiative ordering and reactions |
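In asyncio terms, fan-out is a `gather` over one call per model, and a daisy chain threads each reply into the next prompt. A minimal stdlib-only sketch of those two patterns (an illustration, not aimmh-lib's implementation; `call_fn` is a hypothetical stub backend):

```python
import asyncio

async def call_fn(model_id: str, messages: list[dict]) -> str:
    # stub backend: echo the model id and the latest user message
    return f"{model_id}: {messages[-1]['content']}"

async def fan_out_sketch(model_ids: list[str], messages: list[dict]) -> list[str]:
    # one concurrent call per model, results in the same order as model_ids
    return await asyncio.gather(*(call_fn(m, messages) for m in model_ids))

async def daisy_chain_sketch(model_ids: list[str], prompt: str) -> list[str]:
    # sequential: each model's reply becomes the next model's prompt
    results = []
    for m in model_ids:
        reply = await call_fn(m, [{"role": "user", "content": prompt}])
        results.append(reply)
        prompt = reply
    return results
```

The other patterns vary the same two primitives: shared rooms repeat the fan-out with accumulated context, and council/synthesis add a final pass over collected responses.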

Self-hosting

Backend (FastAPI + MongoDB)

cd backend
pip install -r requirements.txt

# Required env vars
export MONGO_URI="mongodb://localhost:27017"
export JWT_SECRET="your-secret"

# Optional env vars (payments)
export STRIPE_SECRET_KEY="sk_..."        # for Stripe payments
export STRIPE_WEBHOOK_SECRET="whsec_..." # for Stripe webhooks

uvicorn server:app --reload

Frontend (React)

cd frontend
npm install
npm start

The frontend expects the backend at http://localhost:8000 by default.


aimmh-lib — the open-source core

The orchestration patterns are extracted into a standalone, zero-dependency Python library.

pip install aimmh-lib

Functional API

import asyncio
from aimmh_lib import fan_out

async def call_model(model_id: str, messages: list[dict]) -> str:
    # plug in any model backend here
    return f"Response from {model_id}"

async def main():
    results = await fan_out(
        call_fn=call_model,
        model_ids=["gpt-4o", "claude-sonnet-4-6", "gemini-2.0-flash"],
        messages=[{"role": "user", "content": "What is the best programming language?"}],
    )
    for r in results:
        print(f"{r.model_id}: {r.content}")

asyncio.run(main())
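Because `call_fn` is just an async callable, cross-cutting concerns such as timeouts can be layered on before handing it to `fan_out`, so one stalled provider cannot hang the whole batch. A sketch (`with_timeout` and `flaky_call` are hypothetical helpers, not part of aimmh-lib):

```python
import asyncio

async def flaky_call(model_id: str, messages: list[dict]) -> str:
    # hypothetical stub backend: "slow" simulates a stalled provider
    if model_id == "slow":
        await asyncio.sleep(10)
    return f"Response from {model_id}"

def with_timeout(call_fn, seconds: float):
    # wrap any call_fn so a stalled model returns an error string instead of hanging
    async def wrapped(model_id: str, messages: list[dict]) -> str:
        try:
            return await asyncio.wait_for(call_fn(model_id, messages), timeout=seconds)
        except asyncio.TimeoutError:
            return f"[{model_id} timed out after {seconds}s]"
    return wrapped

safe_call = with_timeout(flaky_call, 0.1)
```

Pass `safe_call` wherever a `call_fn` is expected.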

Instantiation API

Bind a backend once and call any pattern as a method:

from aimmh_lib import MultiModelHub

hub = MultiModelHub(call_model)

# inside an async function:
results = await hub.fan_out(["gpt-4o", "claude-sonnet-4-6"], messages)
results = await hub.daisy_chain(["gpt-4o", "claude-sonnet-4-6", "gemini-2.0-flash"], "Explain gravity")
results = await hub.council(["gpt-4o", "claude-sonnet-4-6"], "What is consciousness?")
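The hub is essentially partial application: bind `call_fn` once, then expose each pattern as a method. A stand-in sketch of that shape (not the actual `MultiModelHub` class), showing only fan-out:

```python
import asyncio

class HubSketch:
    """Stand-in for a bound-backend hub: holds call_fn, exposes patterns as methods."""

    def __init__(self, call_fn):
        self.call_fn = call_fn

    async def fan_out(self, model_ids: list[str], messages: list[dict]) -> list[str]:
        # concurrent call per model via the bound backend
        return await asyncio.gather(*(self.call_fn(m, messages) for m in model_ids))

async def echo(model_id: str, messages: list[dict]) -> str:
    # hypothetical stub backend
    return f"{model_id} ok"

hub = HubSketch(echo)
```

This keeps call sites free of backend plumbing: patterns only ever see `self.call_fn`.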

Stateful single-model conversations

from aimmh_lib import ModelInstance

gpt = ModelInstance(call_model, "gpt-4o", system_context="You are a Socratic tutor.")
# inside an async function:
r1 = await gpt.send("What is entropy?")
r2 = await gpt.send("Give me an example.")  # history carries over automatically
print(gpt.history)
gpt.clear()  # reset to a fresh state

All six patterns are available: fan_out, daisy_chain, room_all, room_synthesized, council, roleplay.

PyPI →


Tech Stack

Backend: FastAPI · Motor (async MongoDB) · asyncio · Stripe · Google OAuth · JWT

Frontend: React · Tailwind CSS · Shadcn UI · React Router

Library: Pure Python 3.11+ · zero runtime dependencies


Repository Structure

aimmh_lib/   # pip install aimmh-lib — zero-dep async orchestration library
backend/     # FastAPI service (auth, multi-model chat, payments, EDCM)
frontend/    # React UI

License

aimmh_lib/ is licensed under the Apache License 2.0. The backend and frontend are proprietary — you may self-host for personal use but may not offer them as a competing hosted service.
