
amp — AI Debate Engine

Two AIs argue. You get a better answer.

PyPI Python 3.11+ License: MIT


Why amp?

A single AI has blind spots: it was trained on one dataset, carries one set of biases, and often gives the "safe" answer. amp makes two independent AIs argue about your question, then synthesizes the best answer from both.

Your question
     ↓
Agent A (GPT-5.2) ──────────────── Agent B (Claude Sonnet)
    [independent analysis, parallel]      [independent analysis, parallel]
         ↓                                    ↓
         └─────────── Reconciler ─────────────┘
                           ↓
              Better Answer + CSER score
              (how differently the two AIs see the question)

CSER (Cross-agent Semantic Entropy Ratio): a measure of how diverse the two AIs' opinions are. Higher means more independent thinking.
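amp does not publish the CSER formula, so purely as an intuition pump, here is a toy stand-in that scores answer diversity by token overlap (Jaccard distance). The real metric is described as a semantic-entropy ratio; the function name `toy_cser` is invented for this sketch:

```python
def toy_cser(answer_a: str, answer_b: str) -> float:
    """Return 1.0 for fully disjoint answers, 0.0 for identical ones.

    Hypothetical stand-in for CSER: the real metric works on semantics,
    not raw tokens, but the direction is the same (higher = more diverse).
    """
    tokens_a = set(answer_a.lower().split())
    tokens_b = set(answer_b.lower().split())
    if not tokens_a and not tokens_b:
        return 0.0
    overlap = len(tokens_a & tokens_b)
    union = len(tokens_a | tokens_b)
    return 1.0 - overlap / union  # Jaccard distance as a diversity proxy

print(toy_cser("buy bitcoin now", "buy bitcoin now"))  # → 0.0 (identical)
print(toy_cser("rust is safer", "go compiles faster"))
```

Two agents that parrot each other score near 0; genuinely independent answers score closer to 1, which is why a high CSER is treated as a good sign.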


Install

pip install amp-reasoning
amp init   # set up API keys (takes about a minute)
# or: use it free via OAuth, no API keys needed
amp login  # ChatGPT Plus + Claude Max subscribers → zero API cost

One-line install:

curl -fsSL https://raw.githubusercontent.com/amp-reasoning/amp/main/install.sh | bash

Quick Start

# Use it right away
amp "Should I buy Bitcoin right now?"
amp "React vs Vue in 2026 — which should I pick?"
amp "Should I take on the CTO role at a startup?"

# 4-round deep debate (takes longer, but goes deeper)
amp --mode emergent "Is AGI feasible before 2027?"

# MCP server (integrates with Claude Desktop, Cursor, OpenClaw)
amp serve

How It Works

2-Round (default): independent analysis

Agents A and B analyze independently, without seeing each other's answers. → genuinely independent thinking → high CSER → a better synthesis
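The independent round can be pictured as two concurrent calls that never see each other's output. A minimal asyncio sketch, where `ask_agent` and `independent_round` are illustrative placeholders rather than amp's internal API:

```python
import asyncio

# Placeholder agent call; in amp this would be an OpenAI or Anthropic request.
async def ask_agent(name: str, question: str) -> str:
    await asyncio.sleep(0.01)  # stands in for network latency
    return f"{name}'s take on: {question}"

async def independent_round(question: str) -> list[str]:
    # Both agents start from the raw question only. Neither prompt contains
    # the other's answer, which is what keeps the round genuinely independent.
    return await asyncio.gather(
        ask_agent("Agent A", question),
        ask_agent("Agent B", question),
    )

answers = asyncio.run(independent_round("React vs Vue in 2026?"))
print(answers)
```

Because the two calls run concurrently, total latency is roughly the slower agent's latency rather than the sum, which is the ~50% speedup the `parallel: true` setting refers to.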

4-Round (deep): sequential debate

Round 1: A analyzes
Round 2: B rebuts A
Round 3: A counters B's rebuttal
Round 4: B gives a final rebuttal
        → Reconciler synthesizes

CSER Gate

If the two AIs' answers are too similar (CSER < 0.30), the run is automatically upgraded to 4 rounds, forcing out more diverse perspectives.
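The gate logic described above can be sketched in a few lines; `choose_rounds` and the constant name are invented for this illustration, though the 0.30 threshold comes from the text:

```python
CSER_THRESHOLD = 0.30  # below this, the two answers are "too similar"

def choose_rounds(cser: float, requested_rounds: int = 2) -> int:
    """Upgrade a 2-round run to 4 rounds when the agents agree too much."""
    if requested_rounds == 2 and cser < CSER_THRESHOLD:
        return 4  # force a sequential debate to pull out more diverse views
    return requested_rounds

print(choose_rounds(0.12))  # → 4 (answers too similar, escalate)
print(choose_rounds(0.55))  # → 2 (already diverse, no escalation needed)
```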


Configuration

amp init  # interactive setup

Or edit ~/.amp/config.yaml directly:

agents:
  agent_a:
    provider: openai
    model: gpt-5.4           # latest in the GPT-5 series
    reasoning_effort: medium # none | low | medium | high | xhigh

  agent_b:
    provider: anthropic     # when ANTHROPIC_API_KEY is set (fast)
    # provider: anthropic_oauth  # free via Claude OAuth (slower, runs as a subprocess)
    model: claude-sonnet-4-6

amp:
  parallel: true   # run Agents A and B in parallel (default: true, ~50% faster)
  timeout: 90      # per-agent timeout in seconds
  kg_path: ~/.amp/kg.db  # knowledge-graph storage path

Provider 옵션

provider         speed  cost  requirement
openai           ⚡⚡⚡    paid  OPENAI_API_KEY
openai_oauth     ⚡⚡⚡    free  ChatGPT Plus/Pro + amp login
anthropic        ⚡⚡⚡    paid  ANTHROPIC_API_KEY
anthropic_oauth  ⚡⚡     free  Claude Max/Pro + amp login
local            ⚡⚡     free  Ollama running locally

💡 Completely free combination:

amp login  # with ChatGPT Plus + Claude Max subscriptions, zero API cost
# → openai_oauth (GPT-5.4) × anthropic_oauth (Claude Sonnet) configured automatically

MCP Server

Use it from any MCP-compatible client such as Claude Desktop, Cursor, or OpenClaw:

amp serve  # http://127.0.0.1:3010

Add to your MCP configuration:

{
  "amp": {
    "url": "http://127.0.0.1:3010"
  }
}

Available tools:

  • analyze — 2-round independent analysis (15–30 s)
  • debate — 4-round deep debate (30–60 s)
  • quick_answer — fast single-LLM answer (~3 s)

Docker

# server only
docker run -e OPENAI_API_KEY=... -e ANTHROPIC_API_KEY=... -p 3010:3010 ghcr.io/amp-reasoning/amp

# docker-compose
OPENAI_API_KEY=... ANTHROPIC_API_KEY=... docker-compose up

Python API

from amp.core import emergent
from amp.config import load_config

config = load_config()
result = emergent.run(query="Should I use Rust or Go?", context=[], config=config)

print(result["answer"])
print(f"CSER: {result['cser']:.2f}")  # diversity of the two AIs' perspectives
print(f"Agreements: {result['agreements']}")

Performance (as of 2026-03)

configuration                            avg. latency  cost
GPT-5.2 + Claude Sonnet (API, parallel)  ~18 s         $0.03–0.08
GPT-5.2 + Claude OAuth (parallel)        ~35 s         $0.01–0.03
GPT-5.2 + GPT-5.2 (same vendor)          ~15 s         $0.02–0.05

Parallel execution is ~50% faster than the earlier sequential runs (v0.1.0+)


Why Cross-Vendor?

GPT and Claude were trained by different companies, on different data, with different methods, so they are likely to take different perspectives on the same question. That is the core of amp: cross-vendor synthesis.

Same-vendor pairs (GPT+GPT) also work, but amp then automatically assigns sharply contrasting personas to the two agents to preserve diversity.
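One way to picture the same-vendor fallback: when both agents share a model, give them deliberately opposed system prompts. This is a hypothetical illustration only; the persona texts, `build_system_prompt`, and `PERSONAS` are invented here, not amp's actual prompts:

```python
# Invented personas for illustration; amp's real persona prompts are not published.
PERSONAS = {
    "agent_a": "You are a risk-seeking optimist. Argue for the boldest option.",
    "agent_b": "You are a cautious skeptic. Stress-test every claim.",
}

def build_system_prompt(agent: str, same_vendor: bool) -> str:
    base = "Answer the user's question with your best independent analysis."
    if same_vendor:
        # Identical models would otherwise converge, so force divergence
        # through opposed personas before the independent round starts.
        return PERSONAS[agent] + " " + base
    return base

print(build_system_prompt("agent_b", same_vendor=True))
```

With different vendors the base prompt is enough, since training differences already supply the diversity; with the same vendor the persona split does that job instead.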


Contributing

git clone https://github.com/amp-reasoning/amp
cd amp
pip install -e ".[dev]"
pytest

License

MIT © 2026 amp contributors
