Prophet Arena Client
Prophet Arena benchmark runner and the AI Prophet CLI entry point.
Installation
python -m pip install ai-prophet
For local development from this repository:
python -m pip install -e packages/core
python -m pip install -e "packages/cli[dev]"
Quick Start
# Set your LLM API keys
export ANTHROPIC_API_KEY="sk-ant-..."
export OPENAI_API_KEY="sk-..."
# Run a benchmark: 2 models, 2 replicates each, 96 ticks
ai-prophet eval run \
-m anthropic:claude-sonnet-4 \
-m openai:gpt-5.2 \
--replicates 2 \
--slug my_experiment \
--max-ticks 96
This creates 4 participants (2 models × 2 reps) and runs 96 fifteen-minute ticks against the Prophet Arena API. Restarting with the same --slug resumes from where it left off.
How It Works
The client is stateless by default: the Core API holds authority over experiment state, tick leasing, execution, and scoring. On each tick, the client runs a four-stage LLM pipeline for each participant:
- REVIEW — Select markets for analysis from the candidate universe
- SEARCH — Execute web searches and summarize findings (optional, requires Brave API key)
- FORECAST — Generate calibrated probability estimates
- ACTION — Convert forecasts into trade intents with position sizing
The Prophet Arena API handles execution, portfolio tracking, and scoring. All LLM calls run locally on your machine — the API only sees trade intents and results, never your prompts.
Optional local components (ClientDatabase, EventStore, trace sink, local reasoning store) are included for debugging and observability, but are not required for normal CLI runs.
CLI Reference
ai-prophet eval run [OPTIONS]
-m, --models TEXT Model spec: provider:model (required, repeatable)
-s, --slug TEXT Experiment slug (stable across restarts)
-r, --replicates INT Replicates per model (default: 1)
-t, --max-ticks INT Target completed ticks (default: 96)
--starting-cash FLOAT Per-participant cash (default: 10000)
--trace-dir PATH Local trace directory
--publish-reasoning Persist per-stage reasoning in plan_json
--dashboard Open local dashboard alongside the run
--api-url URL Core API URL (default: hosted Prophet Arena API)
-v, --verbose Verbose output
ai-prophet health # Check API connectivity
ai-prophet progress <id> # Show experiment progress
ai-prophet dashboard # Open local results dashboard
Legacy alias: ai-prophet run maps to ai-prophet eval run.
Supported LLM Providers
| Provider | Example |
|---|---|
| Anthropic | anthropic:claude-sonnet-4 |
| OpenAI | openai:gpt-5.2 |
| Gemini | gemini:gemini-2.5-flash |
| xAI | xai:grok-3 |
| Any OpenAI-compatible | together:meta-llama/llama-3-70b |
Unknown providers are auto-routed through the OpenAI Chat Completions API. Set {PROVIDER}_BASE_URL to point at your endpoint (e.g. TOGETHER_BASE_URL=https://api.together.xyz/v1).
For unknown providers, set {PROVIDER}_API_KEY as well (e.g. TOGETHER_API_KEY=...).
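As a concrete sketch, wiring up an OpenAI-compatible provider is just two environment variables; the endpoint URL below comes from the example above, and the key value is a placeholder:

```shell
# Route the "together" provider through an OpenAI-compatible endpoint.
export TOGETHER_BASE_URL="https://api.together.xyz/v1"
export TOGETHER_API_KEY="tk-placeholder"   # substitute your real key

# The provider prefix in -m decides which {PROVIDER}_* variables are read, e.g.:
#   ai-prophet eval run -m together:meta-llama/llama-3-70b --slug together_test
echo "together routed via $TOGETHER_BASE_URL"
```

The same pattern applies to any provider name: the prefix before the colon in the model spec is uppercased to pick the `{PROVIDER}_BASE_URL` and `{PROVIDER}_API_KEY` variables.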
Configuration
Default config is bundled with the package. The ai-prophet CLI loads
config.local.yaml from your working directory when present:
pipeline:
  max_markets: 5
  min_size_usd: 1.0
search:
  max_queries_per_market: 1
  max_results_per_query: 3
llm:
  temperature: 0.7
  max_tokens: 4096
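A config.local.yaml need not repeat the whole file. Assuming (as is typical for layered configs) that local values take precedence over the bundled defaults key by key, a file that only tightens position sizing might contain just:

```yaml
# config.local.yaml: override only the keys that differ from the defaults.
pipeline:
  max_markets: 3
  min_size_usd: 5.0
```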
Environment Variables
CLI commands read secrets and deployment overrides from environment variables.
For local development, the CLI also loads a .env file into the process
environment before resolving provider credentials. Library imports do not
implicitly load .env files.
| Variable | Description |
|---|---|
| ANTHROPIC_API_KEY | Anthropic API key |
| OPENAI_API_KEY | OpenAI API key |
| GEMINI_API_KEY | Google Gemini API key (alias: GOOGLE_API_KEY) |
| XAI_API_KEY | xAI (Grok) API key |
| {PROVIDER}_API_KEY | API key for OpenAI-compatible providers (e.g. TOGETHER_API_KEY) |
| BRAVE_API_KEY | Brave Search API key (optional, for web search) |
| PA_SERVER_URL | Override API URL |
| PA_VERBOSE | Enable verbose LLM logging |
| PA_MEMORY_DIR | Local reasoning memory directory (default ~/.pa_memory) |
| PA_MEMORY_MAX_ROWS | Max JSONL memory rows per participant (default 1000) |
| {PROVIDER}_BASE_URL | Base URL for OpenAI-compatible providers (e.g. TOGETHER_BASE_URL) |
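For local development, these can live in a .env file in your working directory, which the CLI loads before resolving provider credentials. A minimal example, with placeholder key values:

```shell
# .env -- loaded by the ai-prophet CLI; substitute your real keys.
ANTHROPIC_API_KEY=sk-ant-placeholder
OPENAI_API_KEY=sk-placeholder
BRAVE_API_KEY=brave-placeholder
PA_VERBOSE=1
PA_MEMORY_MAX_ROWS=1000
```

Note that only CLI commands load .env automatically; library imports do not, as described above.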
Python Integration
The supported public interface for ai-prophet is the ai-prophet CLI.
If you need Python access to the Prophet Arena API, use ai-prophet-core for
the typed SDK and API client. ExperimentRunner remains available for
advanced embedding, but it expects explicit pipeline wiring and is not the
stable integration surface for this package.
License
MIT
File details
Details for the file ai_prophet-0.1.1.tar.gz.
File metadata
- Download URL: ai_prophet-0.1.1.tar.gz
- Upload date:
- Size: 84.1 kB
- Tags: Source
- Uploaded using Trusted Publishing? Yes
- Uploaded via: twine/6.1.0 CPython/3.13.7
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | b6082ee31eba716fe868b4afe83ac59f729be9c25353037a3e60dd081d7f7beb |
| MD5 | 3fdbae7dfa0c0c1815e92517d2a6e6d9 |
| BLAKE2b-256 | c2a1ec8b1e68bfeaf63947b5c8629e90d4c428d070f3449c9881dbeae558d744 |
Provenance
The following attestation bundles were made for ai_prophet-0.1.1.tar.gz:
Publisher: publish-cli.yml on ai-prophet/ai-prophet
- Statement type: https://in-toto.io/Statement/v1
- Predicate type: https://docs.pypi.org/attestations/publish/v1
- Subject name: ai_prophet-0.1.1.tar.gz
- Subject digest: b6082ee31eba716fe868b4afe83ac59f729be9c25353037a3e60dd081d7f7beb
- Sigstore transparency entry: 1061431276
- Permalink: ai-prophet/ai-prophet@7ed934c2292e7964478a138ef76be6d0dd5ccce9
- Branch / Tag: refs/heads/main
- Owner: https://github.com/ai-prophet
- Access: private
- Token Issuer: https://token.actions.githubusercontent.com
- Runner Environment: github-hosted
- Publication workflow: publish-cli.yml@7ed934c2292e7964478a138ef76be6d0dd5ccce9
- Trigger Event: workflow_dispatch
File details
Details for the file ai_prophet-0.1.1-py3-none-any.whl.
File metadata
- Download URL: ai_prophet-0.1.1-py3-none-any.whl
- Upload date:
- Size: 105.3 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? Yes
- Uploaded via: twine/6.1.0 CPython/3.13.7
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | 041bc9356073426d050fb20270f9a2c7b4746e699f1310e05c4f11e4b0d65a79 |
| MD5 | b97adada51f5dbb43bb4e8d1e61fabbb |
| BLAKE2b-256 | 7ac6e92e3f337255323a7125f2815217621e53122a8005ef97ca99953b0b9a46 |
Provenance
The following attestation bundles were made for ai_prophet-0.1.1-py3-none-any.whl:
Publisher: publish-cli.yml on ai-prophet/ai-prophet
- Statement type: https://in-toto.io/Statement/v1
- Predicate type: https://docs.pypi.org/attestations/publish/v1
- Subject name: ai_prophet-0.1.1-py3-none-any.whl
- Subject digest: 041bc9356073426d050fb20270f9a2c7b4746e699f1310e05c4f11e4b0d65a79
- Sigstore transparency entry: 1061431358
- Permalink: ai-prophet/ai-prophet@7ed934c2292e7964478a138ef76be6d0dd5ccce9
- Branch / Tag: refs/heads/main
- Owner: https://github.com/ai-prophet
- Access: private
- Token Issuer: https://token.actions.githubusercontent.com
- Runner Environment: github-hosted
- Publication workflow: publish-cli.yml@7ed934c2292e7964478a138ef76be6d0dd5ccce9
- Trigger Event: workflow_dispatch