

Entropy


Predictive intelligence through agent-based population simulation. Create synthetic populations grounded in real-world data, simulate how they respond to events, and watch opinions emerge through social networks.

Not a survey. Not a poll. A simulation of collective human behavior.

What It Does

You describe a population and a scenario. Entropy builds statistically grounded synthetic agents, connects them in a social network, and has each one reason individually about the event using an LLM. Opinions form, spread through the network, and evolve — producing distributional predictions you can segment and analyze.

entropy spec → entropy extend → entropy sample → entropy network → entropy persona → entropy scenario → entropy simulate
                                                                                                  │              │
                                                                                           entropy estimate    entropy results

Install

pip install entropy-predict

Or from source:

git clone https://github.com/exaforge/entropy.git
cd entropy
pip install -e ".[dev]"

Setup

# API keys (in .env or exported)
export OPENAI_API_KEY=sk-...
export ANTHROPIC_API_KEY=sk-ant-...

# Or for Azure OpenAI:
# export AZURE_OPENAI_API_KEY=...
# export AZURE_OPENAI_ENDPOINT=https://your-resource.openai.azure.com
# export AZURE_OPENAI_DEPLOYMENT=your-deployment-name

# Configure providers (openai, claude, or azure_openai)
entropy config set pipeline.provider claude      # Claude for population/scenario building
entropy config set simulation.provider openai    # OpenAI for agent reasoning
entropy config show

Quick Start

# Build a population
entropy spec "500 Austin TX commuters who drive into downtown for work" -o austin/base.yaml
entropy extend austin/base.yaml -s "Response to a $15/day downtown congestion tax" -o austin/population.yaml
entropy sample austin/population.yaml -o austin/agents.json --seed 42
entropy network austin/agents.json -o austin/network.json -p austin/population.yaml --seed 42
entropy persona austin/population.yaml --agents austin/agents.json

# Compile and run a scenario
entropy scenario -p austin/population.yaml -a austin/agents.json -n austin/network.json -o austin/scenario.yaml
entropy simulate austin/scenario.yaml -o austin/results/ --seed 42

# View results
entropy results austin/results/
entropy results austin/results/ --segment income

What Comes Out

Outcomes are defined per scenario — categorical, float, boolean, or open-ended. You choose what to measure.

═══════════════════════════════════════════════════════════
SIMULATION RESULTS: austin_congestion_tax
═══════════════════════════════════════════════════════════

Population: 500 agents | Duration: 47 timesteps | Model: gpt-5
Stopped: exposure_rate > 0.95 and no_state_changes_for > 5

EXPOSURE
────────────────────────────────────────
Final exposure rate: 96.8%
Reasoning calls: 1,847
Average conviction: 0.64 (moderate-to-firm)

OUTCOMES
────────────────────────────────────────
commute_response (categorical):
  drive_and_pay          38%  ███████████████░░░░░
  switch_to_transit      24%  █████████░░░░░░░░░░░
  shift_schedule         19%  ███████░░░░░░░░░░░░░
  telework_more          12%  ████░░░░░░░░░░░░░░░░
  undecided               7%  ██░░░░░░░░░░░░░░░░░░

sentiment (float, -1 to 1):
  mean: -0.18  std: 0.41  min: -0.9  max: 0.7

willingness_to_pay (boolean):
  yes: 42%  no: 58%

protest_likelihood (float, 0 to 1):
  mean: 0.31  std: 0.28

SEGMENT: income
────────────────────────────────────────
< $50k:   drive_and_pay 22% | switch_to_transit 14% | protest 41%
$50-100k: drive_and_pay 40% | switch_to_transit 28% | shift_schedule 21%
> $100k:  drive_and_pay 51% | switch_to_transit 31% | telework_more 14%

Each agent reasoned individually. A low-income commuter with no transit access reacts differently than a tech worker near a rail stop — not because we scripted it, but because their attributes, persona, and social context led them there.

The scenario YAML controls what gets tracked:

outcomes:
  suggested_outcomes:
  - name: commute_response
    type: categorical
    options: [drive_and_pay, switch_to_transit, shift_schedule, telework_more, undecided]
  - name: sentiment
    type: float
    range: [-1.0, 1.0]
  - name: willingness_to_pay
    type: boolean
  - name: protest_likelihood
    type: float
    range: [0.0, 1.0]

How It Works

Population creation — An LLM discovers relevant attributes (demographics, psychographics, scenario-specific traits), then researches their real-world distributions with citations. Agents are sampled from those distributions while respecting all declared dependencies. A social network with small-world properties then connects agents by attribute similarity; the network config (which attributes matter, what edge types exist, who influences whom) is itself generated by the LLM from the population spec.
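The sampling-and-wiring step can be illustrated with a minimal sketch. Everything here is invented for illustration (the distributions, attribute names, and wiring parameters are assumptions, not Entropy's implementation): agents are drawn from marginals with one conditional dependency, then connected by a similarity-biased small-world graph.

```python
import random

# Hypothetical distributions with one dependency:
# income bracket depends on employment sector (illustrative numbers).
SECTORS = {"tech": 0.3, "service": 0.5, "public": 0.2}
INCOME_GIVEN_SECTOR = {
    "tech":    {"<50k": 0.1, "50-100k": 0.4, ">100k": 0.5},
    "service": {"<50k": 0.5, "50-100k": 0.4, ">100k": 0.1},
    "public":  {"<50k": 0.3, "50-100k": 0.5, ">100k": 0.2},
}

def weighted_choice(rng, dist):
    return rng.choices(list(dist), weights=dist.values(), k=1)[0]

def sample_agents(n, seed=42):
    """Sample agents; dependent attributes condition on earlier draws."""
    rng = random.Random(seed)
    agents = []
    for i in range(n):
        sector = weighted_choice(rng, SECTORS)
        income = weighted_choice(rng, INCOME_GIVEN_SECTOR[sector])
        agents.append({"id": i, "sector": sector, "income": income})
    return agents

def similarity(a, b):
    return sum(a[k] == b[k] for k in ("sector", "income"))

def small_world_edges(agents, k=4, rewire_p=0.1, seed=42):
    """Ring lattice with rewiring toward similar agents (Watts-Strogatz flavor)."""
    rng = random.Random(seed)
    n = len(agents)
    edges = set()
    for i in range(n):
        for d in range(1, k // 2 + 1):
            j = (i + d) % n
            if rng.random() < rewire_p:
                # Rewire to the most similar of a few random candidates.
                cands = rng.sample(range(n), 5)
                j = max((c for c in cands if c != i),
                        key=lambda c: similarity(agents[i], agents[c]))
            if i != j:
                edges.add((min(i, j), max(i, j)))
    return sorted(edges)

agents = sample_agents(500)
edges = small_world_edges(agents)
```

Seeding every random source, as the sketch does, mirrors why the CLI accepts `--seed 42`: the same spec and seed should reproduce the same population and network.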

Persona rendering — Each agent gets a first-person narrative built from their attributes. Relative traits are positioned against population statistics ("I'm much more price-sensitive than most people"). The narrative template is generated once per population, then rendered computationally for each agent, so no per-agent LLM call is needed.
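Positioning a trait against population statistics might look like this sketch (the z-score thresholds and phrasings are invented for illustration, not Entropy's actual rendering logic):

```python
from statistics import mean, stdev

def relative_phrase(value, pop_values, trait="price-sensitive"):
    """Turn an agent's numeric trait into a relative first-person phrase
    by bucketing its z-score against the population."""
    mu, sigma = mean(pop_values), stdev(pop_values)
    z = (value - mu) / sigma if sigma else 0.0
    if z > 1.0:
        return f"I'm much more {trait} than most people."
    if z > 0.3:
        return f"I'm somewhat more {trait} than average."
    if z < -1.0:
        return f"I'm far less {trait} than most people."
    if z < -0.3:
        return f"I'm a bit less {trait} than average."
    return f"I'm about as {trait} as the average person."

pop = [0.2, 0.4, 0.5, 0.5, 0.6, 0.8]
phrase = relative_phrase(0.9, pop)
```

Because the population statistics are computed once, rendering each agent's phrase is a cheap lookup rather than a model call.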

Two-pass reasoning — Pass 1: the agent role-plays their reaction in natural language (no enum labels, no anchoring). Pass 2: a cheap model classifies the freeform response into outcome categories. This eliminates the central tendency bias that plagues single-pass structured extraction.
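The two-pass split can be sketched as follows. Both LLM calls are stubbed with plain functions here (a canned reaction for pass 1, keyword matching standing in for the cheap pass-2 classifier); none of this is Entropy's actual code, only the shape of the technique:

```python
OPTIONS = ["drive_and_pay", "switch_to_transit", "shift_schedule",
           "telework_more", "undecided"]

def pass1_roleplay(persona, event):
    """Pass 1: freeform first-person reaction. Stub in place of an LLM call.
    Note the outcome labels never appear in this step, so nothing anchors
    the agent toward a middle category."""
    return (f"{persona} Honestly, a daily fee like {event} would push me "
            "to take the train when I can.")

def pass2_classify(freeform):
    """Pass 2: map the freeform text onto outcome categories. Stub in place
    of a cheap classifier model (keyword matching for illustration)."""
    text = freeform.lower()
    if "train" in text or "bus" in text or "transit" in text:
        return "switch_to_transit"
    if "pay" in text and "fee" not in text:
        return "drive_and_pay"
    return "undecided"

reaction = pass1_roleplay("I'm a downtown commuter on a tight budget.",
                          "a $15 congestion tax")
label = pass2_classify(reaction)
```

The design point is the separation itself: the agent never sees the answer options, and the classifier never role-plays, so structured extraction cannot pull the reasoning toward the scale's midpoint.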

Network propagation — Agents share information through social connections. Edge types, spread modifiers, and decay control how opinions travel. Multi-touch re-reasoning lets agents update their position after hearing from multiple peers.
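The propagation mechanics can be sketched like this (the transmission probability, decay schedule, and touch counting are illustrative assumptions, not Entropy's actual parameters or edge-type logic):

```python
import random

def propagate(edges, seed_agents, n_agents, base_p=0.6, decay=0.8,
              steps=10, seed=42):
    """Spread exposure over social edges; transmission probability decays
    each timestep. Returns the timestep each agent was exposed at, plus
    per-agent touch counts (the hook for multi-touch re-reasoning)."""
    rng = random.Random(seed)
    neighbors = {i: [] for i in range(n_agents)}
    for a, b in edges:
        neighbors[a].append(b)
        neighbors[b].append(a)
    exposed_at = {a: 0 for a in seed_agents}
    touches = {a: 0 for a in range(n_agents)}
    frontier = set(seed_agents)
    p = base_p
    for t in range(1, steps + 1):
        new = set()
        for src in frontier:
            for dst in neighbors[src]:
                touches[dst] += 1            # every contact counts as a touch
                if dst not in exposed_at and rng.random() < p:
                    exposed_at[dst] = t
                    new.add(dst)
        frontier = new
        p *= decay                           # spread weakens over time
        if not frontier:
            break
    return exposed_at, touches

edges = [(0, 1), (1, 2), (2, 3), (3, 4), (0, 4)]
exposed, touches = propagate(edges, seed_agents=[0], n_agents=5)
```

An agent whose touch count crosses a threshold would be sent back through the reasoning step with its peers' positions in context, which is roughly what multi-touch re-reasoning means here.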

Documentation

  • CLI Reference — Every command with arguments, options, and examples
  • Architecture — How the system works under the hood
  • Use Cases — What Entropy can simulate, with examples

Development

pip install -e ".[dev]"
pytest                    # Run tests
ruff check .              # Lint
ruff format .             # Format

License

MIT
