
LLM-driven agent-based information diffusion simulation

Project description

LLM Society — LLM-driven Information Diffusion

A Python package to simulate information diffusion with LLM-based agent conversations. It supports metric scoring in [0,1], segments-based personas, interventions, polished visualizations, and a simple CLI.

Links

  • Tutorial (LLM network, segments, interventions, custom graphs, export): docs/TUTORIAL.ipynb

Features

  • Segment-based persona configuration (proportions, flexible trait specs; optional segment names)
  • Random network generation with tie strengths, or use your own NetworkX graph
  • LLM-driven conversations and numeric scoring in [0,1] (metric-based), or simple/complex contagion modes
  • Tie strengths influence edge sampling, talk probability, conversation depth, and can grow/decay over time (even for non-adjacent pairs via all-pairs mode)
  • Optional agent memory to keep recent utterances in-context for longer-term continuity
  • Multi-metric scoring per topic (e.g., credibility, emotion, action intent) with user-defined prompts and joint JSON outputs
  • Interactive dashboards (Plotly/Bokeh) for rapid, shareable analysis
  • Group plots (by traits or by segment), intervention effect plots, centrality plots, animations
  • YAML/JSON config + CLI; exporting history/scores/conversations
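
The simple/complex contagion modes can be pictured with a small stdlib-only toy (an illustrative sketch, not the package's implementation): under simple contagion one exposed neighbor is enough to spread, while complex contagion requires at least k exposed neighbors (cf. the complex_k parameter).

```python
def contagion_step(adj, exposed, mode="simple", k=2):
    """One synchronous round: an unexposed node becomes exposed when at
    least `need` of its neighbors are already exposed (1 simple, k complex)."""
    need = 1 if mode == "simple" else k
    newly = {
        node for node, nbrs in adj.items()
        if node not in exposed and sum(n in exposed for n in nbrs) >= need
    }
    return exposed | newly

# Toy graph: a line 0-1-2-3 with a triangle 3-4-5 hanging off node 3
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4, 5], 4: [3, 5], 5: [3, 4]}
```

On this graph a single seed spreads hop by hop under simple contagion but stalls under complex contagion with k=2, which is exactly the distinction the two modes model.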

Installation

  1. Python 3.10+
  2. Install dependencies:

     pip install -r requirements.txt

  3. Provide an OpenAI key (LLM mode):

     export OPENAI_API_KEY=<YOUR_OPENAI_API_KEY>
     # or use a file (first line)
     echo "<YOUR_OPENAI_API_KEY>" > api-key.txt
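
If the key lives in api-key.txt, a minimal stdlib loader might look like the following (a sketch only; the package's own key handling may differ):

```python
import os
from pathlib import Path

def load_api_key(path="api-key.txt"):
    """Return the OpenAI key from the environment, falling back to the
    first line of a key file, and export it for downstream libraries."""
    key = os.environ.get("OPENAI_API_KEY")
    if not key and Path(path).exists():
        key = Path(path).read_text(encoding="utf-8").splitlines()[0].strip()
        os.environ["OPENAI_API_KEY"] = key
    return key
```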

Quickstart (Notebook)

from llm_society.api import network
from llm_society.viz import set_theme

set_theme()
net = network(
  information="5G towers cause illness.",
  n=20, degree=4, rounds=10,
  talk_prob=0.25, mode="llm", complex_k=2, rng=0
)
net.simulate()             # conversations, score updates, summaries
net.plot(type="final")
net.plot(type="centrality", metric="degree", show_exposure=False)

Plotting

  • final: final node scores heatmap on the graph
  • coverage: coverage (exposed & score>0) over time
  • group: mean score by group (by="traits" with attr in segments' traits; or by="segment")
  • centrality: centrality vs final score; optionally add exposure panel via show_exposure=True
  • intervention: mean score over time with intervention marker; optionally group by traits
  • animation: animated score evolution

Advanced Capabilities

  • Grouping
    • Traits: net.plot(type="group", by="traits", attr="political")
    • Segment: net.plot(type="group", by="segment", groups=["High-Dem", "High-Rep"])
  • Interventions
    net = network(..., intervention_round=6, intervention_nodes=[0,1,2], intervention_content="Be skeptical...")
    net.simulate()
    net.plot(type="intervention", attr="political", groups=["Democrat","Republican"])
    
  • Custom Graph Personas
    • If you pass graph=G and omit segments, personas are built from node attributes (gender, race, age, religion, political; others go to extra).
  • All-pairs conversations (LLM mode)
    • Set conversation_scope="all_pairs" (or CLI --conversation-scope all_pairs) to allow any node pair to chat.
    • Pairs without edges start at weight 0 but still get a small selection chance; repeated conversations strengthen their tie and add the edge into the network.
  • Multi-metric scoring
    • Define metrics (list of {id,label,prompt}) so each conversation returns structured JSON with coordinated scores for both speakers.
    • The first metric acts as the "primary" score used by legacy APIs/plots; additional metrics are stored in history[*].scores_multi and can be visualized via net.plot(..., metric="emotion").
  • Intervention-only runs
    • Leave information="" and configure intervention_round, intervention_nodes, and intervention_content.
    • Agents chat casually until the intervention round starts, after which conversations probabilistically focus on the treatment content.
  • Agent memory
    • Set memory_turns_per_agent > 0 (e.g., 4–8) to inject that many recent utterances (self + partners) into each agent’s system prompt so they can recall past exchanges.
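
Handling a multi-metric reply can be sketched roughly as below; the README does not show the exact JSON schema the package requests, so the shape assumed here (speaker ids mapping to metric-id/score pairs) is an illustration, not the package's format.

```python
import json

def parse_scores(raw, metric_ids):
    """Parse a model reply into {speaker: {metric_id: score}},
    clamping every score to [0, 1] and dropping unknown metric ids."""
    data = json.loads(raw)
    out = {}
    for speaker, scores in data.items():
        out[speaker] = {
            m: min(1.0, max(0.0, float(scores[m])))
            for m in metric_ids if m in scores
        }
    return out

reply = '{"A": {"credibility": 0.8, "emotion": 1.4}, "B": {"credibility": 0.2}}'
parsed = parse_scores(reply, ["credibility", "emotion"])
```

Clamping to [0,1] mirrors the package's scoring range; out-of-range model outputs are squashed rather than rejected.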

Exporting

net.export(
  history_csv="history.csv",
  scores_csv="scores_by_round.csv",
  conversations_jsonl="conversations.jsonl",
)

# interactive dashboard inside notebooks
fig = net.dashboard(engine="plotly", attr="political", metric="credibility")
fig

# or save to HTML manually
from pathlib import Path  # needed below for writing the file

html = net.dashboard(engine="plotly", attr="political", metric="credibility", to_html=True)
Path("dashboard.html").write_text(html, encoding="utf-8")
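
The exported CSVs can then be analyzed with the standard library alone. The column names assumed here (round, score) are a guess at the export schema; check the actual header of your scores_by_round.csv before relying on them.

```python
import csv
from collections import defaultdict
from statistics import mean

def mean_score_by_round(path):
    """Aggregate per-node scores into a mean score per round."""
    by_round = defaultdict(list)
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            by_round[int(row["round"])].append(float(row["score"]))
    return {r: mean(v) for r, v in sorted(by_round.items())}
```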

CLI

# write an example config
llm-society --write-example-config my-config.yaml
# run with a config
llm-society --config my-config.yaml
# or run fully via flags
llm-society \
  --information "Claim text" --n 20 --degree 4 --rounds 10 \
  --depth 0.6 --depth-max 6 --edge-frac 0.5 --conversation-scope all \
  --seeds 0,1 --talk-prob 0.25 --mode llm --complex-k 2 --rng 0

Configuration (overview)

  • Core: n, degree, rounds, depth (0–1), max_convo_turns, edge_sample_frac
  • Seeds: seed_nodes, seed_score
  • Info/LLM: information (may be blank if an intervention is configured), talk_information_prob, model, metric_name, metric_prompt
  • Metrics: optional metrics=[{id,label,prompt}, ...] to request multi-dimensional scoring (first metric remains the default for legacy APIs)
  • Modes: contagion_mode in {llm, simple, complex}, complex_threshold_k
  • Conversation scope: conversation_scope in {edges, all}, pair_weight_epsilon (minimum sampling weight boost for zero-tie pairs)
  • Memory: memory_turns_per_agent (0 disables memory)
  • Personas: persona_segments (with proportion, traits, optional name)
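
A config file built from the field names above might look like the fragment below. The values are illustrative and the exact schema is not spelled out in this README; generate an authoritative template with `llm-society --write-example-config`.

```yaml
# Hypothetical sketch using the field names listed in the overview above
n: 20
degree: 4
rounds: 10
depth: 0.6
edge_sample_frac: 0.5
seed_nodes: [0, 1]
seed_score: 1.0
information: "Claim text"
talk_information_prob: 0.25
contagion_mode: llm
memory_turns_per_agent: 4
persona_segments:
  - name: High-Dem
    proportion: 0.5
    traits: {political: Democrat}
  - name: High-Rep
    proportion: 0.5
    traits: {political: Republican}
```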

License

MIT

Download files

Download the file for your platform.

Source Distribution

llm_society-0.3.2.tar.gz (40.1 kB)

Uploaded: Source

Built Distribution


llm_society-0.3.2-py3-none-any.whl (41.6 kB)

Uploaded: Python 3

File details

Details for the file llm_society-0.3.2.tar.gz.

File metadata

  • Download URL: llm_society-0.3.2.tar.gz
  • Size: 40.1 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.13.7

File hashes

Hashes for llm_society-0.3.2.tar.gz

  • SHA256: 23a103268c2ab9cbcafcb463d5c521ab01714efff3f30b3c8ee2dbe82f53ed6e
  • MD5: 99c0dc9908d3c11fb7cf7ff416970a95
  • BLAKE2b-256: 0b95f8e266df54ee03e96bf14ab20d6324054df34ef17c599ca2920a3a425f80


Provenance

The following attestation bundles were made for llm_society-0.3.2.tar.gz:

Publisher: publish.yml on TianzhuQin/LLM-ABM-Network

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.

File details

Details for the file llm_society-0.3.2-py3-none-any.whl.

File metadata

  • Download URL: llm_society-0.3.2-py3-none-any.whl
  • Size: 41.6 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.13.7

File hashes

Hashes for llm_society-0.3.2-py3-none-any.whl

  • SHA256: 7d6f1f99bb539be9272453d0c594b4cc08948f71ad73668d054da65f3d7cf735
  • MD5: 2f5483b785309303dcfe63b117f70678
  • BLAKE2b-256: 7c1fd2be68806a0a9038d9d8b6059d10836264b39d24faaaae7b1449a70d95f3


Provenance

The following attestation bundles were made for llm_society-0.3.2-py3-none-any.whl:

Publisher: publish.yml on TianzhuQin/LLM-ABM-Network

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.
