
AgentSim

License: MIT · Python 3.12 · Tests

Multi-agent simulation framework for studying agent behaviors in grid-based environments. Supports reactive, deliberative (BDI), and reinforcement-learning agents with built-in scenarios, metrics, and ASCII visualization.


Features

  • Three agent architectures — Reactive (condition-action rules), Deliberative (BDI goals/beliefs/plans), and Q-learning agents in a single unified interface.
  • Grid environment — 2D grid with configurable walls, food resources, and multi-agent support; extend via BaseEnvironment.
  • Turnkey scenarios — ForagingScenario (multi-agent food collection) and PursuitScenario (predator-prey), each runnable in one call.
  • Episode orchestration — Simulation drives agent-environment loops across multiple episodes, tracking per-agent rewards and step counts.
  • Metrics and analysis — compute_metrics and compute_trajectory_stats aggregate episode results into structured summaries.
  • ASCII visualization — render_grid_ascii and simulation_report produce plain-text grid snapshots and run reports with no GUI dependency.
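The Q-learning agent listed above uses the standard tabular update with epsilon-greedy exploration. The following standalone sketch shows that textbook rule, not the library's own code; all names here (make_q_table, choose_action, update) are illustrative:

```python
import random
from collections import defaultdict

ACTIONS = ["up", "down", "left", "right"]

def make_q_table():
    # Q[state][action] -> estimated return; states are any hashable observation
    return defaultdict(lambda: {a: 0.0 for a in ACTIONS})

def choose_action(q, state, epsilon=0.1):
    # Epsilon-greedy: explore with probability epsilon, otherwise exploit
    if random.random() < epsilon:
        return random.choice(ACTIONS)
    return max(q[state], key=q[state].get)

def update(q, state, action, reward, next_state, alpha=0.5, gamma=0.9):
    # Tabular Q-learning: move Q(s, a) toward r + gamma * max_a' Q(s', a')
    best_next = max(q[next_state].values())
    q[state][action] += alpha * (reward + gamma * best_next - q[state][action])

q = make_q_table()
update(q, (0, 0), "right", 1.0, (1, 0))
print(q[(0, 0)]["right"])  # 0.5 after one update from all-zero estimates
```

With alpha = 0.5 and zero initial estimates, a single reward of 1.0 moves the estimate halfway, which is why the printed value is 0.5.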

Quick Start

pip install tkm-agentsim

from agentsim import (
    GridEnvironment,
    LearningAgent,
    Simulation,
    SimulationConfig,
    compute_metrics,
    simulation_report,
)

# Build environment and agent
env = GridEnvironment(width=10, height=10, n_food=15, n_walls=8)
agent = LearningAgent("learner", position=(0, 0))
env.add_agent(agent)

# Run 20 episodes
cfg = SimulationConfig(max_steps=200, n_episodes=20)
sim = Simulation(env, [agent], config=cfg)
results = sim.run()

# Analyse and display
metrics = compute_metrics(results, [agent])
print(simulation_report(results, [agent]))
print(sim.summary())

Use a built-in scenario instead:

from agentsim import ForagingScenario, make_forager

scenario = ForagingScenario(grid_size=12, n_food=20)
agents = [make_forager(f"agent_{i}", position=(i, 0)) for i in range(3)]
result = scenario.run(agents)
print(f"Collected {result.total_collected} food in {result.steps} steps "
      f"(efficiency {result.efficiency:.2f})")

Architecture

agentsim/
├── agents/
│   ├── base.py          # BaseAgent, AgentState — abstract interface
│   ├── reactive.py      # ReactiveAgent — condition-action rules
│   ├── deliberative.py  # DeliberativeAgent — BDI (goals, beliefs, plans)
│   └── learning.py      # LearningAgent — Q-learning, epsilon-greedy
├── environment/
│   ├── base.py          # BaseEnvironment — reset/step/render contract
│   └── grid.py          # GridEnvironment — 2D grid, walls, food
├── scenarios/
│   ├── foraging.py      # ForagingScenario, make_forager()
│   └── pursuit.py       # PursuitScenario, make_predator(), make_prey()
├── simulation.py        # Simulation, SimulationConfig, EpisodeResult
├── analysis.py          # compute_metrics, compute_trajectory_stats
└── viz.py               # render_grid_ascii, simulation_report

Data flow per episode:

  1. Simulation.run_episode() calls env.reset() → returns initial observations per agent.
  2. Each step: agent.step(obs) → action → env.step(agent_id, action) → (new_obs, reward, done).
  3. agent.receive_reward(reward) updates internal state; loop continues until done or max_steps.
  4. EpisodeResult collected; compute_metrics() aggregates across episodes.

Extension points:

  • New agent type: subclass BaseAgent, implement perceive() and decide().
  • New environment: subclass BaseEnvironment, implement reset(), step(), render().
  • New scenario: compose agents + environment setup and delegate to Simulation.
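As a sketch of the first extension point, here is the shape a custom agent takes. BaseAgent below is a minimal stand-in for the library's abstract class (only the perceive/decide method names come from the list above), and GreedyFoodAgent and its observation keys are hypothetical:

```python
from abc import ABC, abstractmethod

class BaseAgent(ABC):
    """Stand-in for agentsim's abstract agent interface."""
    def __init__(self, agent_id):
        self.id = agent_id

    @abstractmethod
    def perceive(self, observation):
        """Turn a raw observation into internal beliefs."""

    @abstractmethod
    def decide(self):
        """Pick an action from the current beliefs."""

    def step(self, observation):
        self.perceive(observation)
        return self.decide()

class GreedyFoodAgent(BaseAgent):
    """Hypothetical agent: walk toward the nearest visible food item."""
    def perceive(self, observation):
        self.pos = observation["position"]
        self.food = observation.get("food", [])

    def decide(self):
        if not self.food:
            return "stay"
        # Nearest food by Manhattan distance, then move one axis toward it
        fx, fy = min(self.food,
                     key=lambda f: abs(f[0] - self.pos[0]) + abs(f[1] - self.pos[1]))
        if fx > self.pos[0]:
            return "right"
        if fx < self.pos[0]:
            return "left"
        return "down" if fy > self.pos[1] else "up"

agent = GreedyFoodAgent("g1")
print(agent.step({"position": (2, 2), "food": [(5, 2), (2, 9)]}))  # right
```

Because BaseAgent.step composes perceive and decide, a subclass only supplies those two methods and inherits the rest of the loop machinery.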

Development

git clone https://github.com/techknowmad/agent-sim.git
cd agent-sim
pip install -e ".[dev]"
pytest -v
ruff check .

Contributing

See CONTRIBUTING.md for branch, test, and PR conventions.


License

MIT — see LICENSE.


Built by TechKnowMad Labs
