
A pythonic Very Small Size Soccer (VSSS) simulation environment for reinforcement learning research.


pSim Documentation


pSim: Very Small Size Soccer Simulation Environment

Comprehensive simulation platform for autonomous robotic soccer development


Welcome to pSim

pSim is a comprehensive, pythonic simulation environment specifically designed for the Very Small Size Soccer (VSSS) competition. It provides researchers, students, and developers with powerful tools to develop, test, and validate autonomous robotic soccer strategies and algorithms.

Whether you're working on traditional control algorithms, reinforcement learning, or multi-agent coordination, pSim offers the flexibility and performance you need.


Quick Start

Installation

pSim uses uv for fast, reliable package management. See the installation guide for complete setup instructions including uv installation.

# Quick install with uv
uv pip install pSim

# Or with optional dependencies
uv pip install "pSim[gym]"   # For Gymnasium support
uv pip install "pSim[zoo]"   # For PettingZoo support
uv pip install "pSim[all]"   # For all optional dependencies

First Simulation

from pSim import SimpleVSSSEnv
import numpy as np

# Create your first simulation with custom robot counts
env = SimpleVSSSEnv(
    render_mode="human",
    scenario="formation",
    num_agent_robots=2,      # Number of controllable robots
    num_adversary_robots=3   # Number of opponent robots
)
obs, info = env.reset()

# Run simulation steps with random actions
for step in range(1000):
    # Generate random actions [v, w] (linear velocity, angular velocity) for each agent robot
    action = np.random.uniform(-1, 1, (env.num_agent_robots, 2))
    obs, reward, terminated, truncated, info = env.step(action)
    if terminated or truncated:
        obs, info = env.reset()

env.close()

Environment Types

pSim provides three main environment types, each designed for different use cases:

SimpleVSSSEnv - Traditional Control

Perfect for traditional control algorithms, manual testing, and educational purposes:

from pSim import SimpleVSSSEnv
import numpy as np

env = SimpleVSSSEnv(
    render_mode="human",
    scenario="formation",
    num_agent_robots=3,
    num_adversary_robots=3
)

obs, info = env.reset()
for step in range(1000):
    # Generate random actions [v, w] for each controllable robot
    action = np.random.uniform(-1, 1, (env.num_agent_robots, 2))
    obs, reward, terminated, truncated, info = env.step(action)
    if terminated or truncated:
        obs, info = env.reset()

Manual Control with Joystick

from pSim import SimpleVSSSEnv, HMI

hmi = HMI()

env = SimpleVSSSEnv(
    render_mode="human",
    scenario="formation",
    num_agent_robots=3,
    num_adversary_robots=3
)

obs, info = env.reset()
while hmi.active:
    actions, reset, exit_requested = hmi()

    if exit_requested:
        break
    if reset:
        obs, info = env.reset()
        continue

    obs, reward, terminated, truncated, info = env.step(actions)

    if terminated or truncated:
        obs, info = env.reset()

hmi.quit()
env.close()

VSSSGymEnv - Reinforcement Learning

Full Gymnasium compatibility for single-agent reinforcement learning:

from pSim import VSSSGymEnv
from gymnasium.wrappers import FlattenObservation

env = FlattenObservation(VSSSGymEnv(
    render_mode="human",
    scenario="formation",
    num_agent_robots=3,
    num_adversary_robots=3
))

obs, info = env.reset()
for step in range(1000):
    action = env.action_space.sample()
    obs, reward, terminated, truncated, info = env.step(action)
    if terminated or truncated:
        obs, info = env.reset()

VSSSPettingZooEnv - Multi-Agent Learning

PettingZoo interface for multi-agent reinforcement learning:

from pSim import VSSSPettingZooEnv

env = VSSSPettingZooEnv(
    render_mode="human",
    scenario="formation",
    num_agent_robots=3,
    num_adversary_robots=3
)

obs, info = env.reset()
for step in range(1000):
    actions = {agent: env.action_space(agent).sample() for agent in env.agents}
    obs, rewards, terminations, truncations, infos = env.step(actions)
    if any(terminations.values()) or any(truncations.values()):
        obs, info = env.reset()

Key Features

Flexible Robot Control

Easy-to-use JSON-based configuration for scenarios, robot behaviors, and simulation parameters. Mix controllable agents with automatic behaviors:

{
  "scenarios": {
    "formation": {
      "agent_robots": {
        "movement_types": ["action", "ou", "no_move"]
      },
      "adversary_robots": {
        "movement_types": ["ou", "ou", "ou"]
      }
    }
  }
}
  • "action": Controllable by your agent/algorithm
  • "ou": Ornstein-Uhlenbeck automatic movement
  • "no_move": Stationary robots
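
Because the configuration is plain JSON, it is easy to inspect programmatically. The sketch below uses only Python's standard json module to pick out which robots your algorithm controls; pSim's own config loader may expose this differently, so treat it as an illustration of the structure rather than the library's API.

```python
import json

# Same shape as the JSON example above; in practice this would be
# read from a scenario configuration file.
config_text = """
{
  "scenarios": {
    "formation": {
      "agent_robots": {"movement_types": ["action", "ou", "no_move"]},
      "adversary_robots": {"movement_types": ["ou", "ou", "ou"]}
    }
  }
}
"""

config = json.loads(config_text)
scenario = config["scenarios"]["formation"]

# Only robots marked "action" are driven by your agent/algorithm;
# "ou" and "no_move" robots move automatically or stay still.
controllable = [
    i for i, m in enumerate(scenario["agent_robots"]["movement_types"])
    if m == "action"
]
print(controllable)  # → [0]
```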

Realistic Physics

Box2D-powered physics simulation with customizable parameters for accurate robot and ball dynamics.

Human-Machine Interface

  • Keyboard Controls: Full keyboard input with intuitive mappings
  • Joystick Support: Universal controller compatibility
  • Robot Switching: Dynamic selection between controllable robots
  • Team Management: Control different teams independently
  • Ball Control Mode: Direct ball manipulation

Use Cases

Traditional Control

  • PID controllers, MPC, and other traditional methods
  • Path planning and trajectory optimization
  • Formation control and cooperative behaviors
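
As a concrete starting point for the first bullet, here is a minimal proportional (P-only) go-to-point controller that maps a pose error to a [v, w] action in the [-1, 1] range the examples above use. The gains, clipping limits, and coordinate conventions are illustrative guesses, not values tuned for pSim.

```python
import math

def go_to_point(robot_x, robot_y, robot_theta, target_x, target_y):
    """Toy proportional controller: pose error -> [v, w] action.

    Gains (0.8, 1.5) and the [-1, 1] clipping are illustrative only.
    """
    dx, dy = target_x - robot_x, target_y - robot_y
    distance = math.hypot(dx, dy)
    heading_error = math.atan2(dy, dx) - robot_theta
    # Wrap heading error to [-pi, pi] so the robot turns the short way
    heading_error = math.atan2(math.sin(heading_error), math.cos(heading_error))
    v = max(-1.0, min(1.0, 0.8 * distance))
    w = max(-1.0, min(1.0, 1.5 * heading_error))
    return [v, w]

print(go_to_point(0.0, 0.0, 0.0, 0.5, 0.0))  # → [0.4, 0.0]
```

A full PID version would add integral and derivative terms on the same errors; the P-only form keeps the action-space mapping easy to see.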

Reinforcement Learning

  • Compatible with Stable-Baselines3, Ray RLlib, and other RL libraries
  • Customizable reward functions and observation spaces
  • Curriculum learning through scenario configuration
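
To make the "customizable reward functions" point concrete, here is one hypothetical shaping term written as a standalone function. The weights, the coordinate convention, and the way pSim actually hooks custom rewards in are all assumptions made for illustration.

```python
import math

def shaped_reward(robot_pos, ball_pos, goal_pos, scored):
    """Hypothetical shaping reward: being near the ball and pushing the
    ball toward the goal are rewarded, with a large bonus for scoring.

    The weights (0.1, 0.5, 10.0) are made-up illustration values, not
    pSim's built-in reward terms.
    """
    robot_to_ball = math.dist(robot_pos, ball_pos)
    ball_to_goal = math.dist(ball_pos, goal_pos)
    reward = -0.1 * robot_to_ball - 0.5 * ball_to_goal
    if scored:
        reward += 10.0
    return reward
```

A function like this could be applied per step on positions taken from the observation, e.g. inside a Gymnasium reward wrapper around VSSSGymEnv.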

Research & Education

  • Multi-agent coordination studies
  • Emergent behavior analysis
  • Algorithm benchmarking and comparison

Competition Preparation

  • Strategy development and testing
  • Opponent modeling and adaptation
  • Performance analysis and optimization

Documentation Structure

  • Getting Started
  • Examples
  • API Reference
  • Getting Help


Contributing

We welcome contributions! Please see our Contributing Guide for details.


Ready to start developing robotic soccer strategies?




Download files

Download the file for your platform.

Source Distribution

  • psim-0.2.1.tar.gz (2.7 MB)

Built Distribution

  • psim-0.2.1-py3-none-any.whl (2.8 MB)

File details

Details for the file psim-0.2.1.tar.gz.

File metadata

  • Download URL: psim-0.2.1.tar.gz
  • Upload date:
  • Size: 2.7 MB
  • Tags: Source
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.13.7

File hashes

Hashes for psim-0.2.1.tar.gz:

  • SHA256: bdd635cfed175487b74f5fc79d8effe9060db2329d1bccf8b410095121898c9c
  • MD5: 77e377b5f875f3cae406f7810186500f
  • BLAKE2b-256: 579fd0dba3c780d0daeef628583ab34937066583f9c173a737491bbe92093cce

File details

Details for the file psim-0.2.1-py3-none-any.whl.

File metadata

  • Download URL: psim-0.2.1-py3-none-any.whl
  • Upload date:
  • Size: 2.8 MB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.13.7

File hashes

Hashes for psim-0.2.1-py3-none-any.whl:

  • SHA256: c0af0cb008cbcc2a50d9034e93e98b6af6a989781aaf6185fb5cbdf9f38d5c30
  • MD5: bb951d197514dd9dd683ad394a56343e
  • BLAKE2b-256: 668871a1f086e029aae93686f3f70f97f6b823ed8b28e889b8fb74f95721d0ed
