A pythonic Very Small Size Soccer (VSSS) simulation environment for reinforcement learning research.


pSim: Very Small Size Soccer Simulation Environment

Comprehensive simulation platform for autonomous robotic soccer development


Welcome to pSim

pSim is a comprehensive, pythonic simulation environment specifically designed for the Very Small Size Soccer (VSSS) competition. It provides researchers, students, and developers with powerful tools to develop, test, and validate autonomous robotic soccer strategies and algorithms.

Whether you're working on traditional control algorithms, reinforcement learning, or multi-agent coordination, pSim offers the flexibility and performance you need.


Quick Start

Installation

pSim uses uv for fast, reliable package management. See the installation guide for complete setup instructions including uv installation.

# Quick install with uv
uv pip install pSim

# Or with optional dependencies (quote the extras so the shell doesn't glob the brackets)
uv pip install "pSim[gym]"   # For Gymnasium support
uv pip install "pSim[zoo]"   # For PettingZoo support
uv pip install "pSim[all]"   # For all optional dependencies

First Simulation

from pSim import SimpleVSSSEnv
import numpy as np

# Create your first simulation with custom robot counts
env = SimpleVSSSEnv(
    render_mode="human",
    scenario="formation",
    num_agent_robots=2,      # Number of controllable robots
    num_adversary_robots=3   # Number of opponent robots
)
obs, info = env.reset()

# Run simulation steps with random actions
for step in range(1000):
    # Generate random actions for all agent robots [v, w] (velocity, angular velocity)
    action = np.random.uniform(-1, 1, (env.num_agent_robots, 2))
    obs, reward, terminated, truncated, info = env.step(action)
    if terminated or truncated:
        obs, info = env.reset()

env.close()
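The actions above are [v, w] pairs in [-1, 1]. To make that convention concrete, here is a minimal sketch of standard differential-drive kinematics that converts such a command into left/right wheel speeds. This is illustrative only: the `vw_to_wheel_speeds` helper and the axle length are made up for this example, not taken from pSim's internals.

```python
import numpy as np

def vw_to_wheel_speeds(action, axle_length=0.075):
    """Convert a [v, w] command (linear, angular velocity) to
    (left, right) wheel speeds using differential-drive kinematics.

    Hypothetical helper for illustration; the axle length is a
    placeholder value, not a parameter read from pSim.
    """
    v, w = action
    left = v - w * axle_length / 2.0
    right = v + w * axle_length / 2.0
    return left, right

# A pure-forward command drives both wheels at the same speed:
left, right = vw_to_wheel_speeds(np.array([1.0, 0.0]))
```

A pure-rotation command ([0, w]) instead produces equal and opposite wheel speeds, which is why normalizing both components to [-1, 1] is convenient for random or learned policies.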

Environment Types

pSim provides three main environment types, each designed for different use cases:

SimpleVSSSEnv - Traditional Control

Perfect for traditional control algorithms, manual testing, and educational purposes:

from pSim import SimpleVSSSEnv
import numpy as np

env = SimpleVSSSEnv(
    render_mode="human",
    scenario="formation",
    num_agent_robots=3,
    num_adversary_robots=3
)

obs, info = env.reset()
for step in range(1000):
    # Generate random actions [v, w] for each controllable robot
    action = np.random.uniform(-1, 1, (env.num_agent_robots, 2))
    obs, reward, terminated, truncated, info = env.step(action)
    if terminated or truncated:
        obs, info = env.reset()

env.close()

Manual Control with Joystick

from pSim import SimpleVSSSEnv, HMI

hmi = HMI()

env = SimpleVSSSEnv(
    render_mode="human",
    scenario="formation",
    num_agent_robots=3,
    num_adversary_robots=3
)

obs, info = env.reset()
while hmi.active:
    actions, reset, exit_requested = hmi()

    if exit_requested:
        break
    if reset:
        obs, info = env.reset()
        continue

    obs, reward, terminated, truncated, info = env.step(actions)

    if terminated or truncated:
        obs, info = env.reset()

hmi.quit()
env.close()

VSSSGymEnv - Reinforcement Learning

Full Gymnasium compatibility for single-agent reinforcement learning:

from pSim import VSSSGymEnv
from gymnasium.wrappers import FlattenObservation

env = FlattenObservation(VSSSGymEnv(
    render_mode="human",
    scenario="formation",
    num_agent_robots=3,
    num_adversary_robots=3
))

obs, info = env.reset()
for step in range(1000):
    action = env.action_space.sample()
    obs, reward, terminated, truncated, info = env.step(action)
    if terminated or truncated:
        obs, info = env.reset()

env.close()

VSSSPettingZooEnv - Multi-Agent Learning

PettingZoo interface for multi-agent reinforcement learning:

from pSim import VSSSPettingZooEnv

env = VSSSPettingZooEnv(
    render_mode="human",
    scenario="formation",
    num_agent_robots=3,
    num_adversary_robots=3
)

obs, info = env.reset()
for step in range(1000):
    actions = {agent: env.action_space(agent).sample() for agent in env.agents}
    obs, rewards, terminations, truncations, infos = env.step(actions)
    if any(terminations.values()) or any(truncations.values()):
        obs, info = env.reset()

env.close()

Key Features

Flexible Robot Control

Easy-to-use JSON-based configuration for scenarios, robot behaviors, and simulation parameters. Mix controllable agents with automatic behaviors:

{
  "scenarios": {
    "formation": {
      "agent_robots": {
        "movement_types": ["action", "ou", "no_move"]
      },
      "adversary_robots": {
        "movement_types": ["ou", "ou", "ou"]
      }
    }
  }
}
  • "action": Controllable by your agent/algorithm
  • "ou": Ornstein-Uhlenbeck automatic movement
  • "no_move": Stationary robots
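
To give a feel for what "ou" movement produces, here is a minimal sketch of an Ornstein-Uhlenbeck process generating smoothly varying [v, w] commands. The class name and the theta/sigma/dt values are illustrative assumptions, not pSim's internal implementation or parameters.

```python
import numpy as np

class OUActionNoise:
    """Minimal Ornstein-Uhlenbeck process for [v, w] commands.

    A generic sketch of the correlated random motion the "ou"
    movement type suggests; theta, sigma, and dt are made-up
    illustrative values, not pSim's internal parameters.
    """

    def __init__(self, size=2, theta=0.15, sigma=0.2, dt=0.05, seed=None):
        self.theta, self.sigma, self.dt = theta, sigma, dt
        self.rng = np.random.default_rng(seed)
        self.state = np.zeros(size)

    def __call__(self):
        # Mean-reverting drift toward zero plus Gaussian diffusion,
        # so successive actions are correlated rather than i.i.d.
        drift = -self.theta * self.state * self.dt
        diffusion = self.sigma * np.sqrt(self.dt) * self.rng.standard_normal(self.state.shape)
        self.state = self.state + drift + diffusion
        return np.clip(self.state, -1.0, 1.0)

ou = OUActionNoise(seed=0)
action = ou()  # a smoothly varying [v, w] command clipped to [-1, 1]
```

Unlike uniform random sampling, consecutive OU samples change gradually, which makes automatically controlled robots move along plausible continuous paths instead of jittering.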

Realistic Physics

Box2D-powered physics simulation with customizable parameters for accurate robot and ball dynamics.

Human-Machine Interface

  • Keyboard Controls: Full keyboard input with intuitive mappings
  • Joystick Support: Universal controller compatibility
  • Robot Switching: Dynamic selection between controllable robots
  • Team Management: Control different teams independently
  • Ball Control Mode: Direct ball manipulation

Use Cases

Traditional Control

  • PID controllers, MPC, and other traditional methods
  • Path planning and trajectory optimization
  • Formation control and cooperative behaviors

Reinforcement Learning

  • Compatible with Stable-Baselines3, Ray RLlib, and other RL libraries
  • Customizable reward functions and observation spaces
  • Curriculum learning through scenario configuration
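
One way to realize curriculum learning through scenario configuration is a plain schedule that maps training progress to environment settings. The stage thresholds and the choice of varying the opponent count below are made up for illustration; any constructor argument could be scheduled this way.

```python
def curriculum_stage(progress):
    """Map training progress in [0, 1] to environment settings.

    A plain-Python sketch of curriculum learning via scenario
    configuration; the thresholds and settings are hypothetical,
    chosen only to illustrate the idea.
    """
    if progress < 0.3:
        # Early training: no opponents, learn basic ball interaction.
        return {"scenario": "formation", "num_adversary_robots": 0}
    if progress < 0.7:
        # Mid training: a single opponent.
        return {"scenario": "formation", "num_adversary_robots": 1}
    # Late training: the full adversary team.
    return {"scenario": "formation", "num_adversary_robots": 3}

# Rebuild the environment with harder settings as training advances, e.g.:
# env = SimpleVSSSEnv(render_mode=None, num_agent_robots=3, **curriculum_stage(0.5))
```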

Research & Education

  • Multi-agent coordination studies
  • Emergent behavior analysis
  • Algorithm benchmarking and comparison

Competition Preparation

  • Strategy development and testing
  • Opponent modeling and adaptation
  • Performance analysis and optimization

Documentation Structure

  • Getting Started
  • Examples
  • API Reference

Contributing

We welcome contributions! Please see our Contributing Guide for details.


Ready to start developing robotic soccer strategies?

