
Gymnasium Retina Task

Python 3.10+ · pre-commit · License: MIT · Code style: black

A Gymnasium-compatible implementation of the Left & Right Retina Problem, a benchmark task for testing the evolution of modular neural networks.


Overview

The Retina Task is based on the work by Risi & Stanley (Artificial Life 2012) on evolving modular neural networks using ES-HyperNEAT. The task tests an agent's ability to independently classify patterns on the left and right sides of a 4x2 artificial retina.

This implementation is:

  • ML-agnostic: Clean separation from evolutionary algorithms
  • Gymnasium-compatible: Follows Farama Foundation standards
  • Well-tested: Comprehensive test suite included
  • Easy to use: Simple API with multiple evaluation modes

Installation

Using uv (recommended):

uv pip install -e .

Using pip:

pip install -e .

Quick Start

import gymnasium as gym
import gymnasium_retinatask

# Create environment
env = gym.make("RetinaTask-v0")

# Reset environment
obs, info = env.reset()

# Take a step
action = env.action_space.sample()  # Random classification
obs, reward, terminated, truncated, info = env.step(action)

env.close()

The Task

The artificial retina consists of 8 pixels arranged in a 4x2 grid:

Left side (4 pixels) | Right side (4 pixels)

Objective

The agent must independently classify whether patterns on each side are "valid":

  • Left output: 1.0 if left pattern is valid, 0.0 if invalid
  • Right output: 1.0 if right pattern is valid, 0.0 if invalid

Pattern Distribution

Out of 256 possible patterns (2^8):

  • 64 patterns have both sides valid (25%)
  • 64 patterns have only left valid (25%)
  • 64 patterns have only right valid (25%)
  • 64 patterns have neither side valid (25%)
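The even four-way split follows from simple counting: the 25% quarters above imply that exactly 8 of the 16 possible 4-pixel patterns on each side are valid. A quick sanity check (the 8-valid-per-side figure is inferred from the stated counts, not read from the package source):

```python
# Each side has 2**4 = 16 possible 4-pixel patterns.
# The 64/64/64/64 split above implies 8 valid patterns per side
# (inferred from the stated percentages, not from the package).
VALID = 8
INVALID = 16 - VALID

both_valid = VALID * VALID       # left valid AND right valid
left_only = VALID * INVALID      # left valid, right invalid
right_only = INVALID * VALID     # right valid, left invalid
neither = INVALID * INVALID      # both invalid

assert both_valid == left_only == right_only == neither == 64
assert both_valid + left_only + right_only + neither == 2 ** 8  # 256 total
```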

Why This Task?

This task is designed to test modularity in neural networks. The left and right classification problems should ideally be solved by separate, independent modules in the network. This makes it an excellent benchmark for:

  • Modular neural network evolution
  • Structure learning algorithms
  • Neuroevolution techniques (NEAT, HyperNEAT, ES-HyperNEAT)

Environment Details

Observation Space

Box(0, 1, (8,), float32) - 8 retina pixels, each either 0 (off) or 1 (on)

Action Space

Box(0, 1, (2,), float32) - Classification outputs:

  • action[0]: Left side classification
  • action[1]: Right side classification

Rewards

By default, uses the fitness function from the original paper:

reward = 1000.0 / (1.0 + error)

where error is the sum of absolute differences between outputs and correct labels.
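The fitness function can be written as a small standalone helper; the function name here is illustrative, not the package's API:

```python
def paper_reward(outputs, labels):
    """Paper-style fitness: 1000 / (1 + summed absolute error).

    `outputs` are the agent's two classifications, `labels` the
    correct left/right targets. Zero error yields the maximum, 1000.
    """
    error = sum(abs(o, ) if False else abs(o - t) for o, t in zip(outputs, labels))
    return 1000.0 / (1.0 + error)

print(paper_reward([1.0, 0.0], [1.0, 0.0]))  # perfect: 1000.0
print(paper_reward([0.0, 1.0], [1.0, 0.0]))  # maximally wrong (error = 2)
```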

Episode Modes

The environment supports three modes:

  1. Single Pattern (default): One random pattern per episode

    env = gym.make("RetinaTask-v0", mode="single_pattern")
    
  2. Batch: Fixed number of random patterns per episode

    env = gym.make("RetinaTask-v0", mode="batch", batch_size=100)
    
  3. Full Evaluation: All 256 patterns in sequence

    env = gym.make("RetinaTask-v0", mode="full_evaluation")
    

Reward Types

  • paper (default): Uses 1000.0 / (1.0 + error) fitness function
  • simple: Returns negative error directly

env = gym.make("RetinaTask-v0", reward_type="simple")
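For comparison, the simple reward type just negates the error. A sketch (again illustrative names, not the package's internals):

```python
def simple_reward(outputs, labels):
    """'simple' reward type: the negative of the summed absolute error."""
    error = sum(abs(o - t) for o, t in zip(outputs, labels))
    return -error

print(simple_reward([1.0, 0.0], [1.0, 0.0]))  # perfect classification: 0 error
print(simple_reward([0.0, 1.0], [1.0, 0.0]))  # -2.0, maximally wrong
```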

Examples

Random Agent

import gymnasium as gym
import gymnasium_retinatask

env = gym.make("RetinaTask-v0", mode="batch", batch_size=100)
obs, info = env.reset()

episode_reward = 0
while True:
    action = env.action_space.sample()
    obs, reward, terminated, truncated, info = env.step(action)
    episode_reward += reward
    if terminated or truncated:
        break

print(f"Total reward: {episode_reward:.2f}")
env.close()

Perfect Agent (Baseline)

import gymnasium as gym
import numpy as np
from gymnasium_retinatask import RetinaPatterns

env = gym.make("RetinaTask-v0", mode="full_evaluation")
obs, info = env.reset()

episode_reward = 0
while True:
    # Get perfect classification
    pattern = info["pattern"]
    left, right = RetinaPatterns.get_labels(pattern)
    action = np.array([left, right], dtype=np.float32)

    obs, reward, terminated, truncated, info = env.step(action)
    episode_reward += reward
    if terminated or truncated:
        break

print("Accuracy: 100%")
print(f"Reward: {episode_reward:.2f}")  # Maximum achievable reward for this mode
env.close()

Running Examples

The package includes several example scripts:

# Analyze pattern distribution
uv run python src/gymnasium_retinatask/examples/pattern_analysis.py

# Test with perfect agent (baseline)
uv run python src/gymnasium_retinatask/examples/perfect_agent.py

# Test with random agent
uv run python src/gymnasium_retinatask/examples/random_agent.py

Advanced Examples

The examples/ directory contains complete, working implementations using different ML frameworks:

NEAT Evolution

Evolve neural networks using NEAT (NeuroEvolution of Augmenting Topologies):

# Install NEAT dependencies
uv sync --group examples-neat

# Run NEAT evolution (50 generations)
uv run python examples/neat_evolution.py

This example demonstrates:

  • Configuring NEAT for the Retina Task
  • Evaluating genomes on all 256 patterns
  • Tracking evolution statistics across generations
  • Testing the best evolved network

Expected results: ~75-90% accuracy after 50 generations.

HyperNEAT Evolution

Evolve networks using HyperNEAT, which exploits the geometric structure of the retina:

# Install NEAT dependencies (same as above)
uv sync --group examples-neat

# Run HyperNEAT evolution (30 generations)
uv run python examples/hyperneat_evolution.py

HyperNEAT features:

  • CPPN (Compositional Pattern Producing Network) generates substrate weights
  • Substrate network matches the 2D retina geometry
  • Encourages modular solutions for left/right independence
  • Analyzes evolved network structure

The geometric substrate layout helps HyperNEAT discover modular solutions more efficiently than standard NEAT.
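The substrate idea can be sketched as assigning each retina pixel a 2D coordinate that a CPPN is queried with to produce connection weights. The coordinates below are illustrative, not the example script's actual layout:

```python
# Illustrative substrate layout for the 4x2 retina: each input neuron
# gets an (x, y) coordinate. A CPPN would be queried with (source, target)
# coordinate pairs to generate each connection weight.
input_coords = [(x, y) for y in (0.0, 1.0) for x in (-1.0, -0.33, 0.33, 1.0)]

# Left-half pixels have x < 0 and right-half pixels have x > 0 -- the
# geometric regularity HyperNEAT can exploit to discover modular,
# left/right-independent solutions.
left = [c for c in input_coords if c[0] < 0]
right = [c for c in input_coords if c[0] > 0]
assert len(input_coords) == 8 and len(left) == len(right) == 4
```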

Development

Running Tests

uv run pytest src/gymnasium_retinatask/tests/ -v

Code Formatting

uv run black src/
uv run isort src/

Reference

This implementation is based on:

Risi, S., & Stanley, K. O. (2012). An enhanced hypercube-based encoding for evolving the placement, density, and connectivity of neurons. Artificial Life, 18(4), 331-363. doi: 10.1162/ARTL_a_00071

License

MIT

Contributing

Contributions are welcome! Please feel free to submit a Pull Request.
