
A testbed for comparing the learning abilities of newborn animals and autonomous artificial agents.



Newborn Embodied Turing Test

Benchmarking Virtual Agents in Controlled-Rearing Conditions


Getting Started | Documentation | Lab Website

The Newborn Embodied Turing Test (NETT) is a cutting-edge toolkit designed to simulate virtual agents in controlled-rearing conditions. This innovative platform enables researchers to create, simulate, and analyze virtual agents, facilitating direct comparisons with real chicks as documented by the Building a Mind Lab. Our comprehensive suite includes all necessary components for the simulation and analysis of embodied models, closely replicating laboratory conditions.

Below is a visual representation of our experimental setup, showcasing the infrastructure for the three primary experiments discussed in this documentation.

[Figure: Digital Twin experimental setup for the three primary experiments]

How to Use this Repository

The NETT toolkit comprises three key components:

  1. Virtual Environment: A dynamic environment that serves as the habitat for virtual agents.
  2. Experimental Simulation Programs: Tools to initiate and conduct experiments within the virtual world.
  3. Data Visualization Programs: Utilities for analyzing and visualizing experiment outcomes.

Directory Structure

The directory structure of the code is as follows:

├── docs                          # Documentation and guides
├── examples
│   ├── notebooks                 # Jupyter Notebooks for examples
│   │   └── Getting Started.ipynb  # Introduction and setup notebook
│   └── run                       # Terminal script example
├── src/nett
│   ├── analysis                  # Analysis scripts
│   ├── body                      # Agent body configurations
│   ├── brain                     # Neural network models and learning algorithms
│   ├── environment               # Simulation environments
│   ├── utils                     # Utility functions
│   ├── nett.py                   # Main library script
│   └── __init__.py               # Package initialization
├── tests                         # Unit tests
├── mkdocs.yml                    # MkDocs configuration
├── pyproject.toml                # Project metadata
└── README.md                     # This README file

Getting Started

To begin benchmarking your first embodied agent with NETT, please be aware:

Important: The mlagents==1.0.0 dependency is incompatible with Apple Silicon (M1, M2, etc.) chips. Please use a different machine to run this codebase.

Installation

  1. Virtual Environment Setup (Highly Recommended)

    Create and activate a virtual environment to avoid dependency conflicts.

    conda create -y -n nett_env python=3.10.12
    conda activate nett_env
    

    See here for detailed instructions.

  2. Install Prerequisites

    Install the required versions of setuptools, pip, and wheel:

    pip install setuptools==65.5.0 pip==21 wheel==0.38.4
    

    NOTE: These pinned versions are required due to incompatibilities with the sub-dependency gym==0.21. More information about this issue can be found here.

  3. Toolkit Installation

    Install the toolkit using pip.

    pip install nett-benchmarks
    

    NOTE: Installation outside a virtual environment may fail due to conflicting dependencies. Ensure compatibility, especially with gym==0.21 and numpy<=1.21.2.
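
    As a quick sanity check after installation, try importing the package from inside the nett_env environment. This is a minimal sketch: the __version__ attribute is an assumption and may not be exposed; a successful import on its own confirms the install.

    # Minimal post-install check. A successful import confirms the installation;
    # __version__ is assumed to exist -- fall back to a message if it does not.
    import nett
    print(getattr(nett, "__version__", "nett imported successfully"))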

Running a NETT

  1. Download or Create the Unity Executable

    Obtain a pre-made Unity executable from here. The executable is required to run the virtual environment.

  2. Import NETT Components

    Start by importing the NETT framework components (Brain, Body, and Environment) along with the main NETT class.

    from nett import Brain, Body, Environment
    from nett import NETT
    
  3. Component Configuration

  • Brain

    Configure the learning aspects, including the policy network (e.g. "CnnPolicy"), learning algorithm (e.g. "PPO"), the reward function, and the encoder.

    brain = Brain(policy="CnnPolicy", algorithm="PPO")
    

    To get a list of all available policies, algorithms, and encoders, run nett.list_policies(), nett.list_algorithms(), and nett.list_encoders() respectively.
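
    For example, a quick way to inspect the available options before configuring the Brain (the helper functions are named above; the values shown in comments are illustrative):

    # Discover the options exposed by the toolkit for configuring a Brain.
    import nett

    print(nett.list_policies())    # e.g. includes "CnnPolicy"
    print(nett.list_algorithms())  # e.g. includes "PPO"
    print(nett.list_encoders())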

  • Body

    Set up the agent's physical interface with the environment. It's possible to apply gym.Wrapper classes for data preprocessing.

    body = Body(type="basic", dvs=False, wrappers=None)
    

    Here, we do not pass any wrappers, letting information from the environment reach the brain "as is". Alternative body types (e.g. two-eyed, rag-doll) are planned in future updates.
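
    If you do want preprocessing, here is a hedged sketch of passing wrappers. It assumes the wrappers argument accepts a list of gym.Wrapper classes applied in order; check the Body documentation for the exact format it expects.

    # Hypothetical example of observation preprocessing via a gym wrapper.
    # GrayScaleObservation ships with gym==0.21; whether Body expects wrapper
    # classes, instances, or factories is an assumption -- consult the docs.
    from gym.wrappers import GrayScaleObservation

    body = Body(type="basic", dvs=False, wrappers=[GrayScaleObservation])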

  • Environment

    Create the simulation environment using the path to your Unity executable (see Step 1).

    environment = Environment(config="identityandview", executable_path="path/to/executable.x86_64")
    

    To get a list of all available configurations, run nett.list_configs().
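
    For example:

    # Inspect the environment configurations shipped with the toolkit.
    import nett
    print(nett.list_configs())  # e.g. includes "identityandview"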

  4. Run the Benchmarking

    Integrate all components into a NETT instance to facilitate experiment execution.

    benchmarks = NETT(brain=brain, body=body, environment=environment)
    

    The NETT instance has a .run() method that initiates the benchmarking process. The method accepts parameters such as the number of brains, training/testing episodes, and the output directory.

    job_sheet = benchmarks.run(output_dir="path/to/run/output/directory/", num_brains=5, trains_eps=10, test_eps=5)
    

    The run method is asynchronous: it returns a job sheet of jobs that may still be running. To display the Unity environments while they run, set the batch_mode parameter to False.
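
    For instance, a minimal sketch of the same run with the Unity windows visible (only batch_mode differs from the example above):

    # Identical to the run above, but with batch_mode=False so the Unity
    # environments are rendered on screen instead of running headless.
    job_sheet = benchmarks.run(output_dir="path/to/run/output/directory/",
                               num_brains=5, trains_eps=10, test_eps=5,
                               batch_mode=False)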

  5. Check Status

To see the status of the benchmark processes, use the .status() method:

benchmarks.status(job_sheet)
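
If you want to watch the jobs progress, a simple polling loop could look like the following. This is illustrative only and not part of the NETT API; the interval is arbitrary, and the loop is stopped manually.

    # Illustrative only: print the job-sheet status once a minute while the
    # benchmark runs. Interrupt with Ctrl+C when the jobs are done.
    import time

    while True:
        print(benchmarks.status(job_sheet))
        time.sleep(60)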

Running Standard Analysis

After running the experiments, the pipeline generates a collection of data files in the specified output directory.

  1. Install R and dependencies

    This toolkit provides a set of analysis scripts that reproduce the analyses performed in previous experiments. Before running them, you will need R and the packages tidyverse, argparse, and scales. To install these packages, run the following command in R:

    install.packages(c("tidyverse", "argparse", "scales"))
    

    Alternatively, if you have difficulty installing R on your system, you can install R and the required packages using conda:

    conda install -y r r-tidyverse r-argparse r-scales
    
  2. Run the Analysis

    To run the analysis, use the analyze method of the NETT class. This method generates a set of plots and tables from the data files in the run directory and writes them to the analysis output directory.

    benchmarks.analyze(run_dir="path/to/run/output/directory/", output_dir="path/to/analysis/output/directory/")
    

Documentation

For the full documentation, please visit here.

Experiment Configuration

More information about the experiment configurations can be found on the following pages.


