
A testbed for comparing the learning abilities of newborn animals and autonomous artificial agents.



Newborn Embodied Turing Test

Benchmarking Virtual Agents in Controlled-Rearing Conditions


Getting Started | Documentation | Lab Website

The Newborn Embodied Turing Test (NETT) is a cutting-edge toolkit designed to simulate virtual agents in controlled-rearing conditions. This innovative platform enables researchers to create, simulate, and analyze virtual agents, facilitating direct comparisons with real chicks as documented by the Building a Mind Lab. Our comprehensive suite includes all necessary components for the simulation and analysis of embodied models, closely replicating laboratory conditions.

Below is a visual representation of our experimental setup, showcasing the infrastructure for the three primary experiments discussed in this documentation.

Digital Twin

How to Use this Repository

The NETT toolkit comprises three key components:

  1. Virtual Environment: A dynamic environment that serves as the habitat for virtual agents.
  2. Experimental Simulation Programs: Tools to initiate and conduct experiments within the virtual world.
  3. Data Visualization Programs: Utilities for analyzing and visualizing experiment outcomes.

Directory Structure

The directory structure of the code is as follows:

├── docs                          # Documentation and guides
├── examples
│   ├── notebooks                 # Jupyter Notebooks for examples
│   │   └── Getting Started.ipynb  # Introduction and setup notebook
│   └── run                       # Terminal script example
├── src/nett
│   ├── analysis                  # Analysis scripts
│   ├── body                      # Agent body configurations
│   ├── brain                     # Neural network models and learning algorithms
│   ├── environment               # Simulation environments
│   ├── utils                     # Utility functions
│   ├── nett.py                   # Main library script
│   └── __init__.py               # Package initialization
├── tests                         # Unit tests
├── mkdocs.yml                    # MkDocs configuration
├── pyproject.toml                # Project metadata
└── README.md                     # This README file

Getting Started

To begin benchmarking your first embodied agent with NETT, please note the following:

Important: The mlagents==1.0.0 dependency is incompatible with Apple Silicon (M1, M2, etc.) chips. Please use a different machine to run this codebase.
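If you are unsure whether your machine runs on Apple Silicon, a quick check with Python's standard library (a throwaway sketch, not part of the toolkit) is:

import platform

# Apple Silicon Macs report "Darwin" and "arm64"; mlagents==1.0.0
# will not work there, so use an x86_64 machine instead.
print(platform.system(), platform.machine())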

Installation

  1. Virtual Environment Setup (Highly Recommended)

    Create and activate a virtual environment to avoid dependency conflicts.

    conda create -y -n nett_env python=3.10.12
    conda activate nett_env
    

    See here for detailed instructions.

  2. Install Prerequisites

    Install the needed versions of setuptools and pip:

    pip install setuptools==65.5.0 pip==21 wheel==0.38.4
    

    NOTE: This requirement stems from incompatibilities with the subdependency gym==0.21. More information about this issue can be found here.

  3. Toolkit Installation

    Install the toolkit using pip.

    pip install nett-benchmarks
    

    NOTE: Installation outside a virtual environment may fail due to conflicting dependencies. Ensure compatibility, especially with gym==0.21 and numpy<=1.21.2.
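
    To verify the installation, you can query the installed package metadata via the standard library (a quick sanity check, not part of the toolkit API):

    from importlib.metadata import version

    # Prints the installed version of the package, e.g. 0.4.1
    print(version("nett-benchmarks"))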

Running a NETT

  1. Download or Create the Unity Executable

    Obtain a pre-made Unity executable from here. The executable is required to run the virtual environment.

  2. Import NETT Components

    Start by importing the NETT framework components - Brain, Body, and Environment, alongside the main NETT class.

    from nett import Brain, Body, Environment
    from nett import NETT
    
  3. Component Configuration

  • Brain

    Configure the learning aspects, including the policy network (e.g. "CnnPolicy"), learning algorithm (e.g. "PPO"), the reward function, and the encoder.

    brain = Brain(policy="CnnPolicy", algorithm="PPO")
    

    To get a list of all available policies, algorithms, and encoders, run nett.list_policies(), nett.list_algorithms(), and nett.list_encoders() respectively.
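
    For example, assuming these helpers return plain Python lists, a quick way to inspect the available options is:

    import nett

    # Inspect what the toolkit ships with before configuring a Brain
    print(nett.list_policies())
    print(nett.list_algorithms())
    print(nett.list_encoders())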

  • Body

    Set up the agent's physical interface with the environment. It's possible to apply gym.Wrapper objects for data preprocessing.

    body = Body(type="basic", dvs=False, wrappers=None)
    

    Here, we do not pass any wrappers, letting information from the environment reach the brain "as is". Alternative body types (e.g. two-eyed, rag-doll) are planned in future updates.
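
    As a minimal sketch of such preprocessing, assuming wrappers accepts a list of gym.Wrapper subclasses that are applied to the underlying environment (check the documentation for the exact contract), a standard observation wrapper could look like this:

    import gym

    # Hypothetical preprocessing wrapper: scale pixel observations to [0, 1]
    class ScaleObservation(gym.ObservationWrapper):
        def observation(self, obs):
            return obs / 255.0

    # Assumption: Body applies each wrapper class to the environment it wraps
    body = Body(type="basic", dvs=False, wrappers=[ScaleObservation])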

  • Environment

    Create the simulation environment using the path to your Unity executable (see Step 1).

    environment = Environment(config="identityandview", executable_path="path/to/executable.x86_64")
    

    To get a list of all available configurations, run nett.list_configs().
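
    For instance:

    import nett

    # List the named environment configurations available in the toolkit
    print(nett.list_configs())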

  4. Run the Benchmarking

    Integrate all components into a NETT instance to facilitate experiment execution.

    benchmarks = NETT(brain=brain, body=body, environment=environment)
    

    The NETT instance has a .run() method that initiates the benchmarking process. The method accepts parameters such as the number of brains, training/testing episodes, and the output directory.

    job_sheet = benchmarks.run(output_dir="path/to/run/output/directory/", num_brains=5, trains_eps=10, test_eps=5)
    

    The run method is asynchronous and returns a list of jobs that may still be in progress. To display the Unity environments while they run, set the batch_mode parameter to False, as shown below.
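
    For example, to watch the Unity windows during training (a sketch; the other arguments are the same as above):

    # Run with on-screen rendering enabled; slower, but useful for debugging
    job_sheet = benchmarks.run(
        output_dir="path/to/run/output/directory/",
        num_brains=5,
        trains_eps=10,
        test_eps=5,
        batch_mode=False,
    )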

  5. Check the Status

To see the status of the benchmark processes, use the .status() method:

benchmarks.status(job_sheet)
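
Since the run is asynchronous, one simple pattern is to poll the status periodically (a sketch; the format of the status output may vary):

import time

# Print the status summary once a minute; stop manually (Ctrl+C)
# once all jobs report as complete.
while True:
    print(benchmarks.status(job_sheet))
    time.sleep(60)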

Running Standard Analysis

After running the experiments, the pipeline will generate a collection of data files in the specified output directory.

  1. Install R and dependencies

    To reproduce the analyses performed in previous experiments, this toolkit provides a set of analysis scripts. Before running them, you will need R along with the packages tidyverse, argparse, and scales. To install these packages, run the following command in R:

    install.packages(c("tidyverse", "argparse", "scales"))
    

    Alternatively, if you are having difficulty installing R on your system, you can install R and the required packages using conda.

    conda install -y r r-tidyverse r-argparse r-scales
    
  2. Run the Analysis

    To run the analysis, use the analyze method of the NETT class. This method will generate a set of plots and tables based on the data files in the output directory.

    benchmarks.analyze(run_dir="path/to/run/output/directory/", output_dir="path/to/analysis/output/directory/")
    

Documentation

For the full documentation, please see here.

Experiment Configuration

More information about configuring experiments can be found on the following pages.

