
Lifelong Learning Metrics (L2Metrics)


Introduction

Lifelong Learning Metrics (L2Metrics) is a Python library containing foundational code for the L2M Metrics Framework. This framework includes the following:

  • Python libraries for processing performance logs generated by lifelong learning algorithms
  • Support for extending the framework with custom metrics

Metrics

The L2Metrics library supports the following lifelong learning metrics as defined in the Lifelong Learning Metrics for L2M specification:

  • Performance Recovery (PR)
  • Performance Maintenance (PM)
  • Forward Transfer (FT)
  • Backward Transfer (BT)
  • Performance Relative to a Single-Task Expert (RP)
  • Sample Efficiency (SE)

Data Preprocessing

Refer to the Data Processing README for details on the data preprocessing methods in this library.

Requirements

L2Metrics is written in Python 3, and Python 3.6 or newer is highly recommended. The Metrics Framework has been tested on Windows 10 and Ubuntu 18.04/20.04. It should work on other platforms, but this has not been verified.

Installation

1. (Optional) Create a Python virtual environment

python -m venv <path_to_new_venv>

Activate the virtual environment as follows:

Linux:

source <path_to_new_venv>/bin/activate

Windows:

<path_to_new_venv>/Scripts/Activate.ps1

2. Update pip and wheel in your environment

pip install -U pip wheel

3. Clone the L2Logger and L2Metrics repositories

git clone https://github.com/darpa-l2m/l2logger.git
git clone https://github.com/darpa-l2m/l2metrics.git

4. Install the L2Logger and L2Metrics packages

pip install -e <path_to_l2logger>
pip install -e <path_to_l2metrics>
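
To confirm the installation, you can check that both packages import cleanly (a quick sanity check only, assuming the top-level module names match the package names):

python -c "import l2logger, l2metrics"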

Usage

To calculate metrics on the performance of your system, you must first generate log files in accordance with the L2Logger format version 1.1. Please refer to the L2Logger documentation for more details on how to generate compatible logs.

Once these logs are generated, you'll need to store Single-Task Expert (STE) data and pass the log directories as command-line arguments to compute STE-related metrics. Several example files are included to get you started:

  • Example STE and LL log directories:
    • ./examples/ste_task1_1_run1/ (STE)
    • ./examples/ste_task2_1_run1/ (STE)
    • ./examples/ste_task3_1_run1/ (STE)
    • ./examples/ste_task3_1_run2/ (STE)
    • ./examples/multi_task/ (LL)
  • Example settings.json file for configuring command-line arguments
  • Example data_range.json file to show how the user can specify task normalization ranges

Command-Line Execution

Refer to the Command-Line README for more information on how to run L2Metrics from the command line.

Storing Single-Task Expert Data

The following commands, run from the root L2Metrics directory, are examples of how to store STE data from the provided example logs:

python -m l2metrics -l examples/ste_task1_1_run1 -s w
python -m l2metrics -l examples/ste_task2_1_run1 -s w
python -m l2metrics -l examples/ste_task3_1_run1 -s w
python -m l2metrics -l examples/ste_task3_1_run2 -s a

The specified log data will be stored in the $L2DATA directory under the taskinfo subdirectory, where all single-task expert data is pickled and saved. The STE store mode in the first three example commands is w ("write" or "overwrite"): this mode creates a new pickle file for the STE if one does not already exist and overwrites any existing file for the same task in the taskinfo location. The last example command uses the append mode, a, which allows users to store multiple runs of STE data in the same pickle file; the STE averaging method can then be selected in the l2metrics module to control how multiple STE runs are handled. Storing STE data assumes the provided log contains data for only a single task/variant.

Replace the log directory argument with logs for other STE tasks and repeat until all STE data is stored.

Clearing Single-Task Expert Data

To clear all previously stored STE data from the taskinfo subdirectory, run the following command:

python -m l2metrics.clear_ste

Generating Metrics Report

To generate a metrics plot and report with default settings, run the following command from the l2metrics/examples directory:

python -m l2metrics -l ./multi_task -p performance

The default output files are saved in the current working directory and are described below:

  • multi_task_data.feather: The log data DataFrame containing raw and preprocessed data (see the loading sketch after this list).
  • multi_task_metrics.json: The lifetime and task-level metrics of the run.
  • multi_task_settings.json: The settings used to generate the metrics report.
  • multi_task_block.png: The block plot with separate subplots for evaluation blocks.
  • multi_task_perf.png: The performance plot.
  • multi_task_ste.png: The performance relative to STE plot.
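
The Feather file above can be read back for inspection with pandas. This is a minimal sketch, not part of the L2Metrics API, and assumes pandas is installed with a Feather-capable backend such as pyarrow:

import pandas as pd

# Read back the raw/preprocessed log data saved by the metrics report
df = pd.read_feather("multi_task_data.feather")
print(df.head())
print(df.columns.tolist())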

If you wish to generate a metrics report with modified settings (e.g., disabling normalization or aggregating lifetime metrics with the mean operator), you can either modify the arguments on the command line or specify a JSON file containing the desired settings. The settings loaded from the JSON file will take precedence over any arguments specified on the command line.

python -m l2metrics -c settings.json
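
An example settings.json is included with the repository (listed above). Its keys presumably mirror the parameter names shown in the Output Settings File section below; the snippet here is only an illustrative sketch, and the authoritative list of keys and accepted values is in the Command-Line README and the bundled example file:

{
  "log_dir": "multi_task",
  "perf_measure": "performance",
  "aggregation_method": "mean"
}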

Lastly, if you wish to compute metrics on multiple lifetimes at once, set the recursive flag on the command line. When the recursive flag is set, L2Metrics will scan the subdirectories for valid LL logs, calculate metrics, then save a TSV file and a JSON file containing lifetime/task-level metrics for each discovered lifetime.

python -m l2metrics -l <path/to/directory/containing/multiple/runs> -R
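
For example, the parent directory passed with the recursive flag might be laid out as follows (hypothetical names), with each subdirectory holding one lifetime's LL logs:

<path/to/directory/containing/multiple/runs>/
    lifetime_run1/
    lifetime_run2/
    lifetime_run3/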

Note: If you do not wish to provide a fully qualified path to your log directory, you may copy it to your $L2DATA/logs directory. This is the default location for logs generated using the TEF.
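
For example, on Linux the provided LL example logs could be copied into place with a standard shell command (assuming $L2DATA is set in your shell environment); afterwards, the directory name alone (e.g., -l multi_task) should resolve against $L2DATA/logs:

cp -r examples/multi_task $L2DATA/logs/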

Log Data

Refer to the Log Data README for more information on how to interface with the raw and preprocessed log data from the scenario.

Output Settings File

If saving of L2Metrics settings is enabled, the framework will generate a JSON file containing the primary parameters used to calculate L2Metrics:

{
  "log_dir": "multi_task",
  "perf_measure": "performance",
  "variant_mode": "aware",
  "ste_averaging_method": "metrics",
  "aggregation_method": "mean",
  "maintenance_method": "mrlep",
  "transfer_method": "ratio",
  "normalization_method": "task",
  "smoothing_method": "flat",
  "window_length": null,
  "clamp_outliers": false
}

Metrics and Metrics File

The metrics module will print the lifetime metrics to the console when it has successfully completed execution. The following table shows an example of a metrics report output:

Metric                    Value
perf_recovery             -2.0
perf_maintenance_mrlep     3.86
forward_transfer_ratio    12.63
backward_transfer_ratio    1.08
ste_rel_perf               1.11
sample_efficiency          0.91

If saving is enabled, the framework will also generate a JSON file containing lifetime and task-level metrics for the scenario. Please refer to the File Description README for more information on the format of this file.

Block Plot

The resulting block plot from the example run should look like this:

[Figure: block plot of the example run]

The plot separates learning/training experiences from evaluation experiences. The top subplot shows the raw training data with a smoothed black curve overlaid. The subsequent subplots show the evaluation data for each individual task with 25% and 75% quantile ranges.

Performance Plot

The output figure of performance over experiences should look like this:

[Figure: performance plot of the example run]

The white areas represent blocks in which learning is occurring while the gray areas represent evaluation blocks. The dashed lines in the plot show the slopes between each task's evaluation blocks.

Note: The performance values shown in the evaluation blocks are an average over the whole block, resulting in a flat line for each task.

Performance Relative to STE plot

The framework should also produce a performance relative to STE plot shown below, where the task performance curves are generated by concatenating all the training data from the scenario:

[Figure: performance relative to STE plot of the example run]

The black dashed lines indicate the block boundaries where task performance was stitched together.

Custom Metrics

See documentation in the examples folder at examples/README.md for more details on how to implement custom metrics.

Changelog

See CHANGELOG.md for a list of notable changes to the project.

License

See LICENSE for license information.

Acknowledgements

Primary development of Lifelong Learning Metrics (L2Metrics) was funded by the DARPA Lifelong Learning Machines (L2M) Program.

© 2021-2022 The Johns Hopkins University Applied Physics Laboratory LLC

