
Lifelong Learning Metrics (L2Metrics)

Introduction

Lifelong Learning Metrics (L2Metrics) is a Python library containing foundational code for the L2M Metrics Framework. This framework includes the following:

  • Python libraries for processing performance logs generated by lifelong learning algorithms
  • Support for extending the framework with custom metrics

Metrics

The L2Metrics library supports the following lifelong learning metrics as defined in the Lifelong Learning Metrics for L2M specification:

  • Performance Recovery (PR)
  • Mean Evaluation Performance (MEP)
  • Mean Training Performance (MTP)
  • Performance Maintenance (PM)
  • Forward Transfer (FT)
  • Backward Transfer (BT)
  • Performance Relative to a Single-Task Expert (RP)
  • Sample Efficiency (SE)

Data Preprocessing

Refer to the Data Processing README for details on the data preprocessing methods in this library.

Requirements

L2Metrics is written in Python 3; Python 3.6 or newer is highly recommended. The Metrics Framework has been tested on Windows 10 and Ubuntu 18.04/20.04. It should work on other platforms, but this has not been verified.

Installation

1. (Optional) Create a Python virtual environment

python -m venv <path_to_new_venv>

Activate the virtual environment as follows:

Linux:

source <path_to_new_venv>/bin/activate

Windows:

<path_to_new_venv>/Scripts/Activate.ps1

2. Update pip and wheel in your environment

pip install -U pip wheel

3. Clone the L2Logger and L2Metrics repositories

git clone https://github.com/lifelong-learning-systems/l2logger.git
git clone https://github.com/lifelong-learning-systems/l2metrics.git

4. Install the L2Logger and L2Metrics packages

pip install -e <path_to_l2logger>
pip install -e <path_to_l2metrics>
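
After installation, you can check that the package is importable and inspect the available command-line options (assuming the CLI exposes the standard argparse help flag):

python -m l2metrics -h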

Usage

To calculate metrics on the performance of your system, you must first generate log files in accordance with the L2Logger format version 1.1. Please refer to the L2Logger documentation for more details on how to generate compatible logs.

Once these logs are generated, you'll need to store Single-Task Expert (STE) data and pass the log directories as command-line arguments in order to compute STE-related metrics. Several example files are included to get you started:

  • Example STE and LL log directories:
    • ./examples/ste_logs/ste_task1_1_run1/
    • ./examples/ste_logs/ste_task2_1_run1/
    • ./examples/ste_logs/ste_task3_1_run1/
    • ./examples/ste_logs/ste_task3_1_run2/
    • ./examples/ll_logs/multi_task/
  • Example settings.json file for configuring command-line arguments
  • Example data_range.json file showing how to specify task normalization ranges

Command-Line Execution

Refer to the Command-Line README for more information on how to run L2Metrics from the command line.

Storing Single-Task Expert Data

The following commands, run from the root L2Metrics directory, show how to store STE data from the provided example logs:

python -m l2metrics -l examples/ste_logs/ste_task1_1_run1 -s w
python -m l2metrics -l examples/ste_logs/ste_task2_1_run1 -s w
python -m l2metrics -l examples/ste_logs/ste_task3_1_run1 -s w
python -m l2metrics -l examples/ste_logs/ste_task3_1_run2 -s a

The specified log data will be stored in the taskinfo subdirectory of the $L2DATA directory, where all single-task expert data is pickled and saved. The store mode in the first three example commands is w ("write" or "overwrite"): this mode creates a new pickle file for the STE if one does not already exist, and overwrites any existing file for the same task in the taskinfo location. The last example command uses the append mode, a, which allows users to store multiple runs of STE data in the same pickle file; the STE averaging method can then be selected in the l2metrics module to control how multiple STE runs are handled. Storing STE data assumes the provided log contains data for only a single task/variant.

Replace the log directory argument with logs for other STE tasks and repeat until all STE data is stored.
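
If you have many STE logs, the same commands can be scripted; below is a minimal sketch using Python's subprocess module (the directory list is illustrative; substitute your own STE log paths):

import subprocess
import sys

# Illustrative STE log directories; replace with your own paths.
ste_log_dirs = [
    "examples/ste_logs/ste_task1_1_run1",
    "examples/ste_logs/ste_task2_1_run1",
    "examples/ste_logs/ste_task3_1_run1",
]

for log_dir in ste_log_dirs:
    # -s w creates or overwrites the STE pickle for each task, as described above.
    subprocess.run(
        [sys.executable, "-m", "l2metrics", "-l", log_dir, "-s", "w"],
        check=True,
    )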

Clearing Single-Task Expert Data

To clear all previously stored STE data from the taskinfo subdirectory, run the following command:

python -m l2metrics.clear_ste

Generating Metrics Report

To generate a metrics plot and report with default settings, run the following command from the l2metrics/examples directory:

python -m l2metrics -l ./ll_logs/multi_task -p performance

The default output files are saved in the current working directory under results/ and are described below:

  • multi_task_data.feather: The log data DataFrame containing raw and preprocessed data (see the loading sketch after this list).
  • multi_task_metrics.json: The lifetime and task-level metrics of the run.
  • multi_task_settings.json: The settings used to generate the metrics report.
  • multi_task_regime.tsv: The regime-level metrics of the run.
  • plots/multi_task_evaluation.png: Evaluation block point plots grouped by task labels (i.e., task variants appear on the same subplot).
  • plots/multi_task_learning.png: Plot showing smoothed, normalized learning curves for the lifetime.
  • plots/multi_task_raw.png: Plot showing raw training reward values with smoothed curve overlaid.
  • plots/multi_task_ste.png: Plot showing concatenated learning curves compared to stored STE runs.
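
The feather file can be inspected directly with pandas; a minimal sketch follows (column names depend on your scenario and performance measure, so the print calls are just for exploration):

import pandas as pd

# Load the raw and preprocessed log data saved by the metrics report.
df = pd.read_feather("results/multi_task_data.feather")

print(df.columns.tolist())  # inspect the available columns
print(df.head())            # preview the first few rows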

If you wish to generate a metrics report with modified settings (e.g., disabling normalization or aggregating lifetime metrics with the mean operator), you can either modify the arguments on the command line or specify a JSON file containing the desired settings. The settings loaded from the JSON file will take precedence over any arguments specified on the command line.

python -m l2metrics -c settings.json
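
For example, a minimal settings file might override only the options you care about. The keys below are taken from the full settings example shown later in this document; whether omitted keys fall back to their defaults is an assumption to verify against the Command-Line README:

{
  "log_dir": "ll_logs/multi_task",
  "perf_measure": "performance",
  "aggregation_method": "mean"
}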

Lastly, if you wish to compute metrics on multiple lifetimes at once, set the recursive flag on the command line. When the recursive flag is set, L2Metrics will scan the subdirectories for valid LL logs, calculate metrics, then save a TSV and JSON file containing lifetime/task-level metrics for each discovered lifetime.

python -m l2metrics -l <path/to/directory/containing/multiple/runs> -R
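
The per-lifetime outputs can then be combined for comparison; below is a minimal sketch with pandas, assuming the recursive run writes one metrics TSV per lifetime (the *_metrics.tsv naming pattern is an assumption; adjust the glob to match your actual output files):

from pathlib import Path

import pandas as pd

# Collect every per-lifetime metrics TSV under the output directory.
# The naming pattern below is an assumption; adjust it to your output.
tsv_files = sorted(Path("results").rglob("*_metrics.tsv"))

# Stack the lifetimes into one DataFrame for side-by-side comparison.
combined = pd.concat(
    [pd.read_csv(f, sep="\t") for f in tsv_files], ignore_index=True
)
print(combined.describe())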

Note: If you do not wish to provide a fully qualified path to your log directory, you may copy it to your $L2DATA/logs directory. This is the default location for logs generated using the TEF.
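
Assuming $L2DATA is an ordinary environment variable (its exact configuration is described in the L2Logger documentation), it can be set before running the metrics commands as follows:

Linux:

export L2DATA=<path_to_l2data>

Windows (PowerShell):

$env:L2DATA = "<path_to_l2data>"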

Log Data

Refer to the Log Data README for more information on how to interface with the raw and preprocessed log data from the scenario.

Output Settings File

If saving of L2Metrics settings is enabled, the framework will generate a JSON file containing the primary parameters used to calculate L2Metrics:

{
  "log_dir": "ll_logs\\multi_task",
  "perf_measure": "performance",
  "variant_mode": "aware",
  "ste_averaging_method": "metrics",
  "aggregation_method": "mean",
  "maintenance_method": "mrlep",
  "transfer_method": "ratio",
  "normalization_method": "task",
  "smoothing_method": "flat",
  "window_length": null,
  "clamp_outliers": false
}

Metrics and Metrics File

The metrics module will print the lifetime metrics to the console once it has completed successfully. The following table shows an example of a metrics report output:

Metric                     Value
perf_recovery              -2.00
avg_train_perf             83.82
avg_eval_perf              78.52
perf_maintenance_mrlep      3.86
forward_transfer_ratio     12.63
backward_transfer_ratio     1.08
ste_rel_perf                1.11
sample_efficiency           0.91

If saving is enabled, the framework will also generate a JSON file containing lifetime and task-level metrics for the scenario. Please refer to the File Description README for more information on the format of this file.
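
The metrics JSON can also be consumed programmatically; below is a minimal sketch that only loads and pretty-prints the file, since the exact key layout is documented in the File Description README:

import json

# Load the lifetime and task-level metrics saved by the report.
with open("results/multi_task_metrics.json") as f:
    metrics = json.load(f)

print(json.dumps(metrics, indent=2))  # inspect the structure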

Evaluation Plot

The resulting evaluation plot from the example run should look like this:

[Evaluation plot image]

This figure shows point plots for all tasks in the lifetime grouped by task label (i.e., task variants are shown on the same subplot). The points are the mean values in the evaluation blocks and the lines extending from each point show the 95% confidence intervals. The x-axis represents the block number in the lifetime.

Performance Plot

The resulting learning plot from the example run should look like this:

[Performance plot image]

This figure shows the pre-processed (smoothed, normalized, clamped, etc.) learning curves across the lifetime. The dashed lines in the plot show the slopes between each task's evaluation blocks.

Raw Performance Plot

The resulting raw performance plot from the example run should look like this:

[Raw performance plot image]

This figure shows the raw reward values from learning blocks with the smoothed curves overlaid in black. The values in this figure are not normalized, even if that option is enabled.

Performance Relative to STE Plot

The framework should also produce a performance relative to STE plot, shown below, where the task performance curves are generated by concatenating all of the training data from the scenario:

[STE plot image]

The vertical black dashed lines indicate the block boundaries where task performance was stitched together.

Custom Metrics

See documentation in the examples folder at examples/README.md for more details on how to implement custom metrics.

Changelog

See CHANGELOG.md for a list of notable changes to the project.

License

See LICENSE for license information.

Acknowledgements

Primary development of Lifelong Learning Metrics (L2Metrics) was funded by the DARPA Lifelong Learning Machines (L2M) Program.

© 2021-2022 The Johns Hopkins University Applied Physics Laboratory LLC

