
A Continual Learning Framework for both JAX and PyTorch.


Sequel: A Continual Learning Library in PyTorch and JAX

The goal of this library is to provide a simple, easy-to-use framework for continual learning. The library is written in both PyTorch and JAX and provides a unified interface for running experiments. It is still under development; more algorithms and datasets are being added.

Installation

The library can be installed via pip:

pip install sequel-core

Alternatively, you can install the library from source:

git clone https://github.com/nik-dim/sequel.git
cd sequel
python3 -m build
pip install dist/sequel_core-*.whl

Alternatively, you can use the library by cloning the repository and installing the dependencies listed in the requirements.txt file. We recommend using a conda environment for this. The following commands create a conda environment with the required packages and activate it:

# create the conda environment
conda create -n sequel -y python=3.10 cuda cudatoolkit cuda-nvcc -c nvidia -c anaconda
conda activate sequel 

# install all required packages
pip install -r requirements.txt

# Optional: Depending on the machine, the next command might be needed to enable CUDA support for GPUs
pip install jax[cuda11_cudnn82] -f https://storage.googleapis.com/jax-releases/jax_cuda_releases.html
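Before running experiments, it can help to sanity-check that the key dependencies resolved. The sketch below uses only the standard library (no Sequel-specific APIs); the package names it probes are assumptions based on the installation steps above:

```python
import importlib.util


def check_packages(pkgs):
    """Return a dict mapping each package name to whether it can be imported."""
    return {p: importlib.util.find_spec(p) is not None for p in pkgs}


if __name__ == "__main__":
    # Packages the requirements.txt is expected to pull in (names assumed).
    for pkg, ok in check_packages(["torch", "jax", "hydra"]).items():
        print(f"{pkg}: {'installed' if ok else 'missing'}")
```

A missing entry usually means the corresponding `pip install` step above was skipped or failed.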

Run an experiment

For a first look, you can modify the example_pytorch.py and example_jax.py files, or run them as-is:

# example experiment on PyTorch
python example_pytorch.py

# ...and in JAX
python example_jax.py

Experiment configs are located in the configs/examples/ directory. To run an experiment, you simply do the following:

python main.py +experiment=EXPERIMENT_DIR/EXPERIMENT

# examples
python main.py +examples=ewc_rotatedmnist       mode=pytorch        # or mode=jax
python main.py +examples=mcsgd_rotatedmnist     mode=pytorch        # or mode=jax

To create your own experiment, follow the template of the experiments in configs/examples/: override the defaults to select, e.g., a different algorithm, and specify the training details. To run multiple experiments with different configs, use the --multirun flag of Hydra, which launches one run per combination of the comma-separated values. For instance:

python main.py --multirun +examples=ewc_rotatedmnist \
     mode=pytorch optimizer.lr=0.01,0.001 \
     benchmark.batch_size=128,256 \
     training.epochs_per_task=1  # online setting
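A custom experiment config might look like the sketch below. The file name and the `defaults` group names are assumptions based on typical Hydra layouts, not taken from the repository; the `optimizer`, `benchmark`, and `training` keys mirror the command-line overrides shown above:

```yaml
# configs/examples/my_experiment.yaml -- hypothetical layout
defaults:
  - override /algo: ewc              # assumed config-group name
  - override /benchmark: rotatedmnist

optimizer:
  lr: 0.001
benchmark:
  batch_size: 128
training:
  epochs_per_task: 5
```

It would then be selected the same way as the bundled examples, e.g. `python main.py +examples=my_experiment mode=pytorch`.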

Download files

Source distribution: sequel-core-0.0.2.tar.gz (59.8 kB)
Built distribution: sequel_core-0.0.2-py3-none-any.whl (88.7 kB)
