Efficient implementations of the Object Condensation losses (Jan Kieseler, 2020)

Object Condensation Loss Functions and Utilities

The Object Condensation loss, developed by Jan Kieseler, is now used by several groups in high energy physics for both track reconstruction and shower reconstruction in calorimeters.

Several implementations of this idea already exist, but they are often maintained by very few people. This repository aims to provide an easy-to-use implementation for both the TensorFlow and PyTorch backends.

Existing Implementations:

Installation

```bash
python3 -m pip install 'object_condensation[pytorch]'
# or
python3 -m pip install 'object_condensation[tensorflow]'
```

Development setup

For the development setup, clone this repository and also add the dev and testing extra options, e.g.,

```bash
python3 -m pip install -e '.[pytorch,dev,testing]'
```

Please also install pre-commit:

```bash
python3 -m pip install pre-commit
pre-commit install  # in the top-level directory of the repository
```

Conventions

Implementations

> [!NOTE]
> For a comparison of the performance of the different implementations, see the docs.

Default

condensation_loss is a straightforward implementation that is easy to read and to verify. It serves as the baseline.
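
For orientation, here is a minimal usage sketch. The keyword names (beta, x, object_id, q_min) and the return value follow the object condensation formulation, but they are assumptions here; check the documentation for the exact signature of your installed version:

```python
import torch

from object_condensation.pytorch import condensation_loss

# Toy batch: 100 nodes embedded in a 2D latent (clustering) space.
beta = torch.rand(100)                   # per-node condensation strength in [0, 1]
x = torch.randn(100, 2)                  # latent coordinates
object_id = torch.randint(0, 5, (100,))  # ground-truth cluster labels (assumed convention)

# Assumed to return the individual loss components (e.g., attractive and repulsive terms).
losses = condensation_loss(beta=beta, x=x, object_id=object_id, q_min=0.5)
```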

Tiger

condensation_loss_tiger saves memory by "masking instead of multiplying". Consider the repulsive loss: it is an aggregation of potentials between condensation points (CPs) and individual nodes. If these potentials are taken to be hinge losses, relu(1 - dist), they vanish for most CP-node pairs (assuming a sufficiently well-trained model).

Now compare the following two implementation strategies, where dist is the CP-node distance matrix:

```python
import torch

# Simplified by assuming that all CP-node pairs enter the repulsive potential
# Strategy 1
v_rep = torch.relu(1 - dist).sum()
# Strategy 2 (tiger)
rep_mask = dist < 1
v_rep = (1 - dist[rep_mask]).sum()
```

In strategy 1, PyTorch keeps all elements of dist in memory for backpropagation (even though most of the relu differentials will be 0). In strategy 2, because the boolean mask is not part of the computational graph, the number of elements carried through backpropagation is greatly reduced.
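
The equivalence of the two strategies can be checked with a small self-contained experiment; dist below is just a random stand-in for a real CP-node distance matrix:

```python
import torch

dist = 10 * torch.rand(1000, 1000)
dist.requires_grad_()

# Strategy 1: relu keeps the full 1000 x 1000 tensor in the graph.
v_rep_1 = torch.relu(1 - dist).sum()

# Strategy 2: only the masked entries are carried through the graph.
rep_mask = dist < 1
v_rep_2 = (1 - dist[rep_mask]).sum()

assert torch.allclose(v_rep_1, v_rep_2)

# Both strategies also yield identical gradients.
(g1,) = torch.autograd.grad(v_rep_1, dist)
(g2,) = torch.autograd.grad(v_rep_2, dist)
assert torch.allclose(g1, g2)

print(f"pairs kept for backprop: {rep_mask.sum().item()} of {dist.numel()}")
```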

However, one problem remains: what if the latent space collapses at some point (for example at the beginning of training)? This would result in batches with greatly increased memory consumption, possibly crashing the run. To counter this, an additional parameter max_n_rep is introduced. If the number of repulsive pairs (rep_mask.sum() in the example above) exceeds max_n_rep, then max_n_rep elements are sampled from rep_mask and upweighted by n_rep/max_n_rep. To check whether this approximation is being used, condensation_loss_tiger returns n_rep in addition to the losses.
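
The following is a simplified, hypothetical sketch of such a cap (not the actual implementation): if too many pairs survive the mask, a random subset is kept and the sum is rescaled so that it remains an unbiased estimate of the full repulsive loss.

```python
import torch

def capped_repulsive_loss(dist: torch.Tensor, max_n_rep: int) -> tuple[torch.Tensor, int]:
    """Hinge repulsion with an upper bound on the number of backpropagated pairs."""
    rep_mask = dist < 1
    n_rep = int(rep_mask.sum())
    if n_rep > max_n_rep:
        # Keep a random subset of the repulsive pairs ...
        pairs = torch.nonzero(rep_mask)
        keep = pairs[torch.randperm(n_rep)[:max_n_rep]]
        sampled = torch.zeros_like(rep_mask)
        sampled[keep[:, 0], keep[:, 1]] = True
        # ... and upweight their contribution by n_rep / max_n_rep.
        return (1 - dist[sampled]).sum() * n_rep / max_n_rep, n_rep
    return (1 - dist[rep_mask]).sum(), n_rep
```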
