pytorch-focalloss

The pytorch-focalloss package provides a PyTorch implementation of binary and multi-class focal loss.

Focal loss was first described in Lin et al.'s "Focal Loss for Dense Object Detection" (https://arxiv.org/abs/1708.02002).
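Per Lin et al., the focal loss for the predicted probability $p_t$ of the true class is $\mathrm{FL}(p_t) = -(1-p_t)^\gamma \log(p_t)$ (with $\alpha = 1$). A minimal pure-Python sketch of that formula, not the package's implementation:

```python
import math

def focal_term(p_t: float, gamma: float) -> float:
    """Unweighted focal loss for the true-class probability p_t:
    FL(p_t) = -(1 - p_t)**gamma * log(p_t)  (Lin et al., with alpha = 1)."""
    return -((1.0 - p_t) ** gamma) * math.log(p_t)

# gamma = 0 recovers plain cross entropy; gamma > 0 shrinks the loss on
# well-classified (high p_t) samples far more than on hard ones.
easy_ce = focal_term(0.9, 0.0)   # ~0.105
easy_fl = focal_term(0.9, 2.0)   # ~0.00105 (down-weighted ~100x)
hard_ce = focal_term(0.3, 0.0)   # ~1.204
hard_fl = focal_term(0.3, 2.0)   # ~0.590  (down-weighted only ~2x)
```

The modulating factor $(1-p_t)^\gamma$ is what shifts the focus toward hard samples: it vanishes as $p_t \to 1$ but stays near $1$ for badly misclassified samples.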

Binary focal loss

This implementation of binary focal loss extends the original slightly, allowing for multi-label classification with no modification needed, including support for using a different value of $\alpha$ for each label by supplying a tensor of values.

It is built on top of PyTorch's BCEWithLogitsLoss class and supports all of the same arguments. BCEWithLogitsLoss's pos_weight argument is exposed as alpha (but can alternatively be passed as pos_weight for drop-in compatibility with BCEWithLogitsLoss), and the reduction and weight arguments behave the same as in BCEWithLogitsLoss.

Multi-class focal loss

The multi-class focal loss is a logical extension of the original binary focal loss to the N-class case. It similarly accepts a tensor of weights, with one for each class, as $\alpha$.

It is built on top of PyTorch's CrossEntropyLoss class and supports all of the same arguments. CrossEntropyLoss's weight argument is exposed as alpha (but can alternatively be passed as weight for drop-in compatibility with CrossEntropyLoss), and the reduction, ignore_index, and label_smoothing arguments behave the same as in CrossEntropyLoss.

Note that one difference from CrossEntropyLoss is that if all samples have target value ignore_index, then MultiClassFocalLoss returns 0 where CrossEntropyLoss would return nan.
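That difference in reduction behavior can be sketched with plain Python (hypothetical per-sample loss values and a made-up helper, not the package's code):

```python
import math

def reduce_mean(losses, keep, on_empty):
    """Mean over the non-ignored samples; `on_empty` is returned when
    every target equals ignore_index (hypothetical helper)."""
    kept = [l for l, k in zip(losses, keep) if k]
    return sum(kept) / len(kept) if kept else on_empty

losses = [0.7, 1.2]      # per-sample losses (made-up values)
keep = [False, False]    # every target was ignore_index

cel_style = reduce_mean(losses, keep, on_empty=float("nan"))  # nan
mcfl_style = reduce_mean(losses, keep, on_empty=0.0)          # 0.0
```

Returning 0 keeps training loops from propagating nan through a backward pass when a batch happens to contain only ignored targets.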

Demo

See below, or check out DEMO.ipynb in the repository, for a demonstration of how the binary and multi-class focal losses work and how they compare to the standard cross entropy versions.

from torch import float32, ones, randint, randn, tensor
from torch.nn import BCEWithLogitsLoss, CrossEntropyLoss

from torch_focalloss import BinaryFocalLoss, MultiClassFocalLoss

BinaryFocalLoss

BinaryFocalLoss for binary classification

We'll use the same inputs for the whole example to demonstrate how changes in the parameters change the loss value.

First we create our simulated batch of 5 binary labels and raw logits.

preds = randn(5)
target = randint(2, size=(5,), dtype=float32)
print("Logits: ", preds)
print("Target: ", target)
Logits:  tensor([-1.7464, -0.3476,  2.7578, -0.1100,  0.5949])
Target:  tensor([1., 1., 1., 0., 1.])

The normal binary cross entropy loss is the same as focal loss when $\gamma$ (gamma), which determines the strength of focus on difficult samples, is equal to $0$.

gamma = 0

bce = BCEWithLogitsLoss()
bfl = BinaryFocalLoss(gamma=gamma)

print(f"BCE Loss: {bce(preds, target).item():.5f}")
print(f"Focal Loss: {bfl(preds, target).item():.5f}")
BCE Loss: 0.78592
Focal Loss: 0.78592

This is also true when the weight applied to the positive class (1) relative to the negative class (0) is not 1. This parameter is called $\alpha$ (alpha) and is identical to the pos_weight parameter of the BCEWithLogitsLoss class, which is used to help manage class imbalance.

gamma = 0
alpha = 1.5

bce = BCEWithLogitsLoss(pos_weight=tensor(alpha))
bfl = BinaryFocalLoss(gamma=gamma, alpha=alpha)

print(f"BCE Loss: {bce(preds, target).item():.5f}")
print(f"Focal Loss: {bfl(preds, target).item():.5f}")
BCE Loss: 1.11491
Focal Loss: 1.11491

Note that our $\alpha$ is similar, but not identical, to the one in Lin et al.'s "Focal Loss for Dense Object Detection" (https://arxiv.org/abs/1708.02002). Both implementations use $\alpha$ as the weight for the positive class, but Lin et al. weight the negative class by $(1-\alpha)$, whereas our implementation implicitly weights it by $1$. This means that Lin et al.'s $\alpha$ is constrained to $[0,1]$, while ours is unbounded. The formula $\alpha=\frac{\alpha_{\mathrm{Lin}}}{1-\alpha_{\mathrm{Lin}}}$, where $\alpha_{\mathrm{Lin}}$ is the $\alpha$ from Lin et al., converts between the two. However, to eliminate balancing and replicate the behavior of $\alpha=1$ under Lin et al.'s convention, we must set $\alpha_{\mathrm{Lin}}=\frac{1}{2}$ and multiply the final loss by $2$. The conversion therefore preserves the positive-to-negative weight ratio but not the overall scale of the loss, so the learning rate should be reevaluated when switching conventions: our implementation generally requires lower learning rates than Lin et al.'s for $\alpha>1$ and higher for $\alpha<1$.
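The relation $\alpha = \alpha_{\mathrm{Lin}} / (1 - \alpha_{\mathrm{Lin}})$ can be checked numerically (plain arithmetic, with variable names chosen here for illustration):

```python
# alpha_lin: Lin et al.'s alpha in [0, 1]; alpha: this package's convention.
alpha_lin = 0.6
alpha = alpha_lin / (1 - alpha_lin)   # 1.5

# Both conventions produce the same positive-to-negative weight ratio:
# ours is alpha / 1, Lin et al.'s is alpha_lin / (1 - alpha_lin).
assert abs(alpha / 1.0 - alpha_lin / (1 - alpha_lin)) < 1e-12

# But the absolute scale of the loss differs by 1 / (1 - alpha_lin)
# (2x for alpha_lin = 1/2, matching the alpha = 1 case discussed above),
# which is why learning rates may need retuning between conventions.
scale = 1 / (1 - alpha_lin)   # 2.5
```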

Focal loss differs from binary cross entropy loss when $\gamma\neq0$. Technically, $\gamma$ can be less than $0$, but this would increase focus on easy samples and defocus hard samples, which is the opposite of why focal loss is effective. Thus, we will show what happens when $\gamma>0$.

gamma = 2

bce = BCEWithLogitsLoss()
bfl = BinaryFocalLoss(gamma=gamma)

print(f"BCE Loss: {bce(preds, target).item():.5f}")
print(f"Focal Loss: {bfl(preds, target).item():.5f}")
BCE Loss: 0.78592
Focal Loss: 0.37685

BinaryFocalLoss for multi-label classification

Just like binary cross entropy loss, we can use our binary focal loss for multi-label classification without modification.

We will simulate a batch of 5 samples, each with 3 binary labels.

preds = randn(5, 3)
target = randint(2, size=(5, 3), dtype=float32)
print("Logits: \n", preds)
print("Target: \n", target)
Logits:
 tensor([[ 1.4674,  0.1618,  0.6060],
        [ 0.1765,  0.7799,  0.9048],
        [-0.5558,  1.5306, -0.4360],
        [ 0.8222,  0.0072,  0.7803],
        [ 1.1644,  0.3844,  0.4152]])
Target:
 tensor([[0., 1., 0.],
        [0., 1., 0.],
        [0., 1., 1.],
        [1., 1., 1.],
        [1., 0., 0.]])
gamma = 2
alpha = 1.5

bce = BCEWithLogitsLoss(pos_weight=tensor(alpha))
bfl = BinaryFocalLoss(gamma=gamma, alpha=alpha)

print(f"BCE Loss: {bce(preds, target).item():.5f}")
print(f"Focal Loss: {bfl(preds, target).item():.5f}")
BCE Loss: 0.85098
Focal Loss: 0.28560

When doing multi-label classification, you can also specify a value of $\alpha$ for each label by combining them in a tensor.

gamma = 2
alpha = tensor([0.5, 1, 1.5])

bce = BCEWithLogitsLoss(pos_weight=alpha)
bfl = BinaryFocalLoss(gamma=gamma, alpha=alpha)

print(f"BCE Loss: {bce(preds, target).item():.5f}")
print(f"Focal Loss: {bfl(preds, target).item():.5f}")
BCE Loss: 0.74597
Focal Loss: 0.27082
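The per-label $\alpha$ behavior can be sketched in plain Python with a simplified per-element formula (an illustration of the math, not the package's code):

```python
import math

def bfl_elem(logit, y, alpha, gamma):
    """Per-element binary focal loss (sketch): alpha weights the positive
    class, and the negative class implicitly gets weight 1."""
    p = 1 / (1 + math.exp(-logit))   # sigmoid
    p_t = p if y == 1 else 1 - p     # probability of the true label
    w = alpha if y == 1 else 1.0
    return -w * (1 - p_t) ** gamma * math.log(p_t)

# One sample with 3 labels, each label paired with its own alpha,
# reduced with a plain mean over the elements.
logits = [1.0, -0.5, 0.2]
target = [1, 0, 1]
alphas = [0.5, 1.0, 1.5]
loss = sum(
    bfl_elem(z, y, a, gamma=2) for z, y, a in zip(logits, target, alphas)
) / len(logits)
```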

MultiClassFocalLoss

We also extended Lin et al.'s focal loss, which they defined only for the binary case, to the multi-class case.
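The extension is the natural one: with $p = \mathrm{softmax}(\text{logits})$ and target class $y$, the per-sample loss is $-\alpha_y (1 - p_y)^\gamma \log(p_y)$. A pure-Python sketch of that formula (inferred from the description here, not the package source):

```python
import math

def mc_focal(logits, y, gamma, alpha=None):
    """Multi-class focal loss for one sample (sketch):
    -alpha_y * (1 - p_y)**gamma * log(p_y), with p = softmax(logits)."""
    m = max(logits)                            # stabilize the softmax
    exps = [math.exp(z - m) for z in logits]
    p_y = exps[y] / sum(exps)
    a = alpha[y] if alpha is not None else 1.0
    return -a * (1 - p_y) ** gamma * math.log(p_y)

# With gamma = 0 this is exactly cross entropy, -log softmax(logits)[y].
logits, y = [0.1, 1.2, -0.3], 1
ce = -math.log(math.exp(1.2) / sum(math.exp(z) for z in logits))
assert abs(mc_focal(logits, y, gamma=0) - ce) < 1e-12
```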

Our example input will be for a 4-class classification problem, so we will create a sample of 5 labels and 5 sets of logits.

preds = randn(5, 4)
target = randint(4, size=(5,))
print("Logits: \n", preds)
print("Target: \n", target)
Logits:
 tensor([[-0.4086,  0.0477,  0.6028,  0.4822],
        [-0.7605, -0.2642,  0.2710,  1.3920],
        [-0.8921, -1.3226,  0.0824, -1.7038],
        [ 0.1170,  0.9550, -1.3464,  0.5218],
        [ 1.5800,  0.2870,  0.2940, -1.0333]])
Target:
 tensor([1, 2, 3, 0, 2])

Like binary focal loss and binary cross entropy loss, multi-class focal loss and cross entropy loss are the same when $\gamma=0$.

gamma = 0

cel = CrossEntropyLoss()
mcfl = MultiClassFocalLoss(gamma=gamma)

print(f"Cross Entropy Loss: {cel(preds, target).item():.5f}")
print(f"Multi-Class Focal Loss: {mcfl(preds, target).item():.5f}")
Cross Entropy Loss: 1.79243
Multi-Class Focal Loss: 1.79243

This is also true when we apply class balancing weights. We also call these $\alpha$, and they are identical to the "weight" argument of the CrossEntropyLoss class. Note that when using the reduction option "mean", the weighted mean is taken, which means that the sum is divided by the effective number of samples according to the class weights. This is the same behavior as for the standard CrossEntropyLoss class.
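The weighted-mean reduction can be sketched with plain arithmetic (hypothetical per-sample losses and weights, chosen here for illustration):

```python
# Per-sample losses, each paired with the alpha of its target class.
losses = [1.0, 2.0, 0.5]
weights = [0.5, 2.0, 1.0]

weighted = [w * l for w, l in zip(weights, losses)]   # [0.5, 4.0, 0.5]

# "mean" reduction divides by the total weight (the effective number of
# samples), not the raw sample count -- as CrossEntropyLoss does.
weighted_mean = sum(weighted) / sum(weights)   # 5.0 / 3.5
plain_mean = sum(weighted) / len(weighted)     # 5.0 / 3
```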

gamma = 0
alpha = (ones(4) + randn(4)).abs()
print(f"Alpha: {alpha}\n")

cel = CrossEntropyLoss(weight=alpha)
mcfl = MultiClassFocalLoss(gamma=gamma, alpha=alpha)

print(f"Cross Entropy Loss: {cel(preds, target).item():.5f}")
print(f"Multi-Class Focal Loss: {mcfl(preds, target).item():.5f}")
Alpha: tensor([0.3806, 2.1139, 0.1361, 2.1312])

Cross Entropy Loss: 1.93800
Multi-Class Focal Loss: 1.93800

As in the binary case, multi-class focal loss differs from cross entropy loss when $\gamma\neq0$. Again, we will only show what happens when $\gamma>0$.

gamma = 2

cel = CrossEntropyLoss(weight=alpha)
mcfl = MultiClassFocalLoss(gamma=gamma, alpha=alpha)

print(f"Cross Entropy Loss: {cel(preds, target).item():.5f}")
print(f"Multi-Class Focal Loss: {mcfl(preds, target).item():.5f}")
Cross Entropy Loss: 1.93800
Multi-Class Focal Loss: 1.42661

Info

Use the Issues section for questions, feedback, and concerns, or create a Pull Request if you want to contribute!

Thank you for checking out the torch_focalloss package!
