local-sftmx - PyTorch

Project description

LocalSoftmax

Local Softmax parallelizes the softmax computation by splitting the tensor into smaller sub-tensors and applying the softmax function to each of these sub-tensors independently. In other words, we compute a "local" softmax on each chunk of the tensor instead of on the entire tensor.
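To see concretely what "local" means, compare a global softmax over a length-4 vector with softmaxes over its two halves. This is an illustrative sketch, not the library's code; the variable names are mine:

```python
import torch

x = torch.tensor([1.0, 2.0, 3.0, 4.0])

# Global softmax: one distribution over all 4 entries.
global_sm = torch.softmax(x, dim=0)

# Local softmax: split into 2 chunks, normalize each chunk on its own.
halves = torch.chunk(x, 2, dim=0)
local_sm = torch.cat([torch.softmax(h, dim=0) for h in halves])

print(global_sm.sum())  # sums to 1: a single distribution
print(local_sm.sum())   # sums to 2: one distribution per chunk
```

Note that the two results differ: the local variant never compares entries across chunk boundaries, which is what makes the per-chunk computations independent and parallelizable.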

Appreciation

  • Lucidrains
  • Agorians

Install

pip install local-sftmx

Usage

import torch
from local_sfmx import local_softmax

tensor = torch.rand(10, 5)         # input tensor
result = local_softmax(tensor, 2)  # softmax computed over 2 chunks
print(result)

Algorithm

function LocalSoftmax(tensor, num_chunks):
    split tensor into num_chunks smaller tensors
    for each smaller tensor:
        apply standard softmax
    concatenate the results
    return the concatenated tensor
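The pseudocode above can be sketched in a few lines of PyTorch. This is an assumed reconstruction, not the library's actual implementation (in particular, I assume both the split and the softmax run along dimension 0); the function name `local_softmax_sketch` is mine:

```python
import torch
import torch.nn.functional as F

def local_softmax_sketch(tensor: torch.Tensor, num_chunks: int) -> torch.Tensor:
    # Split along dim 0 into num_chunks sub-tensors.
    chunks = torch.chunk(tensor, num_chunks, dim=0)
    # Apply a standard softmax to each chunk independently.
    outputs = [F.softmax(chunk, dim=0) for chunk in chunks]
    # Concatenate the per-chunk results back into one tensor.
    return torch.cat(outputs, dim=0)
```

With a (10, 5) input and num_chunks=2, each column of each 5-row chunk sums to 1, since normalization happens within chunks rather than over the whole tensor.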

License

MIT

Project details


Download files

Source Distribution

local_sfmx-0.0.4.tar.gz (4.6 kB)

Built Distribution

local_sfmx-0.0.4-py3-none-any.whl (4.7 kB)
