local-sftmx - PyTorch
Project description
LocalSoftmax
Local Softmax parallelizes the softmax computation by splitting the input tensor into smaller sub-tensors and applying the softmax function to each of them independently. In other words, it computes a "local" softmax on each chunk of the tensor rather than a single softmax over the entire tensor.
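To make "local" concrete, the snippet below (an illustrative sketch, not the package's own code) contrasts a global softmax with a chunk-local one. Splitting along dim 0 and normalizing along dim 0 are assumptions here, since the description does not specify the dimensions used.

import torch

x = torch.rand(4, 3)

# Global softmax: normalizes each column over all 4 rows at once.
global_sm = torch.softmax(x, dim=0)

# Local softmax: split into 2 chunks of 2 rows each and normalize
# each chunk independently (the dim choices are assumptions, see above).
chunks = torch.chunk(x, 2, dim=0)
local_sm = torch.cat([torch.softmax(c, dim=0) for c in chunks], dim=0)

# In local_sm, each column of each 2-row chunk sums to 1; in
# global_sm, each column sums to 1 only over all 4 rows.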
Appreciation
- Lucidrains
- Agorians
Install
pip install local-sftmx
Usage
import torch
from local_sfmx import local_softmax

# Apply a local softmax to a 10 x 5 tensor, split into 2 chunks.
tensor = torch.rand(10, 5)
result = local_softmax(tensor, 2)
print(result)
Algorithm
function LocalSoftmax(tensor, num_chunks):
    split tensor into num_chunks smaller tensors
    for each smaller tensor:
        apply the standard softmax
    concatenate the results
    return the concatenated tensor
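A minimal PyTorch sketch of this algorithm (a possible implementation, not necessarily the package's actual source; the split dimension and the softmax dimension are assumptions):

import torch
import torch.nn.functional as F

def local_softmax_sketch(tensor: torch.Tensor, num_chunks: int) -> torch.Tensor:
    # Split the tensor along dim 0 into num_chunks smaller tensors.
    chunks = torch.chunk(tensor, num_chunks, dim=0)
    # Apply the standard softmax to each chunk independently.
    results = [F.softmax(chunk, dim=0) for chunk in chunks]
    # Concatenate the per-chunk results back into one tensor.
    return torch.cat(results, dim=0)

Note that torch.chunk may return fewer than num_chunks pieces when the split dimension is not evenly divisible; the last chunk is simply smaller, and the concatenated result always keeps the input's original shape.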
License
MIT
File details
Details for the file local_sfmx-0.0.4.tar.gz.
File metadata
- Download URL: local_sfmx-0.0.4.tar.gz
- Upload date:
- Size: 4.6 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: poetry/1.3.2 CPython/3.11.0 Darwin/22.4.0
File hashes
Algorithm | Hash digest
---|---
SHA256 | e624b37aabce4a29d453dc113c41b7af09c0910b9a6f00bba1417de4905d2c09
MD5 | 1dbb59fd974c2d73146c6e6f113b3632
BLAKE2b-256 | 791140c77140d775f750eeaa0c66b2fa980c1c8e66c3cbd5eb7c1b3fa277c66d
File details
Details for the file local_sfmx-0.0.4-py3-none-any.whl.
File metadata
- Download URL: local_sfmx-0.0.4-py3-none-any.whl
- Upload date:
- Size: 4.7 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: poetry/1.3.2 CPython/3.11.0 Darwin/22.4.0
File hashes
Algorithm | Hash digest
---|---
SHA256 | f951424a700eedaf780c2fd730ad71a0f4d1238aa0268600a7f70f4299a4370f
MD5 | 7b488408a6488c4a9b337531ada2bf84
BLAKE2b-256 | ab64ea6bac3a2204ded329774b333720db8b69f05ba3a7323dcfa267e0d58b1e