CUDA implementation of power attention

Project description

This is the legacy CUDA kernel for Power Attention; use https://github.com/m-a-n-i-f-e-s-t/power-attention instead.

Build

Build a single wheel

./build_wheels.sh -t <torch version> -p <python version>

Build all wheels

./build_all_wheels.sh

Download files

Download the file for your platform. If you're not sure which to choose, see the Python Packaging User Guide for guidance on installing packages.

Source Distribution

power_attention_cuda-0.1.0.tar.gz (12.0 kB)


File details

Details for the file power_attention_cuda-0.1.0.tar.gz.

File metadata

  • Download URL: power_attention_cuda-0.1.0.tar.gz
  • Upload date:
  • Size: 12.0 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.1.0 CPython/3.12.9

File hashes

Hashes for power_attention_cuda-0.1.0.tar.gz:

  • SHA256: e903e25acf2b644b2b181022a09d66adf54921807f694a422d812fa4f2e205ee
  • MD5: 3b144224f39eaa408bf0859b2a76a60c
  • BLAKE2b-256: c628548e2c67c293613a898058253d937524803ff8e09ab72507439cff80b5e4

See PyPI's documentation for more details on using hashes.
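As a sketch of how the hashes above can be used, the following Python snippet checks a downloaded sdist against the listed SHA256 digest. The filename and digest come from this page; the helper name `verify_sha256` is illustrative, not part of the package.

```python
import hashlib

def verify_sha256(path, expected_hex):
    """Return True if the file at `path` hashes to `expected_hex` (SHA256)."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        # Read in chunks so large files don't need to fit in memory.
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest() == expected_hex

# Example (assumes the sdist has been downloaded to the current directory):
# verify_sha256(
#     "power_attention_cuda-0.1.0.tar.gz",
#     "e903e25acf2b644b2b181022a09d66adf54921807f694a422d812fa4f2e205ee",
# )
```

pip can perform the same check automatically via `--require-hashes` in a requirements file.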
