PyTorch Wrapper for CUDA Functions of Multi-Scale Deformable Attention

Project description

The author of this package has not provided a project description
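No description is provided, but the package name matches the compiled CUDA extension used by Deformable DETR's multi-scale deformable attention operator. As background on what that operator computes, here is a minimal pure-Python sketch of its core step: bilinearly sampling feature maps at offset locations around a reference point and mixing the samples with attention weights. All function and parameter names below are illustrative assumptions, not the package's actual API:

```python
import math

def bilinear_sample(feat, h, w, y, x):
    """Bilinearly interpolate a single-channel h*w feature map (stored as a
    flat row-major list) at fractional coordinates (y, x), with zero padding
    outside the map."""
    x0, y0 = math.floor(x), math.floor(y)
    x1, y1 = x0 + 1, y0 + 1

    def px(yy, xx):
        if 0 <= yy < h and 0 <= xx < w:
            return feat[yy * w + xx]
        return 0.0  # out-of-bounds samples contribute zero

    wx1, wy1 = x - x0, y - y0   # fractional parts
    wx0, wy0 = 1.0 - wx1, 1.0 - wy1
    return (px(y0, x0) * wy0 * wx0 + px(y0, x1) * wy0 * wx1
            + px(y1, x0) * wy1 * wx0 + px(y1, x1) * wy1 * wx1)

def deform_attn_point(feats, ref, offsets, weights):
    """One query, one head: sample each feature level at (reference point +
    learned offset) and sum the samples weighted by attention weights.

    feats:   list of (h, w, flat_values) tuples, one per pyramid level
    ref:     (y, x) reference point, normalized to [0, 1]
    offsets: one (dy, dx) sampling offset per level, in pixels
    weights: one attention weight per level (normally softmax-normalized)
    """
    out = 0.0
    for (h, w, vals), (dy, dx), a in zip(feats, offsets, weights):
        y = ref[0] * (h - 1) + dy
        x = ref[1] * (w - 1) + dx
        out += a * bilinear_sample(vals, h, w, y, x)
    return out
```

The real extension fuses this sampling over many queries, heads, levels, and points per query in a single CUDA kernel (with a matching backward pass); this sketch only shows the per-sample arithmetic.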

Project details


Release history

This version

1.0

Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distributions

No source distribution files are available for this release. See the tutorial on generating distribution archives.

Built Distribution

File details

Details for the file MultiScaleDeformableAttention-0.1-none-any.whl.

File metadata

  • Download URL: MultiScaleDeformableAttention-0.1-none-any.whl
  • Size: 2.0 MB
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/3.4.1 importlib_metadata/4.0.1 pkginfo/1.7.0 requests/2.25.1 requests-toolbelt/0.9.1 tqdm/4.60.0 CPython/3.7.10

File hashes

Hashes for MultiScaleDeformableAttention-0.1-none-any.whl

  • SHA256: 152caec7860d1f39f644ac5eed946b5a4eecfad40764396345b3d0e516921b17
  • MD5: 24c1b182492a4eaffb4341c696969b9d
  • BLAKE2b-256: 35240b9b66dba170611430b5afa8c1da2f6b5da1ac0b2aa6831e2e47cfc28f45

See the PyPI documentation for more details on using hashes.
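To check a downloaded wheel against the digests above, you can recompute its SHA-256 locally and compare. A small helper using Python's standard hashlib (the function name and chunk size are illustrative):

```python
import hashlib

def sha256_of_file(path, chunk_size=8192):
    """Stream a file through SHA-256 in chunks and return the hex digest,
    so large wheels are never loaded into memory at once."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()
```

The result should match the SHA256 value listed above; pip can also enforce this automatically with a requirements file using `--require-hashes`.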
