Project description

PoPE-pytorch

Efficient implementation of (and explorations into) polar coordinate positional embeddings (PoPE), from Gopalakrishnan et al., out of Schmidhuber's group

Install

$ pip install PoPE-pytorch

Usage

import torch
from PoPE_pytorch import PoPE

# define pope - dim is the per-head feature dimension

pope = PoPE(dim = 64, heads = 8)

# pass in sequence length

pos_emb = pope(1024)

# queries and keys in attention

q = torch.randn(1, 8, 1024, 64)
k = torch.randn(1, 8, 1024, 64)

# training - rotate queries and keys across the full sequence

rotated_q, rotated_k = pope.apply_pope_to_qk(pos_emb, q, k)

# inference - rotate only the latest query position against all the keys

rotated_q, rotated_k = pope.apply_pope_to_qk(pos_emb, q[..., -1:, :], k)
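
The rotated queries and keys then drop into ordinary scaled dot-product attention. Below is a minimal sketch of the unfused path, assuming (per the fused similarity section further down) that applying PoPE expands the head dimension from 64 to 128; the scaling choice here is also an assumption, not the library's:

# values are not rotated and keep the original head dimension

v = torch.randn(1, 8, 1024, 64)

# naive attention over the rotated pair from above (a sketch, not the library's API)

scale = rotated_q.shape[-1] ** -0.5

sim = torch.einsum('b h i d, b h j d -> b h i j', rotated_q, rotated_k) * scale

attn = sim.softmax(dim = -1)
out = torch.einsum('b h i j, b h j d -> b h i d', attn, v)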

Axial PoPE

For images, video, and other data needing positions along multiple axes, you can use AxialPoPE. The feature dimension will be split across the axial dimensions.

You can either pass in the positions manually (a sketch of this follows the example below), or just pass the dimensions as a tuple, in which case the grid positions will be automatically generated.

import torch
from PoPE_pytorch import AxialPoPE

# axial pope for images (e.g. 32x32)
# split 64 dim into 32 (x) and 32 (y)

pope = AxialPoPE(
    dim = 64,
    heads = 8,
    axial_dims = (32, 32)
)

pos_emb = pope((32, 32)) # (1024, 64) frequencies

# for video (e.g. 8 frames, each 16x16)
# split 96 dim into 32 (t), 32 (x), 32 (y)

pope_video = AxialPoPE(
    dim = 96,
    heads = 8,
    axial_dims = (32, 32, 32)
)

pos_emb_video = pope_video((8, 16, 16)) # (2048, 96) frequencies

# queries and keys - then apply pope to q, k as usual

q = torch.randn(1, 8, 2048, 96)
k = torch.randn(1, 8, 2048, 96)

rotated_q, rotated_k = pope_video.apply_pope_to_qk(pos_emb_video, q, k)
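
As mentioned above, positions can also be passed in manually instead of being generated from a shape tuple. A hedged sketch, assuming the forward accepts an (n, num_axes) tensor of per-token coordinates - check the repo for the exact signature:

import torch
from PoPE_pytorch import AxialPoPE

pope = AxialPoPE(dim = 64, heads = 8, axial_dims = (32, 32))

# hypothetical manual coordinates: one (x, y) pair per token,
# reproducing the 32x32 grid that pope((32, 32)) would generate

positions = torch.stack(torch.meshgrid(
    torch.arange(32), torch.arange(32),
    indexing = 'ij'
), dim = -1).reshape(-1, 2)

pos_emb = pope(positions) # assumed (1024, 64) frequencies, matching the tuple form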

Fused Attention Similarity

import torch
from PoPE_pytorch import PoPE, compute_attn_similarity

# define pope

pope = PoPE(dim = 64, heads = 8).cuda()

# get rotations

pos_emb = pope(1024)

# queries and keys

q = torch.randn(1, 8, 1024, 64).cuda()
k = torch.randn(1, 8, 1024, 64).cuda()

# fused attention similarity, avoiding expanding the head dim from 64 to 128

sim = compute_attn_similarity(q, k, pos_emb) # (1, 8, 1024, 1024)

attn = sim.softmax(dim = -1) # then softmax, as usual in attention
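
Aggregating values then proceeds as in any attention layer. A short follow-through, with v introduced here purely for illustration:

v = torch.randn(1, 8, 1024, 64).cuda()

out = torch.einsum('b h i j, b h j d -> b h i d', attn, v)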

Fused Flash Attention

import torch
from PoPE_pytorch import PoPE, flash_attn_with_pope

# pope

pope = PoPE(dim = 32, heads = 8).cuda()

# queries, keys, values for attention

q = torch.randn(2, 8, 1024, 64).cuda()
k = torch.randn(2, 8, 1024, 64).cuda()
v = torch.randn(2, 8, 1024, 64).cuda()

pos_emb = pope(1024)

mask = torch.ones((2, 1024)).bool().cuda()

out = flash_attn_with_pope(q, k, v, pos_emb = pos_emb, causal = True, mask = mask)

assert out.shape == (2, 8, 1024, 64)
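
One way to sanity-check the fused kernel is against the unfused path from the previous section. A sketch only - it assumes compute_attn_similarity applies the same softmax scaling as the flash kernel, and the tolerance is a guess:

from PoPE_pytorch import compute_attn_similarity

# unfused similarity, with an explicit causal mask
# (the all-True key padding mask above is a no-op, so causal alone should match)

sim = compute_attn_similarity(q, k, pos_emb)

causal_mask = torch.ones((1024, 1024), dtype = torch.bool).triu(1).cuda()
sim = sim.masked_fill(causal_mask, float('-inf'))

out_unfused = sim.softmax(dim = -1) @ v

assert torch.allclose(out, out_unfused, atol = 1e-3)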

Citations

@misc{gopalakrishnan2025decouplingwhatwherepolar,
    title   = {Decoupling the "What" and "Where" With Polar Coordinate Positional Embeddings}, 
    author  = {Anand Gopalakrishnan and Robert Csordás and Jürgen Schmidhuber and Michael C. Mozer},
    year    = {2025},
    eprint  = {2509.10534},
    archivePrefix = {arXiv},
    primaryClass = {cs.LG},
    url     = {https://arxiv.org/abs/2509.10534}, 
}

Download files

Download the file for your platform.

Source Distribution

pope_pytorch-0.0.15.tar.gz (214.8 kB)

Built Distribution


pope_pytorch-0.0.15-py3-none-any.whl (15.0 kB)

File details

Details for the file pope_pytorch-0.0.15.tar.gz.

File metadata

  • Download URL: pope_pytorch-0.0.15.tar.gz
  • Upload date:
  • Size: 214.8 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.9.25

File hashes

Hashes for pope_pytorch-0.0.15.tar.gz
  • SHA256: 77b62f7ced9ca16386d9dbe920c7f8415df8a0c19d84a8377bf345fe7c94543a
  • MD5: 8c4e37011359e652ae6c2c149b8fbe0c
  • BLAKE2b-256: 4739307071593ea93f08e6bf8cd346109e45867ec52ec35b7b1f39b2a87c9649


File details

Details for the file pope_pytorch-0.0.15-py3-none-any.whl.

File metadata

  • Download URL: pope_pytorch-0.0.15-py3-none-any.whl
  • Upload date:
  • Size: 15.0 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.9.25

File hashes

Hashes for pope_pytorch-0.0.15-py3-none-any.whl
  • SHA256: 1223a2fca83c5c1001d8540d0afebbe46497bfd2af04efc815a9bdb88c9c261d
  • MD5: 2001912912b259118d1e188176442c4a
  • BLAKE2b-256: 3d0e3552c4133b042799cea4bf7482ab856d502e90bf084ccf56eb79c5b2452f

