# mlx-arsenal

Reusable mid-level building blocks for MLX: the missing layer between `mlx.nn` and full model implementations, and the toolbox you want when porting PyTorch models to Apple Silicon.
## Install

```bash
pip install mlx-arsenal
```

Or directly from source:

```bash
pip install git+https://github.com/dgrauet/mlx-arsenal.git
```
## Modules

| Module | Components | Replaces (PyTorch) |
|---|---|---|
| `mlx_arsenal.spatial` | `interpolate_nearest`, `interpolate_3d`, `avg_pool1d`, `replicate_pad`, `upsample_nearest`/`bilinear`, `pixel_shuffle`/`unshuffle`, `patchify`/`unpatchify`, `PatchEmbed2d`/`3d` | `F.interpolate`, `F.avg_pool1d`, `F.pad(mode="replicate")`, `F.pixel_shuffle` |
| `mlx_arsenal.layout` | `to_channels_last`/`first`, `channels_last` ctx manager, `convert_conv_weights`, `load_safetensors` | NCHW ↔ NHWC conversion, weight transposition |
| `mlx_arsenal.conv` | `weight_norm`, `WeightNorm` | `nn.utils.weight_norm` |
| `mlx_arsenal.attention` | `causal_mask`, `sliding_window_mask` | Attention mask creation |
| `mlx_arsenal.norm` | `PixelNorm`, `ScaleNorm` | Custom normalization layers |
| `mlx_arsenal.encoding` | `FourierEmbedder` | Sinusoidal positional encoding |
| `mlx_arsenal.moe` | `MoEGate`, `MoELayer` | Top-k mixture-of-experts dispatch |
| `mlx_arsenal.rasterize` | `rasterize_triangles`, `interpolate` | Differentiable triangle rasterization with Metal z-buffer |
| `mlx_arsenal.tiling` | `tiled_process`, `temporal_slice_process` | Memory-efficient large tensor processing |
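The layout helpers above can be pictured as plain index permutations. Here is a minimal NumPy sketch, assuming `to_channels_last` is a pure NCHW → NHWC transpose and that PyTorch `Conv2d` weights `(O, I, kH, kW)` map to a channels-last `(O, kH, kW, I)` layout; both are assumptions about the library's conventions, not its documented API.

```python
import numpy as np

# Hypothetical stand-ins for mlx_arsenal.layout helpers: the real
# functions operate on MLX arrays, but the index permutations are
# assumed to be the ones below.

def to_channels_last(x: np.ndarray) -> np.ndarray:
    """NCHW -> NHWC."""
    return np.transpose(x, (0, 2, 3, 1))

def to_channels_first(x: np.ndarray) -> np.ndarray:
    """NHWC -> NCHW."""
    return np.transpose(x, (0, 3, 1, 2))

def convert_conv_weights(w: np.ndarray) -> np.ndarray:
    """PyTorch Conv2d weight (O, I, kH, kW) -> channels-last (O, kH, kW, I)."""
    return np.transpose(w, (0, 2, 3, 1))

x = np.zeros((2, 3, 8, 8))                       # NCHW activation
assert to_channels_last(x).shape == (2, 8, 8, 3)
assert to_channels_first(to_channels_last(x)).shape == x.shape

w = np.zeros((16, 4, 3, 3))                      # PyTorch Conv2d weight
assert convert_conv_weights(w).shape == (16, 3, 3, 4)
```

Because these are pure transposes, round-tripping `to_channels_last` and `to_channels_first` is lossless.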
## Quick start

```python
from mlx_arsenal.spatial import interpolate_nearest, avg_pool1d, replicate_pad
from mlx_arsenal.layout import to_channels_last, convert_conv_weights
from mlx_arsenal.attention import causal_mask

# Resize a video tensor (B, D, H, W, C)
x_resized = interpolate_nearest(x, size=(8, 32, 32))

# Temporal pooling
pooled = avg_pool1d(temporal_features, kernel_size=2)

# Pad with edge replication (like F.pad mode="replicate")
padded = replicate_pad(x, [(0, 0), (2, 0), (1, 1), (1, 1), (0, 0)])

# Convert PyTorch conv weights to MLX channels-last layout
mlx_weights = convert_conv_weights(pytorch_weights)

# Causal attention mask for autoregressive decoding
mask = causal_mask(seq_len=128, offset=kv_cache_len)
```
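To make the mask semantics concrete, here is a NumPy sketch of what a causal mask with a KV-cache offset typically computes. This illustrates the standard additive-mask convention; the actual return type and sign convention of the library's `causal_mask` are assumptions here, not documented behavior.

```python
import numpy as np

def causal_mask(seq_len: int, offset: int = 0) -> np.ndarray:
    """Additive causal mask (sketch; conventions assumed, not from the docs).

    Query position i may attend to key position j when j <= i + offset,
    where `offset` counts already-cached key/value tokens. Disallowed
    entries are -inf so they vanish under softmax.
    """
    q = np.arange(seq_len)[:, None]           # query positions
    k = np.arange(seq_len + offset)[None, :]  # key positions (cache + new)
    return np.where(k <= q + offset, 0.0, -np.inf)

m = causal_mask(4)
assert m.shape == (4, 4)
assert np.isinf(m[0, 1])        # token 0 cannot attend to token 1
assert m[3, 0] == 0.0           # the last token sees everything before it

m = causal_mask(2, offset=3)    # decoding 2 new tokens with 3 cached
assert m.shape == (2, 5)
assert (m[0, :4] == 0.0).all()  # first new token sees the cache + itself
assert np.isinf(m[0, 4])
```

The nonzero `offset` case is what makes the mask rectangular: with a KV cache, queries cover only the new tokens while keys cover the cached ones as well.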
## Requirements
- Python >= 3.10
- MLX >= 0.27.0
- Apple Silicon Mac
## Development

```bash
pip install -e ".[dev]"
pytest tests/
```
## License
Apache 2.0
## File details: mlx_arsenal-0.1.0.tar.gz (source distribution)

- Size: 28.3 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.14.3

| Algorithm | Hash digest |
|---|---|
| SHA256 | `61a2c6fd159b7aba41a09aa6eb3f70d7f9c30eeffb496fa59c30fd486a5a39db` |
| MD5 | `3af510c935a588794a0891e7677ca9d6` |
| BLAKE2b-256 | `3ba069efc15cff9cd43f4ae763ca80d9fbcf3bfac7cf969e65ecc900edead4be` |
## File details: mlx_arsenal-0.1.0-py3-none-any.whl (built distribution)

- Size: 27.5 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.14.3

| Algorithm | Hash digest |
|---|---|
| SHA256 | `8e38416c49df4dfdd37e0c426a879ec899fa5d185be195e329f9275a35a225ec` |
| MD5 | `9bed43f448a8799f5fc955b96ccd814b` |
| BLAKE2b-256 | `16c0ab5656127a20246eea10551b6e2aaba5260d51b676956f9a4ed2184ebcc7` |