
C++ extension for random walks and neighbor sampling on graphs

Project description

MLX-Cluster

High-performance graph algorithms optimized for Apple's MLX framework, featuring random walks, biased random walks, and neighbor sampling.


Documentation | Quickstart

Features

  • **MLX Optimized**: Built specifically for Apple's MLX framework with GPU acceleration
  • **High Performance**: Optimized C++ implementations with Metal shaders for Apple Silicon
  • **Graph Algorithms**:
    • Uniform random walks
    • Biased random walks (Node2Vec style with p/q parameters)
    • Multi-hop neighbor sampling (GraphSAGE style)

Installation

From PyPI (Recommended)

pip install mlx-cluster

From Source

git clone https://github.com/vinayhpandya/mlx_cluster.git
cd mlx_cluster
pip install -e .

Development Installation

git clone https://github.com/vinayhpandya/mlx_cluster.git
cd mlx_cluster
pip install -e . --verbose

Dependencies

Required:

  • Python 3.8+
  • MLX framework
  • NumPy

Optional (for examples and testing):

  • MLX-Graphs
  • PyTorch (for dataset utilities)
  • pytest

Quick Start

Random Walks

import mlx.core as mx
import numpy as np
from mlx_cluster import random_walk
from mlx_graphs.datasets import PlanetoidDataset
from mlx_graphs.utils.sorting import sort_edge_index

# Load dataset
cora = PlanetoidDataset(name="cora")
edge_index = cora.graphs[0].edge_index.astype(mx.int64)

# Convert to CSR format
sorted_edge_index = sort_edge_index(edge_index=edge_index)
row = sorted_edge_index[0][0]
col = sorted_edge_index[0][1]
num_nodes = cora.graphs[0].num_nodes
# np.bincount (with minlength) also counts nodes that have no outgoing
# edges, which np.unique would silently skip
counts = np.bincount(np.array(row, copy=False), minlength=num_nodes)
row_ptr = mx.concatenate([mx.array([0]), mx.array(counts.cumsum())])

# Generate random walks
num_walks = 1000
walk_length = 10
start_nodes = mx.array(np.random.randint(0, cora.graphs[0].num_nodes, num_walks))
rand_values = mx.random.uniform(shape=[num_walks, walk_length])

mx.eval(row_ptr, col, start_nodes, rand_values)
# Perform walks
node_sequences, edge_sequences = random_walk(
    row_ptr, col, start_nodes, rand_values, walk_length, stream=mx.gpu
)

print(f"Generated {num_walks} walks of length {walk_length + 1}")
print(f"Shape: {node_sequences.shape}")
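The CSR conversion above hinges on `row_ptr`: entry `i` is the offset into `col` where node `i`'s neighbors begin. A NumPy-only sketch on a toy graph (independent of MLX and the Cora dataset, for illustration):

```python
import numpy as np

# Toy directed graph with edges already sorted by source node (the CSR
# requirement). Node 2 has no outgoing edges, which is why bincount with
# minlength is safer than np.unique: unique would skip node 2 entirely.
src = np.array([0, 0, 1, 3])
dst = np.array([1, 2, 3, 0])
num_nodes = 4

degrees = np.bincount(src, minlength=num_nodes)    # out-degree per node
row_ptr = np.concatenate([[0], degrees.cumsum()])  # CSR offsets

# Neighbors of node i live in dst[row_ptr[i]:row_ptr[i+1]]
print(row_ptr.tolist())                     # [0, 2, 3, 3, 4]
print(dst[row_ptr[0]:row_ptr[1]].tolist())  # [1, 2]
```

Note how `row_ptr[2] == row_ptr[3]`: node 2's neighbor slice is empty, exactly what the walk kernel needs to handle dead ends.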

Biased Random Walks (Node2Vec)

from mlx_cluster import rejection_sampling

# Biased random walks with p/q parameters
node_sequences, edge_sequences = rejection_sampling(
    row_ptr, col, start_nodes, walk_length,
    p=1.0,  # Return parameter
    q=2.0,  # In-out parameter
    stream=mx.gpu
)
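Conceptually, a Node2Vec-style walk that has just moved from node `t` to node `v` weights each candidate neighbor `x` of `v` by 1/p if `x == t` (return), 1 if `x` is also a neighbor of `t` (stays at distance 1), and 1/q otherwise (explores outward). The sketch below illustrates those unnormalized weights in plain Python; it is not the library's internal implementation, which samples via rejection instead of computing weights explicitly:

```python
def node2vec_weight(prev, candidate, neighbors_of_prev, p, q):
    """Unnormalized Node2Vec transition weight for stepping from the
    current node to `candidate`, having arrived from `prev`."""
    if candidate == prev:               # walk straight back: weight 1/p
        return 1.0 / p
    if candidate in neighbors_of_prev:  # stays close to prev (distance 1)
        return 1.0
    return 1.0 / q                      # moves outward (distance 2)

# Walk arrived at v from t = 0; v's neighbors are {0, 1, 4}, t's are {1, 2}.
weights = {x: node2vec_weight(0, x, {1, 2}, p=1.0, q=2.0) for x in [0, 1, 4]}
print(weights)  # {0: 1.0, 1: 1.0, 4: 0.5}
```

With q > 1 the walk is biased toward staying near its previous position (BFS-like); q < 1 pushes it outward (DFS-like), and p controls how readily it backtracks.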

Neighbor Sampling

from mlx_cluster import neighbor_sample

# Convert to CSC format (required for neighbor sampling)
def create_csc_format(edge_index, num_nodes):
    sources, targets = edge_index[0].tolist(), edge_index[1].tolist()
    edges = sorted(zip(sources, targets), key=lambda x: x[1])
    
    colptr = np.zeros(num_nodes + 1, dtype=np.int64)
    for _, target in edges:
        colptr[target + 1] += 1
    colptr = np.cumsum(colptr)
    
    sorted_sources = [source for source, _ in edges]
    return mx.array(colptr), mx.array(sorted_sources, dtype=mx.int64)

colptr, row = create_csc_format(edge_index, cora.graphs[0].num_nodes)

# Multi-hop neighbor sampling
input_nodes = mx.array([0, 1, 2], dtype=mx.int64)
num_neighbors = [10, 5]  # 10 neighbors in first hop, 5 in second
mx.eval(colptr, row, input_nodes)
samples, rows, cols, edges = neighbor_sample(
    colptr, row, input_nodes, num_neighbors,
    replace=True, directed=True
)

print(f"Sampled {len(samples)} nodes and {len(edges)} edges")
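The CSC layout mirrors CSR with the roles swapped: `colptr[j]` marks where column `j`'s incoming edges start, and `row` holds the source of each edge, sorted by target. A NumPy-only version of the helper above, checked on a toy graph (illustrative, independent of MLX):

```python
import numpy as np

def csc_from_edges(src, dst, num_nodes):
    """Build (colptr, row) with edges sorted by target node."""
    order = np.argsort(dst, kind="stable")           # sort edges by target
    in_degree = np.bincount(dst, minlength=num_nodes)
    colptr = np.concatenate([[0], in_degree.cumsum()])
    return colptr, np.asarray(src)[order]

src = np.array([0, 0, 1, 3])
dst = np.array([1, 2, 3, 0])
colptr, row = csc_from_edges(src, dst, 4)

# Sources of edges into node j live in row[colptr[j]:colptr[j+1]]
print(colptr.tolist())                    # [0, 1, 2, 3, 4]
print(row[colptr[1]:colptr[2]].tolist())  # [0]  (only edge 0 -> 1)
```

Using `np.argsort` and `np.bincount` avoids the Python-level sort and loop of the list-based helper, which matters once the edge list is large.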

Documentation

For comprehensive documentation, examples, and API reference, visit: Documentation

Testing

Run the test suite:

# Install test dependencies
pip install pytest mlx-graphs torch

# Run tests
pytest -s -v

Performance

MLX-Cluster is optimized for Apple Silicon and shows significant speedups:

  • Apple M1/M2/M3: 2-5x faster than CPU-only implementations
  • GPU Acceleration: Automatic optimization for Metal Performance Shaders
  • Memory Efficient: Optimized sparse graph representations
  • Batch Processing: Efficient handling of thousands of concurrent walks

Contributing

We welcome contributions!

  1. Fork the repository
  2. Create your feature branch (git checkout -b feature/new-feature)
  3. Commit your changes (git commit -m 'Add new algorithm')
  4. Push to the branch (git push origin feature/new-feature)
  5. Open a Pull Request

For installation and test instructions, please visit the documentation.

License

This project is licensed under the MIT License - see the LICENSE file for details.

Citation

If you use MLX-Cluster in your research, please cite:

@software{mlx_cluster,
  author = {Vinay Pandya},
  title = {MLX-Cluster: High-Performance Graph Algorithms for Apple MLX},
  url = {https://github.com/vinayhpandya/mlx_cluster},
  version = {0.0.7},
  year = {2025}
}


Download files

Source Distribution

  • mlx_cluster-0.0.7.tar.gz (20.5 kB)

Built Distributions

  • mlx_cluster-0.0.7-cp313-cp313-macosx_14_0_universal2.whl (77.5 kB): CPython 3.13, macOS 14.0+ universal2 (ARM64, x86-64)
  • mlx_cluster-0.0.7-cp312-cp312-macosx_14_0_universal2.whl (77.4 kB): CPython 3.12, macOS 14.0+ universal2 (ARM64, x86-64)
  • mlx_cluster-0.0.7-cp311-cp311-macosx_14_0_universal2.whl (77.8 kB): CPython 3.11, macOS 14.0+ universal2 (ARM64, x86-64)
  • mlx_cluster-0.0.7-cp310-cp310-macosx_14_0_universal2.whl (77.9 kB): CPython 3.10, macOS 14.0+ universal2 (ARM64, x86-64)
  • mlx_cluster-0.0.7-cp39-cp39-macosx_14_0_universal2.whl (77.7 kB): CPython 3.9, macOS 14.0+ universal2 (ARM64, x86-64)
