
Python utility for text embeddings in MLX.

Project description

MLX Embedding Models

Run text embeddings on your Apple Silicon GPU. Supports any BERT- or RoBERTa-based embedding model, with a curated registry of high-performing models that work out of the box.

Get started by installing from PyPI:

pip install mlx-embedding-models

Then generate embeddings in a few lines of code:

from mlx_embedding_models.embedding import EmbeddingModel
model = EmbeddingModel.from_registry("bge-small")
texts = [
    "isn't it nice to be inside such a fancy computer",
    "the horse raced past the barn fell"
]
embs = model.encode(texts)
print(embs.shape)
# 2, 384
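Since `encode` returns a standard (n_texts, dim) array, ordinary NumPy operations apply directly. As a sketch, here is one common follow-up, computing pairwise cosine similarity between embeddings (random vectors stand in for real model output, so this runs without MLX or a model download):

```python
import numpy as np

# Stand-in for `model.encode(texts)`: a (2, 384) float array,
# matching the shape printed in the example above.
rng = np.random.default_rng(0)
embs = rng.normal(size=(2, 384)).astype(np.float32)

# Normalize each row to unit length; cosine similarity then
# reduces to a plain dot product between rows.
normed = embs / np.linalg.norm(embs, axis=1, keepdims=True)
similarity = normed @ normed.T  # (2, 2) matrix of pairwise similarities

print(similarity.shape)                          # (2, 2)
print(np.allclose(np.diag(similarity), 1.0))     # each vector matches itself
```

Whether you need to normalize depends on the model; some registry models are trained for cosine similarity, others for dot-product scoring, so check the model card for the one you pick.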

Project details

Download files

Source Distribution: mlx_embedding_models-0.0.8.tar.gz (10.9 kB)

Built Distribution: mlx_embedding_models-0.0.8-py3-none-any.whl (12.6 kB, Python 3)