Python utility for text embeddings in MLX.
Project description
MLX Embedding Models
Run text embeddings on your Apple Silicon GPU. Supports any BERT- or RoBERTa-based embedding model, with a curated registry of high-performing models that just work off the shelf.
Get started by installing from PyPI:
pip install mlx-embedding-models
Then embed some texts in a few lines of code:
from mlx_embedding_models.embedding import EmbeddingModel

# load a model from the curated registry
model = EmbeddingModel.from_registry("bge-small")
texts = [
    "isn't it nice to be inside such a fancy computer",
    "the horse raced past the barn fell",
]
embs = model.encode(texts)
print(embs.shape)
# (2, 384)
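Each row of embs is the embedding for the corresponding input text, so you can compare texts directly. Here is a minimal sketch of cosine similarity with NumPy; it assumes encode returns a NumPy array (if not, np.asarray(embs) will convert it):

import numpy as np

# normalize each embedding to unit length
unit = embs / np.linalg.norm(embs, axis=1, keepdims=True)
# pairwise cosine similarities: a (2, 2) matrix for the two texts
similarity = unit @ unit.T
print(similarity[0, 1])  # similarity between the two example sentences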
Download files
Download the file for your platform.
Source Distribution

Hashes for mlx_embedding_models-0.0.7.tar.gz

Algorithm | Hash digest
---|---
SHA256 | 00578a2b6623784596f88fdf9510b1f549ebd6eb0b9578f15a870beecd1c48f1
MD5 | 2f00eea3e195955674777c4acba78212
BLAKE2b-256 | a9ae426e0b2f7437d9b0938013806993dd8ee51adcef0798bfb65469ad803e80

Built Distribution
Hashes for mlx_embedding_models-0.0.7-py3-none-any.whl

Algorithm | Hash digest
---|---
SHA256 | 2a244d90f41e4918fb97d7cff59565e4768e4e38b9d044022d2a340c1600f723
MD5 | 58c3de213e501e7ba0d2d9fbbceed02a
BLAKE2b-256 | e4a270f994811c3e6fb83a8b0cccbfd22bfcc88e696c4a6f0efe746e3f07e2a0