MLX Embedding Models
A Python utility for running text embeddings in MLX.
Run text embeddings on your Apple Silicon GPU. Supports any BERT- or RoBERTa-based embedding model, with a curated registry of high-performing models that work out of the box.
Get started by installing from PyPI:
pip install mlx-embedding-models
Then embed some text in a few lines of code:
from mlx_embedding_models.embedding import EmbeddingModel

# Load a model from the curated registry
model = EmbeddingModel.from_registry("bge-small")
texts = [
    "isn't it nice to be inside such a fancy computer",
    "the horse raced past the barn fell",
]
embs = model.encode(texts)
print(embs.shape)
# (2, 384)
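The embeddings can be compared directly. Here is a minimal sketch of cosine similarity between the two texts above, assuming encode returns a NumPy array of shape (n_texts, dim); the embeddings are explicitly L2-normalized rather than assumed to be unit-length already:

import numpy as np

# Normalize each row to unit length, then take the dot product of the
# two rows to get the cosine similarity between the two sentences.
norms = np.linalg.norm(embs, axis=1, keepdims=True)
normalized = embs / norms
similarity = float(normalized[0] @ normalized[1])
print(f"cosine similarity: {similarity:.3f}")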
Download files

Two distributions are available for version 0.0.11: a source distribution (mlx_embedding_models-0.0.11.tar.gz) and a built wheel (mlx_embedding_models-0.0.11-py3-none-any.whl).
Hashes for mlx_embedding_models-0.0.11.tar.gz

Algorithm | Hash digest
---|---
SHA256 | 9dcb01462736cca9e9014a032f5df2fb58cad6f1a9d6f70e321103cb67cb136c
MD5 | 8b5cf8b2481af87f4c0eb1a740946aa9
BLAKE2b-256 | a409ff384fc20f602dca2877b823bff15dbec2259edab146bd18992d3e388ebc
Hashes for mlx_embedding_models-0.0.11-py3-none-any.whl

Algorithm | Hash digest
---|---
SHA256 | 518c8d02773bf8afadc832e41ae473bd7433d609084b4930f3ce6ee47fbc6a6e
MD5 | 9662e9a2e219ccff8b626e6e96d78c1f
BLAKE2b-256 | 8e4f5c09805363daa711368c6c3b86d1a24839b2512f3e75406fe4d75cd16f33
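To check a downloaded file against the hashes above, one option is Python's standard hashlib module. A minimal sketch, using the source distribution's SHA256 hash from the table above (substitute the file and digest for whichever distribution you downloaded):

import hashlib

# Compute the SHA256 digest of the downloaded file and compare it
# against the published hash.
with open("mlx_embedding_models-0.0.11.tar.gz", "rb") as f:
    digest = hashlib.sha256(f.read()).hexdigest()

expected = "9dcb01462736cca9e9014a032f5df2fb58cad6f1a9d6f70e321103cb67cb136c"
print("OK" if digest == expected else "MISMATCH")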