Python utility for text embeddings in MLX.
Project description
MLX Embedding Models
Run text embeddings on your Apple Silicon GPU. Supports any BERT- or RoBERTa-based embedding model, with a curated registry of high-performing models that just work off the shelf.
Get started by installing from PyPI:
pip install mlx-embedding-models
Then embed some text in a few lines of code:
from mlx_embedding_models.embedding import EmbeddingModel

model = EmbeddingModel.from_registry("bge-small")
texts = [
    "isn't it nice to be inside such a fancy computer",
    "the horse raced past the barn fell",
]
embs = model.encode(texts)
print(embs.shape)
# (2, 384)
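The result is a NumPy array with one row per input text, ready for downstream use. As a minimal sketch (assuming only the encode call shown above; the cosine-similarity step uses plain NumPy and is not part of this package's API), you can compare the two sentences:

import numpy as np

from mlx_embedding_models.embedding import EmbeddingModel

model = EmbeddingModel.from_registry("bge-small")
embs = model.encode([
    "isn't it nice to be inside such a fancy computer",
    "the horse raced past the barn fell",
])

# Normalize each row to unit length, then take the dot product
# of the two rows: this is their cosine similarity.
normed = embs / np.linalg.norm(embs, axis=1, keepdims=True)
similarity = float(normed[0] @ normed[1])
print(f"cosine similarity: {similarity:.3f}")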
Download files
Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
Source Distribution
mlx_embedding_models-0.0.9.tar.gz (10.3 kB)
Built Distribution
mlx_embedding_models-0.0.9-py3-none-any.whl
Hashes for mlx_embedding_models-0.0.9.tar.gz
Algorithm | Hash digest
---|---
SHA256 | 828a2f1751e28da718e114ef4d46b8a642973119e1a73b10c078e7521408d76d
MD5 | 4caccf850d10cefa4d00f52dc47ff74b
BLAKE2b-256 | c7229ce23b10b103cad6c220cebf789a6d5be8d46cd577d39f49192f840700c2
Hashes for mlx_embedding_models-0.0.9-py3-none-any.whl
Algorithm | Hash digest
---|---
SHA256 | b1453071eed8b6aea279935ed66e68e5adfa6eb974b6307667c636032c6696dd
MD5 | e0eb1a86108f5dace358c44211cbd4ae
BLAKE2b-256 | df4f0f634d540ec33d275655a12298a8b14520ea66aaae173643f8d0c934ca1e
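To check a downloaded file against the published SHA256 digest, here is a minimal sketch using Python's standard-library hashlib (it assumes the source distribution above sits in the current directory):

import hashlib

# Expected SHA256 for mlx_embedding_models-0.0.9.tar.gz, from the table above.
EXPECTED = "828a2f1751e28da718e114ef4d46b8a642973119e1a73b10c078e7521408d76d"

with open("mlx_embedding_models-0.0.9.tar.gz", "rb") as f:
    digest = hashlib.sha256(f.read()).hexdigest()

print("OK" if digest == EXPECTED else f"MISMATCH: {digest}")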