# MLX Embedding Models

Python utility for text embeddings in MLX.
Run text embeddings on your Apple Silicon GPU. Supports any BERT- or RoBERTa-based embedding model, with a curated registry of high-performing models that work out of the box.
Get started by installing from PyPI:
```shell
pip install mlx-embedding-models
```
Then embed some text in a few lines of code:
```python
from mlx_embedding_models.embedding import EmbeddingModel

model = EmbeddingModel.from_registry("bge-small")
texts = [
    "isn't it nice to be inside such a fancy computer",
    "the horse raced past the barn fell",
]
embs = model.encode(texts)
print(embs.shape)
# (2, 384)
```
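`encode` returns a plain array of one embedding row per input text, so downstream similarity scoring needs nothing beyond NumPy. Here is a minimal sketch using random vectors as a stand-in for real embeddings; `cosine_similarity` is a hypothetical helper written for illustration, not part of this library:

```python
import numpy as np


def cosine_similarity(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Pairwise cosine similarity between rows of a and rows of b."""
    a_norm = a / np.linalg.norm(a, axis=-1, keepdims=True)
    b_norm = b / np.linalg.norm(b, axis=-1, keepdims=True)
    return a_norm @ b_norm.T


# Stand-in for model.encode(texts): two 384-dim embedding vectors.
embs = np.random.default_rng(0).normal(size=(2, 384))
sims = cosine_similarity(embs, embs)
print(sims.shape)
# (2, 2)
```

Each entry `sims[i, j]` is the similarity between text `i` and text `j`; the diagonal is 1 by construction, since every row is compared with itself.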
Download files
Source Distribution

mlx_embedding_models-0.0.8.tar.gz (10.9 kB)

Built Distribution

mlx_embedding_models-0.0.8-py3-none-any.whl
Hashes for mlx_embedding_models-0.0.8.tar.gz

Algorithm | Hash digest
---|---
SHA256 | 4edb7538ed44c6f13005f88a3f30cf3f013959098355ba18e666e236971d2683
MD5 | d30f2bc69b628809e3434583b27d15c9
BLAKE2b-256 | bc48c4561f87e8b896f6a9e0256648a7e054591ac6da64c700b20ce7b87e423c
Hashes for mlx_embedding_models-0.0.8-py3-none-any.whl

Algorithm | Hash digest
---|---
SHA256 | 625581d2ca804ef782e92edd173944d597884d62ecbf78007dc1b0dd280ee4e1
MD5 | b2a207be110b9d7cf3641f56f27549a1
BLAKE2b-256 | 3b04aab347a3c5a02b2b5843386157272dc49504992e99f12ecdba87a18f7ca8
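Any of the SHA256 digests above can be checked locally against a downloaded file using Python's standard-library `hashlib`; a minimal sketch (`sha256_of` is a hypothetical helper written for illustration):

```python
import hashlib


def sha256_of(path: str) -> str:
    """Hex SHA-256 digest of a file, hashed incrementally in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 16), b""):
            h.update(chunk)
    return h.hexdigest()


# Expected digest for the sdist, copied from the table above.
EXPECTED = "4edb7538ed44c6f13005f88a3f30cf3f013959098355ba18e666e236971d2683"
# Compare against a file you downloaded, e.g.:
# assert sha256_of("mlx_embedding_models-0.0.8.tar.gz") == EXPECTED
```

Chunked reading keeps memory use constant regardless of archive size; the same pattern works for the wheel by swapping in its digest.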