Embed anything at lightning speed

Inference, Ingestion, and Indexing in Rust 🦀
Python docs »
Rust docs »
Benchmarks · FAQ · Adapters · Collaborations

EmbedAnything is a minimalist yet highly performant, lightweight, multisource, multimodal, local embedding pipeline built in Rust. Whether you're working with text, images, audio, PDFs, websites, or other media, EmbedAnything streamlines the process of generating embeddings from various sources and streaming them to a vector database with memory-efficient indexing. It supports dense, sparse, ONNX, Model2Vec, and late-interaction embeddings, offering flexibility for a wide range of use cases.

Table of Contents
  1. About The Project
  2. Getting Started
  3. Usage
  4. Roadmap
  5. Contributing
  6. How to add a custom model and chunk size

🚀 Key Features

  • Candle Backend: Supports BERT, Jina, ColPali, Splade, ModernBERT
  • ONNX Backend: Supports BERT, Jina, ColPali, ColBERT, Splade, Reranker, ModernBERT
  • Cloud Embedding Models: Supports OpenAI and Cohere
  • Multimodality: Works with text (PDF, TXT, MD), images (JPG), and audio (WAV)
  • Rust: All file processing is done in Rust for speed and efficiency
  • GPU Support: Hardware acceleration on GPUs is handled as well
  • Python Interface: Packaged as a Python library for seamless integration into your existing projects
  • Vector Streaming: Continuously create and stream embeddings, even on low-resource machines
  • No PyTorch Dependency: Easy to deploy on the cloud, with a low memory footprint

💡 What is Vector Streaming?

Vector Streaming lets you process files and generate embeddings as a stream: if you have 10 GB of files, it continuously generates embeddings chunk by chunk (with optional semantic segmentation) and stores them in the vector database of your choice, eliminating the need to hold all the embeddings in RAM at once.

The embedding process runs separately from the main process to maintain high performance, enabled by Rust's MPSC channels, with no memory leaks since embeddings are saved directly to the vector database. Find our blog.
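
As a rough sketch of how this fits together (the adapter keyword and the adapter's methods below are assumptions; see the adapter examples in the repository's python folder for the actual interface), embeddings stream to your store batch by batch instead of accumulating in RAM:

import embed_anything
from embed_anything import EmbeddingModel, TextEmbedConfig, WhichModel

# Hypothetical adapter: receives batches of EmbedData as they come off the
# MPSC channel and writes them straight to your vector database, so the
# full set of embeddings never accumulates in RAM.
class PrintAdapter:
    def upsert(self, data):
        print(f"streamed a batch of {len(data)} embeddings")

model = EmbeddingModel.from_pretrained_hf(
    WhichModel.Bert, model_id="sentence-transformers/all-MiniLM-L12-v2"
)
config = TextEmbedConfig(chunk_size=1000, batch_size=32)

# The adapter argument is an assumption based on the adapter guide; with an
# adapter set, embeddings are pushed to it instead of being returned.
embed_anything.embed_file(
    "test_files/attention.pdf", embedder=model, config=config, adapter=PrintAdapter()
)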


🦀 Why Embed Anything

➡️ Faster execution.
➡️ No PyTorch dependency, hence a low memory footprint and easy cloud deployment.
➡️ Memory management: Rust enforces memory safety at compile time, preventing the memory leaks and crashes that can plague other languages.
➡️ True multithreading.
➡️ Runs embedding models locally and efficiently.
➡️ Candle allows inference on CUDA-enabled GPUs right out of the box.
➡️ Decreased memory usage.
➡️ Supports a range of models: dense, sparse, late-interaction, reranker, ModernBERT.

🍓 Our Past Collaborations:

We have collaborated with reputed enterprises like Elastic, Weaviate, SingleStore, and Milvus, as well as Analytics Vidhya DataHour.

You can get in touch with us for further collaborations.

Benchmarks

These benchmarks measure only embedding-model inference speed, on ONNX Runtime. Code

⭐ Supported Models

We support any Hugging Face model on Candle, and we also support the ONNX runtime for BERT and ColPali.

How to add a custom model on Candle: from_pretrained_hf

import embed_anything
from embed_anything import EmbeddingModel, TextEmbedConfig, WhichModel

# Load any BERT-based model from the Hugging Face Hub
model = EmbeddingModel.from_pretrained_hf(
    WhichModel.Bert, model_id="model link from huggingface"
)
config = TextEmbedConfig(chunk_size=1000, batch_size=32)
data = embed_anything.embed_file("file_address", embedder=model, config=config)
Model | HF link
Jina | Jina models
Bert | All BERT-based models
CLIP | openai/clip-*
Whisper | OpenAI Whisper models
ColPali | starlight-ai/colpali-v1.2-merged-onnx
ColBERT | answerdotai/answerai-colbert-small-v1, jinaai/jina-colbert-v2, and more
Splade | Splade models and other Splade-like models
Reranker | Jina reranker models, Xenova/bge-reranker
Model2Vec | model2vec, minishlab/potion-base-8M
Qwen3-Embedding | Qwen/Qwen3-Embedding-0.6B

Splade Models:

model = EmbeddingModel.from_pretrained_hf(
    WhichModel.SparseBert, "prithivida/Splade_PP_en_v1"
)

ONNX-Runtime: from_pretrained_onnx

BERT

model = EmbeddingModel.from_pretrained_onnx(
    WhichModel.Bert, model_id="onnx_model_link"
)

ColPali

model: ColpaliModel = ColpaliModel.from_pretrained_onnx("starlight-ai/colpali-v1.2-merged-onnx", None)
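
A brief usage sketch (the embed_file call on ColpaliModel is an assumption here; check the Python docs for the exact signature):

# Assumed API: embed a PDF page by page with ColPali.
embeddings = model.embed_file("test_files/attention.pdf", batch_size=1)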

Colbert

sentences = [
    "The quick brown fox jumps over the lazy dog",
    "The cat is sleeping on the mat",
    "The dog is barking at the moon",
    "I love pizza",
    "The dog is sitting in the park",
]

model = ColbertModel.from_pretrained_onnx(
    "jinaai/jina-colbert-v2", path_in_repo="onnx/model.onnx"
)
embeddings = model.embed(sentences, batch_size=2)

ModernBERT

model = EmbeddingModel.from_pretrained_onnx(
    WhichModel.Bert, ONNXModel.ModernBERTBase, dtype=Dtype.Q4F16
)

ReRankers

reranker = Reranker.from_pretrained("jinaai/jina-reranker-v1-turbo-en", dtype=Dtype.F16)

results: list[RerankerResult] = reranker.rerank(
    ["What is the capital of France?"],
    ["France is a country in Europe.", "Paris is the capital of France."],
    2,
)
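
To inspect the output, something along these lines should work (the attribute names on RerankerResult below are assumptions; check the Python docs for the exact shape):

# Attribute names (documents, document, relevance_score) are assumptions
# about RerankerResult's shape; consult the Python docs for the real fields.
for result in results:
    for doc in result.documents:
        print(doc.document, doc.relevance_score)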

Embed 4

# Initialize the model once
model: EmbeddingModel = EmbeddingModel.from_pretrained_cloud(
    WhichModel.CohereVision, model_id="embed-v4.0"
)

Qwen 3 - Embedding

# Initialize the model once
model: EmbeddingModel = EmbeddingModel.from_pretrained_hf(
    WhichModel.Qwen3, model_id="Qwen/Qwen3-Embedding-0.6B"
)

For Semantic Chunking

model = EmbeddingModel.from_pretrained_hf(
    WhichModel.Bert, model_id="sentence-transformers/all-MiniLM-L12-v2"
)

# with a semantic encoder
semantic_encoder = EmbeddingModel.from_pretrained_hf(
    WhichModel.Jina, model_id="jinaai/jina-embeddings-v2-small-en"
)
config = TextEmbedConfig(
    chunk_size=1000,
    batch_size=32,
    splitting_strategy="semantic",
    semantic_encoder=semantic_encoder,
)
data: list[EmbedData] = model.embed_file("test_files/attention.pdf", config=config)

For late-chunking

config = TextEmbedConfig(
    chunk_size=1000,
    batch_size=8,
    splitting_strategy="sentence",
    late_chunking=True,
)

# Embed a single file
data: list[EmbedData] = model.embed_file("test_files/attention.pdf", config=config)

🧑‍🚀 Getting Started

💚 Installation

pip install embed-anything

For GPUs and special models like ColPali:

pip install embed-anything-gpu

🚧❌ If you see a CUDA error while running on Windows, run the following command:

import os

os.add_dll_directory("C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v12.6/bin")

Usage

➡️ Usage for versions 0.3 and later

To use local embeddings (we support BERT and Jina):

model = EmbeddingModel.from_pretrained_local(
    WhichModel.Bert, model_id="Hugging_face_link"
)
data = embed_anything.embed_file("test_files/test.pdf", embedder=model)

For multimodal embedding, we support CLIP.

Requirements: a directory with the pictures you want to search. For example, test_files contains images of cats, dogs, etc.

import embed_anything
import numpy as np
from embed_anything import EmbedData
from PIL import Image

model = embed_anything.EmbeddingModel.from_pretrained_local(
    embed_anything.WhichModel.Clip,
    model_id="openai/clip-vit-base-patch16",
    # revision="refs/pr/15",
)
data: list[EmbedData] = embed_anything.embed_image_directory("test_files", embedder=model)
embeddings = np.array([d.embedding for d in data])
query = ["Photo of a monkey?"]
query_embedding = np.array(
    embed_anything.embed_query(query, embedder=model)[0].embedding
)
similarities = np.dot(embeddings, query_embedding)
max_index = np.argmax(similarities)
# Each EmbedData stores the source image path in its text field
Image.open(data[max_index].text).show()

Using ONNX Models

To use ONNX models, you can either use the ONNXModel enum or the model_id from the Hugging Face model.

model = EmbeddingModel.from_pretrained_onnx(
    WhichModel.Bert, model_name=ONNXModel.AllMiniLML6V2Q
)

For some models, you can also specify the dtype to use for the model.

model = EmbeddingModel.from_pretrained_onnx(
    WhichModel.Bert, ONNXModel.ModernBERTBase, dtype=Dtype.Q4F16
)

Using the above method is the best way to ensure the model works correctly, as these models are tested. If you want to use other models, such as fine-tuned ones, you can load them with hf_model_id and path_in_repo, as shown below.

model = EmbeddingModel.from_pretrained_onnx(
    WhichModel.Jina, hf_model_id="jinaai/jina-embeddings-v2-small-en", path_in_repo="model.onnx"
)

To see all the ONNX models supported with model_name, see here.

⁉️ FAQ

Do I need to know Rust to use or contribute to EmbedAnything?

No. EmbedAnything provides PyO3 bindings, so you can run any function in Python without any issues. To contribute, check out our guidelines and the adapter examples in the python folder.

How is it different from FastEmbed?

We provide both backends, Candle and ONNX. On top of that, we offer an end-to-end pipeline: you can ingest different data types, index to any vector database, and run inference with any model. FastEmbed is just an ONNX wrapper.

We've received quite a few questions about why we're using Candle.

One of the main reasons is that Candle doesn't require any specific ONNX format models, which means it can work seamlessly with any Hugging Face model. This flexibility has been a key factor for us. However, we also recognize that we’ve been compromising a bit on speed in favor of that flexibility.

🚧 Contributing to EmbedAnything

First of all, thank you for taking the time to contribute to this project. We truly appreciate your contributions, whether it's bug reports, feature suggestions, or pull requests. Your time and effort are highly valued in this project. 🚀

This document provides guidelines and best practices to help you contribute effectively. These are meant to serve as guidelines, not strict rules. We encourage you to use your best judgment and feel comfortable proposing changes to this document through a pull request.

  • Roadmap
  • Quick Start
  • Guidelines
    🏎️ RoadMap

    Accomplishments

    One of the aims of EmbedAnything is to let AI engineers easily use state-of-the-art embedding models on typical files and documents. A lot has already been accomplished here: the formats listed below are supported right now, and a few more are still to come.

    Adding Fine-tuning

    One of the major goals for this year is to support fine-tuning these models on your own data, the way a simple sentence transformer does.

    🖼️ Modalities and Source

    We’re excited to share that we've expanded our platform to support multiple modalities, including:

    • Audio files

    • Markdowns

    • Websites

    • Images

    • Videos

    • Graph

    This gives you the flexibility to work with various data types all in one place! 🌐

    ⚙️ Performance

    We now support both the Candle and ONNX backends.
    ➡️ Support for GGUF models

    🫐 Embeddings:

    Our infrastructure has supported multimodality from day one. We have already included it for websites, images, and audio, but we want to expand further to:

    ➡️ Graph embeddings: build DeepWalk embeddings depth-first with word2vec
    ➡️ Video embeddings
    ➡️ YOLO + CLIP

    🌊 Expansion to other Vector Adapters

    We currently support a wide range of vector databases for streaming embeddings, including:

    • Elastic: thanks to the amazing and active Elastic team for the contribution
    • Weaviate
    • Pinecone
    • Qdrant
    • Milvus
    • Chroma

    How to add an adapter: https://starlight-search.com/blog/2024/02/25/adapter-development-guide.md
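
    As a rough sketch of what an adapter involves (the method names below are assumptions modelled on the existing adapters in the python folder; the guide above documents the real base class), it boils down to converting EmbedData into your database's record format and upserting it:

    from embed_anything import EmbedData

    # Sketch of a custom vector-database adapter. The method names are
    # assumptions; follow the adapter development guide for the real interface.
    class MyDbAdapter:
        def __init__(self, client):
            self.client = client

        def create_index(self, dimension: int):
            # Create the target collection/index in your database.
            ...

        def convert(self, embeddings: list[EmbedData]) -> list[dict]:
            # Map each EmbedData (vector + source text + metadata) to a record.
            return [
                {"vector": e.embedding, "text": e.text, "metadata": e.metadata}
                for e in embeddings
            ]

        def upsert(self, data: list[EmbedData]):
            self.client.upsert(self.convert(data))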

    💥 Create WASM demos to integrate EmbedAnything directly into the browser.

    💜 Add support for ingestion from remote sources

    ➡️ Support for S3 buckets
    ➡️ Support for Azure Storage
    ➡️ Support for Google Drive/Dropbox

    But we're not stopping there! We're actively working to expand this list.

    Want to contribute? If you'd like to add support for your favorite vector database, we'd love your help! Check out our contribution.md for guidelines, or feel free to reach out directly at starlight-search@proton.me. Let's build something amazing together! 💡

    A big thank you to all our stargazers!

    Star History

    Star History Chart
