Embed anything at lightning speed

🦀 Rust-powered Framework for Lightning-Fast End-to-End Embedding: From Source to VectorDB
Explore the docs »

View Demo · Examples · Vector Streaming Adapters · Search in Audio Space

EmbedAnything is a minimalist, highly performant, lightning-fast, lightweight, multisource, multimodal, and local embedding pipeline built in Rust. Whether you're working with text, images, audio, PDFs, websites, or other media, EmbedAnything streamlines the process of generating embeddings from various sources and streaming them to a vector database as they are produced, keeping indexing memory-efficient. It supports dense, sparse, ONNX, and late-interaction embeddings, offering flexibility for a wide range of use cases.

Table of Contents
  1. About The Project
  2. Getting Started
  3. Usage
  4. Roadmap
  5. Contributing
  6. How to add a custom model and chunk size

🚀 Key Features

  • Local Embedding: Works with local embedding models like BERT and Jina.
  • ONNX Models: Works with ONNX models for BERT and ColPali.
  • ColPali: Supported in the GPU version.
  • Splade: Supports sparse embeddings for hybrid retrieval.
  • Cloud Embedding Models: Supports OpenAI and Cohere.
  • Multimodality: Works with text sources (PDF, TXT, MD), images (JPG), and audio (WAV).
  • Rust: All file processing is done in Rust for speed and efficiency.
  • Candle: Hardware acceleration is taken care of via Candle.
  • Python Interface: Packaged as a Python library for seamless integration into your existing projects.
  • Vector Streaming: Continuously create and stream embeddings when resources are limited.

💡 What is Vector Streaming?

Vector Streaming lets you process a file and generate embeddings chunk by chunk: even a 10 GB file can be embedded continuously, with chunks optionally segmented semantically, and streamed straight into the vector database of your choice. This eliminates holding the full set of embeddings in RAM at once.
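The sketch below illustrates the pattern. MyAdapter and the adapter keyword are hypothetical stand-ins, not the library's confirmed interface; see the Vector Streaming Adapters link above for concrete implementations:

import embed_anything
from embed_anything import EmbeddingModel, TextEmbedConfig, WhichModel

# Hypothetical adapter: any object that receives each batch of embeddings as
# soon as it is produced and writes it to your vector database.
class MyAdapter:
    def upsert(self, embeddings):
        ...  # push this chunk's embeddings to the database; nothing accumulates in RAM

model = EmbeddingModel.from_pretrained_hf(
    WhichModel.Bert, model_id="sentence-transformers/all-MiniLM-L6-v2"
)
config = TextEmbedConfig(chunk_size=256, batch_size=32)

# With an adapter attached, embeddings are streamed out chunk by chunk instead
# of being returned in one large list (the `adapter` kwarg is an assumption here).
embed_anything.embed_file("big_corpus.pdf", embeder=model, adapter=MyAdapter(), config=config)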


🦀 Why Embed Anything

➡️ Faster execution.
➡️ Memory management: Rust enforces memory safety at compile time, preventing the memory leaks and crashes that can plague other languages.
➡️ True multithreading.
➡️ Runs language and embedding models locally and efficiently.
➡️ Candle allows inference on CUDA-enabled GPUs right out of the box.
➡️ Lower memory usage.

⭐ Supported Models

We support any Hugging Face model on Candle, and we also support the ONNX runtime for BERT and ColPali.

How to add a custom model on Candle: from_pretrained_hf

import embed_anything
from embed_anything import EmbeddingModel, TextEmbedConfig, WhichModel

model = EmbeddingModel.from_pretrained_hf(
    WhichModel.Bert, model_id="model link from huggingface"
)
config = TextEmbedConfig(chunk_size=200, batch_size=32)
data = embed_anything.embed_file("file_address", embeder=model, config=config)

| Model   | Custom link |
|---------|-------------|
| Jina    | jinaai/jina-embeddings-v2-base-en, jinaai/jina-embeddings-v2-small-en |
| Bert    | sentence-transformers/all-MiniLM-L6-v2, sentence-transformers/all-MiniLM-L12-v2, sentence-transformers/paraphrase-MiniLM-L6-v2 |
| Clip    | openai/clip-vit-base-patch32 |
| Whisper | Most OpenAI Whisper models on Hugging Face are supported. |

Splade Models:

model = EmbeddingModel.from_pretrained_hf(
    WhichModel.SparseBert, model_id="prithivida/Splade_PP_en_v1"
)
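
Once loaded, a sparse model plugs into the same embed_query entry point used by the dense models later in this README. A minimal sketch (the exact structure of the returned sparse embedding may differ):

import embed_anything
from embed_anything import EmbeddingModel, WhichModel

model = EmbeddingModel.from_pretrained_hf(
    WhichModel.SparseBert, model_id="prithivida/Splade_PP_en_v1"
)
# embed a short query with the sparse model, just like the dense models
data = embed_anything.embed_query(["what is vector streaming?"], embeder=model)
print(data[0].embedding)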

ONNX-Runtime: from_pretrained_onnx

BERT

model = EmbeddingModel.from_pretrained_onnx(
    WhichModel.Bert, model_id="onnx_model_link"
)

ColPali

model: ColpaliModel = ColpaliModel.from_pretrained_onnx("starlight-ai/colpali-v1.2-merged-onnx", None)
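
The loaded model can then embed documents page by page. A hedged sketch; the embed_file method and its batch_size parameter are assumptions modeled on the rest of this API, so check the docs for the exact late-interaction call:

# assumed API: embed a PDF page by page with the late-interaction model
embeddings = model.embed_file("test_files/test.pdf", batch_size=1)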

For Semantic Chunking

model = EmbeddingModel.from_pretrained_hf(
    WhichModel.Bert, model_id="sentence-transformers/all-MiniLM-L12-v2"
)

# with a semantic encoder to drive the chunk boundaries
semantic_encoder = EmbeddingModel.from_pretrained_hf(
    WhichModel.Jina, model_id="jinaai/jina-embeddings-v2-small-en"
)
config = TextEmbedConfig(
    chunk_size=256, batch_size=32, splitting_strategy="semantic", semantic_encoder=semantic_encoder
)
data = embed_anything.embed_file("test_files/test.pdf", embeder=model, config=config)

🧑‍🚀 Getting Started

💚 Installation

pip install embed-anything

For GPUs and special models like ColPali:

pip install embed-anything-gpu

Usage

➡️ Usage for versions 0.3 and later

To use local embedding models (we support BERT and Jina):

model = EmbeddingModel.from_pretrained_local(
    WhichModel.Bert, model_id="Hugging_face_link"
)
data = embed_anything.embed_file("test_files/test.pdf", embeder=model)

For multimodal embedding, we support CLIP.

Requirements: a directory with the images you want to search; for example, test_files with images of cats, dogs, etc.

import embed_anything
import numpy as np
from embed_anything import EmbedData
from PIL import Image

model = embed_anything.EmbeddingModel.from_pretrained_local(
    embed_anything.WhichModel.Clip,
    model_id="openai/clip-vit-base-patch16",
    # revision="refs/pr/15",
)
data: list[EmbedData] = embed_anything.embed_directory("test_files", embeder=model)
embeddings = np.array([d.embedding for d in data])

query = ["Photo of a monkey?"]
query_embedding = np.array(
    embed_anything.embed_query(query, embeder=model)[0].embedding
)

# rank the images by dot-product similarity and show the best match
similarities = np.dot(embeddings, query_embedding)
max_index = np.argmax(similarities)
Image.open(data[max_index].text).show()

Audio Embedding using Whisper

Requirements: audio files in .wav format.

import embed_anything
from embed_anything import (
    AudioDecoderModel,
    EmbeddingModel,
    embed_audio_file,
    TextEmbedConfig,
)
# choose any whisper or distilwhisper model from https://huggingface.co/distil-whisper or https://huggingface.co/collections/openai/whisper-release-6501bba2cf999715fd953013
audio_decoder = AudioDecoderModel.from_pretrained_hf(
    "openai/whisper-tiny.en", revision="main", model_type="tiny-en", quantized=False
)
embeder = EmbeddingModel.from_pretrained_hf(
    embed_anything.WhichModel.Bert,
    model_id="sentence-transformers/all-MiniLM-L6-v2",
    revision="main",
)
config = TextEmbedConfig(chunk_size=200, batch_size=32)
data = embed_audio_file(
    "test_files/audio/samples_hp0.wav",
    audio_decoder=audio_decoder,
    embeder=embeder,
    text_embed_config=config,
)
print(data[0].metadata)

🚧 Contributing to EmbedAnything

First of all, thank you for taking the time to contribute to this project. We truly appreciate your contributions, whether it's bug reports, feature suggestions, or pull requests. Your time and effort are highly valued in this project. 🚀

This document provides guidelines and best practices to help you contribute effectively. These are meant to serve as guidelines, not strict rules. We encourage you to use your best judgment and feel comfortable proposing changes to this document through a pull request.

  • Roadmap
  • Quick Start
  • Guidelines

🏎️ RoadMap

Accomplishments

One of the aims of EmbedAnything is to let AI engineers easily use state-of-the-art embedding models on typical files and documents. A lot has already been accomplished; these are the formats we support right now, with a few more on the way.

🖼️ Modalities and Sources

We're excited to share that we've expanded our platform to support multiple modalities, including:

  • Audio files

  • Markdown

  • Websites

  • Images

  • Videos

  • Graphs

This gives you the flexibility to work with various data types all in one place! 🌐

💜 Product

We've rolled out some major updates in version 0.3 to improve both functionality and performance. Here's what's new:

  • Semantic Chunking: Optimized chunking strategy for better Retrieval-Augmented Generation (RAG) workflows.

  • Streaming for Efficient Indexing: We've introduced streaming for memory-efficient indexing in vector databases. Want to know more? Check out our article on this feature: https://www.analyticsvidhya.com/blog/2024/09/vector-streaming/

  • Zero-Shot Applications: Explore our zero-shot application demos to see the power of these updates in action.

  • Intuitive Functions: Version 0.3 includes a complete refactor for more intuitive functions, making the platform easier to use.

  • Chunkwise Streaming: Instead of file-by-file streaming, we now support chunkwise streaming, allowing for more flexible and efficient data processing.

Check out the latest release and see how these features can supercharge your generative-AI pipeline! ✨

🚀 Coming Soon

⚙️ Performance

We've received quite a few questions about why we're using Candle, so here's a quick explanation:

One of the main reasons is that Candle doesn't require models in a specific ONNX format, which means it can work seamlessly with any Hugging Face model. This flexibility has been a key factor for us. However, we also recognize that we've been compromising a bit on speed in favor of that flexibility.

What's next? To address this, we're excited to announce that we're introducing Candle-ONNX alongside our existing Hugging Face framework, bringing:

  • Support for GGUF models

  • Significantly faster performance

Stay tuned for these exciting updates! 🚀

🫐 Embeddings

We have had multimodality in our infrastructure from day one. We already support websites, images, and audio, but we want to expand further to:

☑️ Graph embeddings: build DeepWalk embeddings (depth-first random walks plus word2vec); a sketch of the idea follows this list
☑️ Video embeddings
☑️ YOLO + CLIP
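
For context, the graph item above is the classic DeepWalk recipe: sample truncated random walks from every node, then train word2vec on the walks as if they were sentences. A minimal sketch of that idea in plain Python with gensim (illustrative only, not an EmbedAnything API):

import random
from gensim.models import Word2Vec

def random_walks(graph, walks_per_node=10, walk_len=8):
    """graph: dict mapping each node to a list of its neighbours."""
    walks = []
    for node in graph:
        for _ in range(walks_per_node):
            walk = [node]
            while len(walk) < walk_len:
                walk.append(random.choice(graph[walk[-1]]))
            walks.append([str(n) for n in walk])
    return walks

graph = {"a": ["b", "c"], "b": ["a", "c"], "c": ["a", "b"]}
walks = random_walks(graph)
# skip-gram (sg=1) over the walks: each walk is treated as a "sentence"
model = Word2Vec(sentences=walks, vector_size=64, window=4, min_count=1, sg=1)
node_vec = model.wv["a"]  # learned embedding for node "a"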

🌊 Expansion to Other Vector Adapters

We currently support a range of vector databases for streaming embeddings, including:

  • Elastic: thanks to the amazing and active Elastic team for the contribution

  • Weaviate

  • Pinecone

But we're not stopping there! We're actively working to expand this list.

Want to contribute? If you'd like to add support for your favorite vector database, we'd love to have your help! Check out our contribution.md for guidelines, or feel free to reach out directly at starlight-search@proton.me. Let's build something amazing together! 💡
