Embed anything at lightning speed

Highly Performant, Modular and Memory Safe
Ingestion, Inference and Indexing in Rust 🦀
Python docs »
Rust docs »
Benchmarks · FAQ · Adapters · Collaborations · Notebooks

EmbedAnything is a minimalist yet highly performant, modular, lightning-fast, lightweight, multisource, multimodal, and local embedding pipeline built in Rust. Whether you're working with text, images, audio, PDFs, websites, or other media, EmbedAnything streamlines the process of generating embeddings from various sources and streams them to a vector database for memory-efficient indexing. It supports dense, sparse, ONNX, Model2Vec, and late-interaction embeddings, offering flexibility for a wide range of use cases.

Table of Contents
  1. About The Project
  2. Getting Started
  3. Usage
  4. Roadmap
  5. Contributing
  6. How to add custom model and chunk size

🚀 Key Features

  • No dependency on PyTorch: easy to deploy on the cloud, with a low memory footprint.
  • Highly modular: choose any vector DB adapter for RAG with one line of code.
  • Candle backend: supports BERT, Jina, ColPali, Splade, ModernBERT, rerankers, and Qwen.
  • ONNX backend: supports BERT, Jina, ColPali, ColBERT, Splade, rerankers, ModernBERT, and Qwen.
  • Cloud embedding models: supports OpenAI, Cohere, and Gemini.
  • Multimodality: works with text sources (PDF, TXT, MD), images (JPG), and audio (WAV).
  • GPU support: hardware acceleration on GPUs.
  • Chunking: built-in chunking methods such as semantic and late chunking.
  • Vector streaming: file processing, inference, and indexing run on separate threads, reducing latency.

💡What is Vector Streaming

Embedding models are computationally expensive and time-consuming. By separating document preprocessing from model inference, you can significantly reduce pipeline latency and improve throughput.

Vector streaming transforms a sequential bottleneck into an efficient, concurrent workflow.

The embedding process runs separately from the main process, maintaining high throughput via Rust's MPSC channels, and avoids memory bloat because embeddings are streamed directly to the vector database. See our blog for details.
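
Here is a minimal sketch of what vector streaming looks like from Python. The adapter name and the commented-out `adapter` argument are illustrative assumptions; see the Adapters notebook and the python folder for the real adapter classes.

import embed_anything
from embed_anything import EmbeddingModel, WhichModel, TextEmbedConfig

model = EmbeddingModel.from_pretrained_hf(
    WhichModel.Bert,
    model_id="sentence-transformers/all-MiniLM-L12-v2"
)
config = TextEmbedConfig(chunk_size=1000, batch_size=32)

# adapter = WeaviateAdapter(...)  # hypothetical: any vector DB adapter from the examples
# Passing an adapter streams each batch of embeddings straight to the database
# as it is produced, so embeddings never pile up in Python memory.
data = embed_anything.embed_file(
    "test_files/document.pdf",
    embedder=model,
    config=config,
    # adapter=adapter,
)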


🦀 Why Embed Anything

➡️Faster execution.
➡️No PyTorch dependency, so a low memory footprint and easy cloud deployment.
➡️True multithreading
➡️Runs embedding models locally and efficiently
➡️Built-in chunking methods such as semantic and late chunking
➡️Supports a range of models: dense, sparse, late-interaction, reranker, ModernBERT.
➡️Memory management: Rust enforces memory safety at compile time, preventing the memory leaks and crashes that can plague other languages.

🍓 Our Past Collaborations:

We have collaborated with reputable enterprises such as Elastic, Weaviate, SingleStore, Milvus, and Analytics Vidhya DataHour.

You can get in touch with us for further collaborations.

Benchmarks

Inference Speed benchmarks.

Measures only embedding-model inference speed, on the ONNX runtime. Code

Benchmarks against other frameworks are coming soon! 🚀

⭐ Supported Models

We support any Hugging Face model on Candle, and we also support the ONNX runtime for BERT and ColPali.

How to add a custom model on Candle: from_pretrained_hf

from embed_anything import EmbeddingModel, WhichModel, TextEmbedConfig
import embed_anything

# Load a custom BERT model from Hugging Face
model = EmbeddingModel.from_pretrained_hf(
    WhichModel.Bert, 
    model_id="sentence-transformers/all-MiniLM-L12-v2"
)

# Configure embedding parameters
config = TextEmbedConfig(
    chunk_size=1000,      # Maximum characters per chunk
    batch_size=32,        # Number of chunks to process in parallel
    splitting_strategy="sentence"  # How to split text: "sentence", "word", or "semantic"
)

# Embed a file (supports PDF, TXT, MD, etc.)
data = embed_anything.embed_file("path/to/your/file.pdf", embedder=model, config=config)

# Access the embeddings and text
for item in data:
    print(f"Text: {item.text[:100]}...")  # First 100 characters
    print(f"Embedding shape: {len(item.embedding)}")
    print(f"Metadata: {item.metadata}")
    print("---" * 20)

Supported models and their Hugging Face links:

• Jina: Jina models
• Bert: all BERT-based models
• CLIP: openai/clip-*
• Whisper: OpenAI Whisper models
• ColPali: starlight-ai/colpali-v1.2-merged-onnx
• ColBERT: answerdotai/answerai-colbert-small-v1, jinaai/jina-colbert-v2, and more
• Splade: Splade models and other Splade-like models
• Model2Vec: model2vec, minishlab/potion-base-8M
• Qwen3-Embedding: Qwen/Qwen3-Embedding-0.6B
• Reranker: Jina reranker models, Xenova/bge-reranker, Qwen/Qwen3-Reranker-4B

Splade Models (Sparse Embeddings)

Sparse embeddings are useful for keyword-based retrieval and hybrid search scenarios.

from embed_anything import EmbeddingModel, WhichModel, TextEmbedConfig
import embed_anything

# Load a SPLADE model for sparse embeddings
model = EmbeddingModel.from_pretrained_hf(
    WhichModel.SparseBert, 
    model_id="prithivida/Splade_PP_en_v1"
)

# Configure the embedding process
config = TextEmbedConfig(chunk_size=1000, batch_size=32)

# Embed text files
data = embed_anything.embed_file("test_files/document.txt", embedder=model, config=config)

# Sparse embeddings are useful for hybrid search (combining dense and sparse)
for item in data:
    print(f"Text: {item.text}")
    print(f"Sparse embedding (non-zero values): {sum(1 for x in item.embedding if x != 0)}")

ONNX-Runtime: from_pretrained_onnx

ONNX models provide faster inference and lower memory usage. Use the ONNXModel enum for pre-configured models or provide a custom model path.

BERT Models

from embed_anything import EmbeddingModel, WhichModel, ONNXModel, Dtype, TextEmbedConfig
import embed_anything

# Option 1: Use a pre-configured ONNX model (recommended)
model = EmbeddingModel.from_pretrained_onnx(
    WhichModel.Bert, 
    model_id=ONNXModel.BGESmallENV15Q  # Quantized BGE model for faster inference
)

# Option 2: Use a custom ONNX model from Hugging Face
model = EmbeddingModel.from_pretrained_onnx(
    WhichModel.Bert, 
    model_id="onnx_model_link",
    dtype=Dtype.F16  # Use half precision for faster inference
)

# Embed files with ONNX model
config = TextEmbedConfig(chunk_size=1000, batch_size=32)
data = embed_anything.embed_file("test_files/document.pdf", embedder=model, config=config)

ModernBERT (Quantized)

ModernBERT is a state-of-the-art BERT variant optimized for efficiency.

from embed_anything import EmbeddingModel, WhichModel, ONNXModel, Dtype

# Load quantized ModernBERT for maximum efficiency
model = EmbeddingModel.from_pretrained_onnx(
    WhichModel.Bert, 
    model_id=ONNXModel.ModernBERTBase, 
    dtype=Dtype.Q4F16  # 4-bit quantized for minimal memory usage
)

# Use it like any other model
data = embed_anything.embed_file("test_files/document.pdf", embedder=model)

ColPali (Document Embedding)

ColPali is optimized for document and image-text embedding tasks.

from embed_anything import ColpaliModel
import numpy as np

# Load ColPali ONNX model
model = ColpaliModel.from_pretrained_onnx(
    "starlight-ai/colpali-v1.2-merged-onnx", 
    None
)

# Embed a PDF file (ColPali processes pages as images)
data = model.embed_file("test_files/document.pdf", batch_size=1)

# Query the embedded document
query = "What is the main topic?"
query_embedding = model.embed_query(query)

# Calculate similarity scores
file_embeddings = np.array([e.embedding for e in data])
query_emb = np.array([e.embedding for e in query_embedding])

# Find most relevant pages
scores = np.einsum("bnd,csd->bcns", query_emb, file_embeddings).max(axis=3).sum(axis=2).squeeze()
top_pages = np.argsort(scores)[::-1][:5]

for page_idx in top_pages:
    print(f"Page {data[page_idx].metadata['page_number']}: {data[page_idx].text[:200]}")

ColBERT (Late-Interaction Embeddings)

ColBERT provides token-level embeddings for fine-grained semantic matching.

from embed_anything import ColbertModel
import numpy as np

# Load ColBERT ONNX model
model = ColbertModel.from_pretrained_onnx(
    "jinaai/jina-colbert-v2", 
    path_in_repo="onnx/model.onnx"
)

# Embed sentences
sentences = [
    "The quick brown fox jumps over the lazy dog", 
    "The cat is sleeping on the mat", 
    "The dog is barking at the moon", 
    "I love pizza", 
    "The dog is sitting in the park"
]

# ColBERT returns token-level embeddings
embeddings = model.embed(sentences, batch_size=2)

# Each embedding is a matrix: [num_tokens, embedding_dim]
for i, emb in enumerate(embeddings):
    print(f"Sentence {i+1}: {sentences[i]}")
    print(f"Embedding shape: {emb.shape}")  # Shape: (num_tokens, embedding_dim)

ReRankers

Rerankers improve retrieval quality by re-scoring candidate documents.

from embed_anything import Reranker, Dtype, RerankerResult, DocumentRank

# Load a reranker model
reranker = Reranker.from_pretrained(
    "jinaai/jina-reranker-v1-turbo-en", 
    dtype=Dtype.F16
)

# Query and candidate documents
query = "What is the capital of France?"
candidates = [
    "France is a country in Europe.", 
    "Paris is the capital of France.",
    "The Eiffel Tower is in Paris."
]

# Rerank documents (returns top-k results)
results: list[RerankerResult] = reranker.rerank(
    [query], 
    candidates, 
    top_k=2  # Return top 2 results
)

# Access reranked results
for result in results:
    documents: list[DocumentRank] = result.documents
    for doc in documents:
        print(f"Score: {doc.score:.4f} | Text: {doc.text}")

Cloud Embedding Models (Cohere Embed v4)

Use cloud models for high-quality embeddings without local model deployment.

from embed_anything import EmbeddingModel, WhichModel
import embed_anything
import os

# Set your API key
os.environ["COHERE_API_KEY"] = "your-api-key-here"

# Initialize the cloud model
model = EmbeddingModel.from_pretrained_cloud(
    WhichModel.CohereVision, 
    model_id="embed-v4.0"
)

# Use it like any other model
data = embed_anything.embed_file("test_files/document.pdf", embedder=model)

Qwen 3 - Embedding

Qwen3 supports over 100 languages including various programming languages.

from embed_anything import EmbeddingModel, WhichModel, TextEmbedConfig, Dtype
import numpy as np

# Initialize Qwen3 embedding model
model = EmbeddingModel.from_pretrained_hf(
    WhichModel.Qwen3, 
    model_id="Qwen/Qwen3-Embedding-0.6B",
    dtype=Dtype.F32
)

# Configure embedding
config = TextEmbedConfig(
    chunk_size=1000,
    batch_size=2,
    splitting_strategy="sentence"
)

# Embed a file
data = model.embed_file("test_files/document.pdf", config=config)

# Query embedding
query = "Which GPU is used for training"
query_embedding = np.array(model.embed_query([query])[0].embedding)

# Calculate similarities
embedding_array = np.array([e.embedding for e in data])
similarities = np.matmul(query_embedding, embedding_array.T)

# Get top results
top_5_indices = np.argsort(similarities)[-5:][::-1]
for idx in top_5_indices:
    print(f"Score: {similarities[idx]:.4f} | {data[idx].text[:200]}")

For Semantic Chunking

Semantic chunking preserves meaning by splitting text at semantically meaningful boundaries rather than fixed sizes.

from embed_anything import EmbeddingModel, WhichModel, TextEmbedConfig
import embed_anything

# Main embedding model for generating final embeddings
model = EmbeddingModel.from_pretrained_hf(
    WhichModel.Bert, 
    model_id="sentence-transformers/all-MiniLM-L12-v2"
)

# Semantic encoder for determining chunk boundaries
# This model analyzes text to find natural semantic breaks
semantic_encoder = EmbeddingModel.from_pretrained_hf(
    WhichModel.Jina, 
    model_id="jinaai/jina-embeddings-v2-small-en"
)

# Configure semantic chunking
config = TextEmbedConfig(
    chunk_size=1000,                    # Target chunk size
    batch_size=32,                      # Batch processing size
    splitting_strategy="semantic",      # Use semantic splitting
    semantic_encoder=semantic_encoder    # Model for semantic analysis
)

# Embed with semantic chunking
data = embed_anything.embed_file("test_files/document.pdf", embedder=model, config=config)

# Chunks will be split at semantically meaningful boundaries
for item in data:
    print(f"Chunk: {item.text[:200]}...")
    print("---" * 20)

For Late-Chunking

Late-chunking splits text into smaller units first, then combines them during embedding for better context preservation.

from embed_anything import EmbeddingModel, WhichModel, TextEmbedConfig, EmbedData

# Load your embedding model
model = EmbeddingModel.from_pretrained_hf(
    WhichModel.Bert,
    model_id="sentence-transformers/all-MiniLM-L12-v2"
)

# Configure late-chunking
config = TextEmbedConfig(
    chunk_size=1000,              # Maximum chunk size
    batch_size=8,                 # Batch size for processing
    splitting_strategy="sentence", # Split by sentences first
    late_chunking=True,           # Enable late-chunking
)

# Embed a file with late-chunking
data: list[EmbedData] = model.embed_file("test_files/attention.pdf", config=config)

# Late-chunking helps preserve context across sentence boundaries
for item in data:
    print(f"Text: {item.text}")
    print(f"Embedding dimension: {len(item.embedding)}")
    print("---" * 20)

🧑‍🚀 Getting Started

💚 Installation

pip install embed-anything

For GPUs and special models like ColPali:

pip install embed-anything-gpu

🚧❌ If you see a CUDA error while running on Windows, run the following before using the library:

import os
os.add_dll_directory("C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v12.6/bin")

📒 Notebooks

End-to-End Retrieval and Reranking using VectorDB Adapters
ColPali-Onnx
Adapters
Qwen3 Embeddings
Benchmarks

Usage

➡️ Usage for version 0.3 and later

Basic Text Embedding

from embed_anything import EmbeddingModel, WhichModel, TextEmbedConfig
import embed_anything

# Load a model from Hugging Face
model = EmbeddingModel.from_pretrained_local(
    WhichModel.Bert, 
    model_id="sentence-transformers/all-MiniLM-L12-v2"
)

# Simple file embedding with default config
data = embed_anything.embed_file("test_files/test.pdf", embedder=model)

# Access results
for item in data:
    print(f"Text chunk: {item.text[:100]}...")
    print(f"Embedding shape: {len(item.embedding)}")

Advanced Usage with Configuration

from embed_anything import EmbeddingModel, WhichModel, TextEmbedConfig
import embed_anything

# Load model
model = EmbeddingModel.from_pretrained_local(
    WhichModel.Jina,
    model_id="jinaai/jina-embeddings-v2-small-en"
)

# Configure embedding parameters
config = TextEmbedConfig(
    chunk_size=1000,              # Characters per chunk
    batch_size=32,                # Process 32 chunks at once
    buffer_size=64,               # Buffer size for streaming
    splitting_strategy="sentence" # Split by sentences
)

# Embed with custom configuration
data = embed_anything.embed_file(
    "test_files/document.pdf", 
    embedder=model, 
    config=config
)

# Process embeddings
for item in data:
    print(f"Chunk: {item.text}")
    print(f"Metadata: {item.metadata}")

Embedding Queries

from embed_anything import EmbeddingModel, WhichModel
import embed_anything
import numpy as np

# Load model
model = EmbeddingModel.from_pretrained_local(
    WhichModel.Bert,
    model_id="sentence-transformers/all-MiniLM-L12-v2"
)

# Embed a query
queries = ["What is machine learning?", "How does neural networks work?"]
query_embeddings = embed_anything.embed_query(queries, embedder=model)

# Use embeddings for similarity search
for i, query_emb in enumerate(query_embeddings):
    print(f"Query: {queries[i]}")
    print(f"Embedding shape: {len(query_emb.embedding)}")

Embedding Directories

from embed_anything import EmbeddingModel, WhichModel, TextEmbedConfig
import embed_anything

# Load model
model = EmbeddingModel.from_pretrained_local(
    WhichModel.Bert,
    model_id="sentence-transformers/all-MiniLM-L12-v2"
)

# Configure
config = TextEmbedConfig(chunk_size=1000, batch_size=32)

# Embed all files in a directory
data = embed_anything.embed_directory(
    "test_files/", 
    embedder=model, 
    config=config
)

print(f"Total chunks: {len(data)}")

Using ONNX Models

ONNX models provide faster inference and lower memory usage. You can use pre-configured models via the ONNXModel enum or load custom ONNX models.

Using Pre-configured ONNX Models (Recommended)

from embed_anything import EmbeddingModel, WhichModel, ONNXModel, Dtype, TextEmbedConfig
import embed_anything

# Use a pre-configured ONNX model (tested and optimized)
model = EmbeddingModel.from_pretrained_onnx(
    WhichModel.Bert,
    model_id=ONNXModel.BGESmallENV15Q,  # Quantized BGE model
    dtype=Dtype.Q4F16                    # Quantized 4-bit float16
)

# Embed files
config = TextEmbedConfig(chunk_size=1000, batch_size=32)
data = embed_anything.embed_file("test_files/document.pdf", embedder=model, config=config)

Using Custom ONNX Models

For custom or fine-tuned models, specify the Hugging Face model ID and path to the ONNX file:

from embed_anything import EmbeddingModel, WhichModel, Dtype

# Load a custom ONNX model from Hugging Face
model = EmbeddingModel.from_pretrained_onnx(
    WhichModel.Jina,
    hf_model_id="jinaai/jina-embeddings-v2-small-en",
    path_in_repo="model.onnx",  # Path to ONNX file in the repo
    dtype=Dtype.F16              # Use half precision
)

# Use the model
data = embed_anything.embed_file("test_files/document.pdf", embedder=model)

Note: Using pre-configured models (via ONNXModel enum) is recommended as these models are tested and optimized. For a complete list of supported ONNX models, see ONNX Models Guide.

⁉️FAQ

Do I need to know Rust to use or contribute to EmbedAnything?

No. EmbedAnything provides PyO3 bindings, so you can run every function from Python without any issues. To contribute, check out our guidelines and the adapter examples in the python folder.

How is it different from fastembed?

We provide both backends, Candle and ONNX. On top of that, we provide an end-to-end pipeline: you can ingest different data types, run inference with any supported model, and index to any vector database. Fastembed is only an ONNX wrapper.

We've received quite a few questions about why we're using Candle.

One of the main reasons is that Candle doesn't require models in a specific ONNX format, which means it works seamlessly with any Hugging Face model. This flexibility has been a key factor for us. However, we also recognize that we've been trading a bit of speed for that flexibility.

🚧 Contributing to EmbedAnything

First of all, thank you for taking the time to contribute to this project. We truly appreciate your contributions, whether it's bug reports, feature suggestions, or pull requests. Your time and effort are highly valued in this project. 🚀

This document provides guidelines and best practices to help you contribute effectively. These are meant to serve as guidelines, not strict rules. We encourage you to use your best judgment and feel comfortable proposing changes to this document through a pull request.

  • Roadmap
  • Quick Start
  • Guidelines

🏎️ RoadMap

Accomplishments

One of the aims of EmbedAnything is to let AI engineers easily use state-of-the-art embedding models on typical files and documents. A lot has already been accomplished here; these are the formats we support right now, with a few more still to come.

🖼️ Modalities and Source

We're excited to share that we've expanded our platform to support multiple modalities, including:

• Audio files

• Markdowns

• Websites

• Images

• Videos

• Graphs

This gives you the flexibility to work with various data types all in one place! 🌐

⚙️ Performance

We now support both the Candle and ONNX backends.
➡️ Support for GGUF models

🫐 Embeddings

We have had multimodality in our infrastructure from day one. We have already included it for websites, images, and audio, but we want to expand it further to:

➡️ Graph embedding: build DeepWalk embeddings (depth-first walks) and word2vec
➡️ Video embedding
➡️ Yolo-Clip

🌊 Expansion to other Vector Adapters

We currently support a wide range of vector databases for streaming embeddings, including:

• Elastic: thanks to the amazing and active Elastic team for the contribution
• Weaviate
• Pinecone
• Qdrant
• Milvus
• Chroma

How to add an adapter: https://starlight-search.com/blog/2024/02/25/adapter-development-guide.md
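
In essence, an adapter is a small class that converts EmbedData into your database's record format and upserts each batch as it arrives. The class and method names below are illustrative assumptions, not the exact interface; follow the guide above for the real one.

class MyVectorDBAdapter:
    """Illustrative adapter: receives EmbedData batches and writes them to a vector DB."""

    def __init__(self, client, collection):
        self.client = client
        self.collection = collection

    def convert(self, embeddings):
        # Map EmbedData (text, embedding, metadata) to the DB's record format.
        return [
            {"vector": e.embedding, "text": e.text, "metadata": e.metadata}
            for e in embeddings
        ]

    def upsert(self, embeddings):
        # Called per batch as embeddings are produced, so nothing accumulates in memory.
        self.client.upsert(self.collection, self.convert(embeddings))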

💥 Create WASM demos to integrate EmbedAnything directly into the browser.

💜 Add support for ingestion from remote sources

➡️ Support for S3 buckets
➡️ Support for Azure Storage
➡️ Support for Google Drive / Dropbox

But we're not stopping there! We're actively working to expand this list.

Want to contribute? If you'd like to add support for your favorite vector database, we'd love to have your help! Check out our contribution.md for guidelines, or feel free to reach out directly at turingatverge@gmail.com. Let's build something amazing together! 💡

AWESOME projects built on EmbedAnything

1. A Rust-based, Cursor-like chat-with-your-codebase tool: https://github.com/timpratim/cargo-chat
2. A simple vector-based search engine that also supports ordinary text search: https://github.com/szuwgh/vectorbase2
3. A semantic file tracker CLI, operated through a daemon and built with Rust: https://github.com/sam-salehi/sophist
4. FogX-Store, a dataset store service that collects and serves large robotics datasets: https://github.com/J-HowHuang/FogX-Store
5. A Dart wrapper for the EmbedAnything crate: https://github.com/cotw-fabier/embedanythingindart
6. Generate embeddings in Rust with Tauri on macOS: https://github.com/do-me/tauri-embedanything-ios
7. RAG with EmbedAnything and Milvus: https://milvus.io/docs/v2.5.x/build_RAG_with_milvus_and_embedAnything.md

A big Thank you to all our StarGazers

Star History

Star History Chart
