Embed anything at lightning speed

A framework for local and multimodal embeddings, built in Rust 🦀

EmbedAnything is a powerful Python library designed to streamline the creation and management of embedding pipelines. Whether you're working with text, images, audio, or any other type of data, EmbedAnything makes it easy to generate embeddings from multiple sources and store them efficiently in a vector database.

🦀 The Benefits of Rust for Speed

By using Rust for its core functionality, EmbedAnything offers significant speed advantages:

  • Rust is Compiled: Unlike Python, Rust compiles directly to machine code, resulting in faster execution.
  • Memory Management: Rust enforces strict memory management at compile time, preventing the memory leaks and crashes that can plague other languages.
  • True Multithreading: Rust achieves true multithreading, with no global interpreter lock limiting parallelism.

🚀 Why Candle?

Running language models or embedding models locally can be difficult, especially when you want to deploy a product that utilizes these models. If you use the transformers library from Hugging Face in Python, you will depend on PyTorch for tensor operations. This, in turn, has a dependency on Libtorch, which means that you will need to ship the entire Libtorch library with your product. Candle also allows inference on CUDA-enabled GPUs right out of the box. We will soon post on how we use Candle to increase the performance and decrease the memory usage of EmbedAnything.

Examples

  1. Image Search: Open in Colab

Watch the demo

🚀 Key Features

  • Local Embedding: Works with local embedding models such as AllMiniLM.
  • Multimodality: Works with text and images, and will soon expand to audio.
  • Python Interface: Packaged as a Python library for seamless integration into your existing projects.
  • Efficient: Optimized for speed and performance, with core functionality written in Rust.
  • Scalable: Store embeddings in a vector database for easy retrieval and scalability.
  • OpenAI: Works with OpenAI embeddings as well.

💚 Installation

pip install embed-anything

🧑‍🚀 Getting Started

For local models

For local embeddings, we support Bert and Jina models:

import embed_anything
import numpy as np

# Embed each chunk of the PDF with a local Bert model
data = embed_anything.embed_file("filename.pdf", embeder="Bert")
embeddings = np.array([item.embedding for item in data])
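
As a quick follow-up, those embeddings can power a simple semantic search over the document. The sketch below continues from the example above and assumes that embed_query also accepts embeder="Bert" (this page only shows it with Clip) and that each item's text field holds the chunk text; treat both as assumptions rather than documented behavior.

# Continues from the example above: `data` and `embeddings` already exist.
# Assumption: embed_query also supports the "Bert" embeder.
query_embedding = np.array(
    embed_anything.embed_query("what is this document about?", embeder="Bert")[0].embedding
)

# Rank chunks by dot-product similarity and show the best match
best = int(np.argmax(embeddings @ query_embedding))
print(data[best].text)  # assumption: `text` holds the chunk's text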

For multimodal embeddings, we support CLIP.

Requirements: a directory with the pictures you want to search. As an example, we use test_files, which contains images of cats, dogs, etc.

import embed_anything
import numpy as np
from PIL import Image

# Embed every image in the directory with CLIP
data = embed_anything.embed_directory("test_files", embeder="Clip")
embeddings = np.array([item.embedding for item in data])

# Embed the text query and rank images by dot-product similarity
query = "photo of a dog"
query_embedding = np.array(embed_anything.embed_query(query, embeder="Clip")[0].embedding)
similarities = np.dot(embeddings, query_embedding)
max_index = np.argmax(similarities)

# Each item's `text` field holds the image's file path
Image.open(data[max_index].text).show()
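
A note on the similarity measure: the dot product above equals cosine similarity only when the embeddings are unit-normalized. If you are unsure whether the Clip embeddings come back normalized (this page does not say), a minimal sketch continuing from the example above normalizes them explicitly:

# Normalize image and query embeddings so the dot product becomes cosine similarity
image_norms = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
query_norm = query_embedding / np.linalg.norm(query_embedding)

cosine_similarities = image_norms @ query_norm
best_match = int(np.argmax(cosine_similarities))
Image.open(data[best_match].text).show()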

For OpenAI

  1. Please make sure your OpenAI API key is set as an environment variable.

If you are using embed-anything version 0.1.7 or later:

import embed_anything
import numpy as np

# Embed the PDF with OpenAI's embedding API (requires an API key in the environment)
data = embed_anything.embed_file("filename.pdf", embeder="OpenAI")
embeddings = np.array([item.embedding for item in data])
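
For supplying the key, a minimal sketch is below. It assumes EmbedAnything's OpenAI backend reads the standard OPENAI_API_KEY environment variable; the variable name follows OpenAI's usual convention and is an assumption, not something this page confirms.

import os

# Assumption: the OpenAI embeder picks up the standard OPENAI_API_KEY variable.
# Set it here for a quick test, or export it in your shell / deployment environment.
os.environ["OPENAI_API_KEY"] = "sk-..."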

🚧 Contributing to EmbedAnything

First of all, thank you for taking the time to contribute to this project. We truly appreciate your contributions, whether it's bug reports, feature suggestions, or pull requests. Your time and effort are highly valued in this project. 🚀

This document provides guidelines and best practices to help you to contribute effectively. These are meant to serve as guidelines, not strict rules. We encourage you to use your best judgment and feel comfortable proposing changes to this document through a pull request.

Table of Contents:

  1. [Code of Conduct]
  2. [Quick Start]

✔️ Code of Conduct:

Please read our [Code of Conduct] to understand the expectations we have for all contributors participating in this project. By participating, you agree to abide by our Code of Conduct.

🚀 Quick Start

You can quickly get started with contributing by searching for issues with the labels "Good First Issue" or "Help Needed" in the [Issues Section]. If you think you can contribute, comment on the issue and we will assign it to you.

To set up your development environment, please follow the steps below:

  1. Fork the repository and clone your fork.
  2. Create a branch in your fork for the feature or bug you are working on.
  3. If you are working with OpenAI, make sure you have an API key.

Contributing Guidelines

🔍 Reporting Bugs

  1. Use a title that describes the issue clearly and concisely, and add relevant labels.
  2. Provide a detailed description of the problem and the necessary steps to reproduce the issue.
  3. Include any relevant logs, screenshots, or other helpful information supporting the issue.

💡 Suggesting New Features or Enhancements

☑️ ToDo

  • Vector Database: Add functionality to integrate with any vector database (a minimal sketch of the intended flow follows below).
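
To illustrate where such an integration would plug in, here is a minimal sketch under stated assumptions: it uses a plain in-memory NumPy index as a stand-in for a real vector database, and the InMemoryVectorStore class with its upsert/query methods is hypothetical, not part of EmbedAnything or any existing client.

import embed_anything
import numpy as np

class InMemoryVectorStore:
    """Stand-in for a vector database: keeps vectors and payloads in memory."""

    def __init__(self):
        self.vectors = []
        self.payloads = []

    def upsert(self, vector, payload):
        # A real vector database would persist this and maintain an ANN index
        self.vectors.append(np.asarray(vector))
        self.payloads.append(payload)

    def query(self, vector, top_k=3):
        matrix = np.stack(self.vectors)
        scores = matrix @ np.asarray(vector)
        top = np.argsort(scores)[::-1][:top_k]
        return [(self.payloads[i], float(scores[i])) for i in top]

# Embed a document and push each chunk into the store
store = InMemoryVectorStore()
for item in embed_anything.embed_file("filename.pdf", embeder="Bert"):
    store.upsert(item.embedding, item.text)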
