Integration of TextEmbed with llama-index for embeddings.

TextEmbed - Embedding Inference Server

Maintained by Keval Dekivadiya, TextEmbed is licensed under the Apache-2.0 License.

TextEmbed is a high-throughput, low-latency REST API designed for serving vector embeddings. It supports a wide range of sentence-transformer models and frameworks, making it suitable for various applications in natural language processing.

Features

  • High Throughput & Low Latency: Handles a large volume of embedding requests efficiently.
  • Flexible Model Support: Works with a wide range of sentence-transformer models.
  • Scalable: Integrates easily into larger systems and scales with demand.
  • Batch Processing: Batches requests for higher-throughput inference.
  • OpenAI-Compatible REST API: Exposes an endpoint compatible with the OpenAI embeddings API.
  • Single-Command Deployment: Serve multiple models from a single command.
  • Multiple Embedding Formats: Supports binary, float16, and float32 embedding formats for faster retrieval.

Getting Started

Prerequisites

Ensure you have Python 3.10 or higher installed, then install the required dependencies as shown in the next step.

Installation via PyPI

Install the TextEmbed server along with the llama-index integration package:

pip install -U textembed llama-index-embeddings-textembed

Start the TextEmbed Server

Start the TextEmbed server with your desired models:

python -m textembed.server --models sentence-transformers/all-MiniLM-L12-v2 --workers 4 --api-key TextEmbed
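Because the endpoint is OpenAI-compatible, you can also call it directly over HTTP without any client library. Below is a minimal sketch using only the Python standard library; the `/v1/embeddings` path and the request shape follow the OpenAI embeddings API convention, so adjust them if your server is configured differently:

```python
import json
import urllib.request

# Request payload in the OpenAI embeddings format; the model name and
# API key match the server command above.
payload = {
    "model": "sentence-transformers/all-MiniLM-L12-v2",
    "input": ["It is raining cats and dogs here!"],
}

request = urllib.request.Request(
    "http://0.0.0.0:8000/v1/embeddings",
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Content-Type": "application/json",
        "Authorization": "Bearer TextEmbed",  # the --api-key value
    },
)

# Uncomment once the server is running:
# with urllib.request.urlopen(request) as response:
#     result = json.load(response)
```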

Example Usage with llama-index

Here's a simple example to get you started with llama-index:

from llama_index.embeddings.textembed import TextEmbedEmbedding

# Initialize the TextEmbedEmbedding class
embed = TextEmbedEmbedding(
    model_name="sentence-transformers/all-MiniLM-L12-v2",
    base_url="http://0.0.0.0:8000/v1",
    auth_token="TextEmbed",
)

# Get embeddings for a batch of texts
embeddings = embed.get_text_embedding_batch(
    [
        "It is raining cats and dogs here!",
        "India has a diverse cultural heritage.",
    ]
)

print(embeddings)
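The call returns one float vector per input text. A common next step is ranking documents against a query by cosine similarity; here is a minimal sketch, using toy vectors in place of real embedding output:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length float vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy 3-dimensional vectors standing in for real embeddings.
query_vec = [1.0, 0.0, 1.0]
doc_vec = [1.0, 0.0, 0.0]
print(cosine_similarity(query_vec, doc_vec))
```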

For more information, please read the documentation.
