
Indexify Python SDK


This is the Python SDK for building real-time, continuously running unstructured data processing pipelines with Indexify.

Start by writing and testing your pipelines locally on your own data, then deploy them to the Indexify service to process data in real time, at scale.

Installation

pip install indexify

Examples

PDF Document Extraction

  1. Extracts text, tables, and images from an ingested PDF file
  2. Indexes the text using MiniLM-L6-v2 and the images with CLIP
  3. Writes the results into a vector database (a sketch of how such a graph might be wired follows this list)
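
As a rough sketch, not the actual example code: a PDF graph like this could be wired with the same SDK primitives shown in the Quick Start below. The function names and bodies here are hypothetical placeholders.

from typing import Any, Dict, List
from pydantic import BaseModel
from indexify import Graph, indexify_function

class PDFElement(BaseModel):
    kind: str  # "text", "table", or "image"
    content: str
    metadata: Dict[str, Any]

@indexify_function()
def extract_elements(pdf_path: str) -> List[PDFElement]:
    # Hypothetical: parse the PDF and return its text, tables, and images.
    # Returning a list fans out: each element is indexed in parallel.
    ...

@indexify_function()
def index_element(element: PDFElement) -> PDFElement:
    # Hypothetical: embed text with MiniLM-L6-v2 and images with CLIP.
    ...

@indexify_function()
def write_to_vector_db(element: PDFElement):
    # Hypothetical: upsert the embedding into your vector database.
    ...

g = Graph(name="pdf_extraction", start_node=extract_elements)
g.add_edge(extract_elements, index_element)
g.add_edge(index_element, write_to_vector_db)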

YouTube Transcription Summarizer

  1. Downloads the YouTube video
  2. Extracts audio from the video and transcribes it using Faster Whisper
  3. Uses Llama 3.1, served by llama.cpp, to understand and classify the nature of the video
  4. Dynamically routes the transcription to one of several summarizers, each retaining summarization attributes specific to that kind of video (see the sketch after this list)
  5. Finally, the entire transcription is embedded and stored in a vector database for retrieval
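
Again a hedged sketch rather than the actual example code: the routing step assumes the SDK's @indexify_router decorator for dynamic edges, and every function name and body below is a hypothetical placeholder.

from pydantic import BaseModel
from indexify import Graph, indexify_function, indexify_router

class Transcript(BaseModel):
    text: str
    category: str = ""

@indexify_function()
def transcribe_video(url: str) -> Transcript:
    # Hypothetical: download the video, extract the audio,
    # and transcribe it with Faster Whisper.
    ...

@indexify_function()
def classify_video(transcript: Transcript) -> Transcript:
    # Hypothetical: ask Llama 3.1 (served by llama.cpp) what kind of video this is.
    ...

@indexify_function()
def summarize_lecture(transcript: Transcript) -> str:
    ...

@indexify_function()
def summarize_interview(transcript: Transcript) -> str:
    ...

# Assumed router primitive: picks the downstream summarizer at runtime
# so the summary retains attributes specific to the video type.
@indexify_router()
def route_summarizer(transcript: Transcript):
    if transcript.category == "lecture":
        return summarize_lecture
    return summarize_interview

g = Graph(name="yt_summarizer", start_node=transcribe_video)
g.add_edge(transcribe_video, classify_video)
g.add_edge(classify_video, route_summarizer)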

Quick Start

  1. Write data processing functions in Python, using Pydantic models to return complex data types from functions.
  2. Connect functions using a graph interface. Indexify automatically stores function outputs and passes them along to downstream functions.
  3. If a function returns a list, the downstream functions are called with each item of the list in parallel.
  4. The input of the first function becomes the input to the Graph's HTTP endpoint.

Functional Features

  1. There is no limit to the volume of data that can be ingested, since metadata and objects are stored in blob stores.
  2. The server can handle tens of thousands of files being ingested into graphs in parallel.
  3. The scheduler reacts to ingestion events in under 8 microseconds, making it suitable for workflows that need to run in real time.
  4. Batch ingestion is handled gracefully by batching ingested data and scheduling it for high throughput in production settings.
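
The example below ties these pieces together into a small text-processing graph: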
from typing import Any, Dict, List, Optional

from pydantic import BaseModel

from indexify import Graph, indexify_function

# Define function inputs and outputs
class Document(BaseModel):
    text: str
    metadata: Dict[str, Any]

class TextChunk(BaseModel):
    text: str
    metadata: Dict[str, Any]
    embedding: Optional[List[float]] = None


# Decorate a function which is going to be part of your data processing graph
@indexify_function()
def split_text(doc: Document) -> List[TextChunk]:
    # Returning a list fans out: downstream functions are called
    # with each chunk in parallel.
    midpoint = len(doc.text) // 2
    first_half = TextChunk(text=doc.text[:midpoint], metadata=doc.metadata)
    second_half = TextChunk(text=doc.text[midpoint:], metadata=doc.metadata)
    return [first_half, second_half]

# Any requirements specified here are automatically installed on production clusters
@indexify_function(requirements=["langchain_text_splitter"])
def compute_embedding(chunk: TextChunk) -> TextChunk:
    chunk.embedding = [0.1, 0.2, 0.3]  # placeholder; call your embedding model here
    return chunk

# You can constrain functions to run on specific executors 
@indexify_function(executor_runtime_name="postgres-driver-image")
def write_to_db(chunk: TextChunk):
    # Write to your favorite vector database
    ...

# Create a graph connecting the functions
g = Graph(name="my_graph", start_node=split_text)
g.add_edge(split_text, compute_embedding)
g.add_edge(compute_embedding, write_to_db)

Graph Execution

Every time the Graph is invoked, Indexify returns an Invocation ID, which can be used to check the status of the processing and to retrieve any outputs from the Graph.

Run the Graph Locally

from indexify import IndexifyClient

# local=True runs the graph in-process, without a deployed Indexify server
client = IndexifyClient(local=True)
client.register_graph(g)

# Invoke the graph with an input object, then fetch the outputs of this invocation
invocation_id = client.invoke_graph_with_object(g.name, Document(text="Hello, world!", metadata={"source": "test"}))
graph_outputs = client.graph_outputs(g.name, invocation_id)

Deploy the Graph to Indexify Server for Production

Work in progress: the version of the server that works with Python-based graphs hasn't been released yet; it will be released shortly. Join the Discord for development updates.

from indexify import IndexifyClient

client = IndexifyClient(service_url="http://localhost:8900")
client.register_graph(g)

Ingestion into the Service

Extraction Graphs run continuously on the Indexify service, like any other web service. The Indexify server runs extraction graphs in parallel and in real time as new data is ingested into the service.

output_id = client.invoke_graph_with_object(g.name, Document(text="Hello, world!", metadata={"source": "test"}))

Retrieve Graph Outputs for a given ingestion object

graph_outputs = client.graph_outputs(g.name, output_id)

Retrieve All Graph Inputs

graph_inputs = client.graph_inputs(g.name)

