

Indexify Python SDK


This is the Python SDK for building real-time, continuously running unstructured data processing pipelines with Indexify.

Start by writing and testing your pipelines locally with your own data, then deploy them to the Indexify service to process data in real time at scale.

Installation

pip install indexify

Examples

PDF Document Extraction

  1. Extracts text, tables, and images from an ingested PDF file
  2. Indexes the text using MiniLM-L6-v2 and the images with CLIP
  3. Writes the results into a vector database (a wiring sketch follows this list)
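
A minimal sketch of how such a graph could be wired with this SDK. All function names and bodies below are illustrative assumptions, not the shipped example code:

from typing import Any, Dict, List
from pydantic import BaseModel
from indexify import Graph, indexify_function

class PDFChunk(BaseModel):
    text: str
    metadata: Dict[str, Any]

@indexify_function()
def extract_pdf(path: str) -> List[PDFChunk]:
    # Stand-in for a real parser that pulls text, tables, and images out of the PDF
    return [PDFChunk(text="page 1 text", metadata={"page": 1})]

@indexify_function()
def embed_chunk(chunk: PDFChunk) -> PDFChunk:
    # Stand-in for a MiniLM-L6-v2 (text) or CLIP (image) embedding call
    chunk.metadata["embedding"] = [0.1, 0.2, 0.3]
    return chunk

@indexify_function()
def write_to_vector_db(chunk: PDFChunk):
    # Stand-in for a write into your vector database
    ...

pdf_graph = Graph(name="pdf_extraction", start_node=extract_pdf)
pdf_graph.add_edge(extract_pdf, embed_chunk)
pdf_graph.add_edge(embed_chunk, write_to_vector_db)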

YouTube Transcription Summarizer

  1. Downloads the YouTube video
  2. Extracts audio from the video and transcribes it using Faster Whisper
  3. Uses Llama 3.1, served by Llama.cpp, to understand and classify the nature of the video
  4. Dynamically routes the transcription to one of several summarizers, each retaining different summarization attributes (a routing sketch follows this list)
  5. Finally, embeds the entire transcription and stores it in a vector database for retrieval
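
A condensed sketch of the dynamic routing step. The summarizer logic is invented for illustration, and indexify_router is an assumption about this SDK version's routing decorator:

from typing import List, Union
from indexify import indexify_function, indexify_router

# Two summarizers retaining different summarization attributes (illustrative bodies)
@indexify_function()
def summarize_talk(transcript: str) -> str:
    return "summary focused on the speaker's key arguments"

@indexify_function()
def summarize_tutorial(transcript: str) -> str:
    return "summary focused on step-by-step instructions"

# The router inspects the transcript and selects a downstream summarizer at
# runtime; `indexify_router` is an assumption about the SDK API
@indexify_router()
def route_by_video_type(transcript: str) -> List[Union[summarize_talk, summarize_tutorial]]:
    if "step 1" in transcript.lower():
        return [summarize_tutorial]
    return [summarize_talk]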

Quick Start

  1. Write data processing functions in Python, using Pydantic models to return complex data types from functions
  2. Connect functions using a graph interface. Indexify automatically stores function outputs and passes them along to downstream functions.
  3. If a function returns a list, the downstream functions are called with each item of the list in parallel.
  4. The input of the first function becomes the input to the HTTP endpoint of the Graph.

Functional Features

  1. There is no limit to the volume of data ingested, since blob stores are used for storing metadata and objects.
  2. The server can handle tens of thousands of files being ingested into graphs in parallel.
  3. The scheduler reacts to ingestion events in under 8 microseconds, making it suitable for workflows that need to run in real time.
  4. Batch ingestion is handled gracefully by batching ingested data and scheduling it for high throughput in production settings.

The example below walks through the Quick Start steps end to end:
from pydantic import BaseModel
from indexify import indexify_function
from typing import Dict, Any, Optional, List

# Define function inputs and outputs
class Document(BaseModel):
    text: str
    metadata: Dict[str, Any]

class TextChunk(BaseModel):
    text: str
    metadata: Dict[str, Any]
    embedding: Optional[List[float]] = None


# Decorate a function which is going to be part of your data processing graph
@indexify_function()
def split_text(doc: Document) -> List[TextChunk]:
    midpoint = len(doc.text) // 2
    first_half = TextChunk(text=doc.text[:midpoint], metadata=doc.metadata)
    second_half = TextChunk(text=doc.text[midpoint:], metadata=doc.metadata)
    return [first_half, second_half]

# Any requirements specified are automatically installed on production clusters
@indexify_function(requirements=["langchain_text_splitter"])
def compute_embedding(chunk: TextChunk) -> TextChunk:
    chunk.embedding = [0.1, 0.2, 0.3]
    return chunk

# You can constrain functions to run on specific executors 
@indexify_function(executor_runtime_name="postgres-driver-image")
def write_to_db(chunk: TextChunk):
    # Write to your favorite vector database
    ...

# Create a graph
from indexify import Graph

g = Graph(name="my_graph", start_node=split_text)
g.add_edge(split_text, compute_embedding)
g.add_edge(compute_embedding, write_to_db)

Graph Execution

Every time the Graph is invoked, Indexify returns an Invocation ID, which can be used to check the status of the processing and to retrieve any outputs from the Graph.

Run the Graph Locally

from indexify import IndexifyClient

# Run the graph in-process, without connecting to a server
client = IndexifyClient(local=True)
client.register_graph(g)

# Invoke the graph with a Document and fetch the outputs for that invocation
invocation_id = client.invoke_graph_with_object(
    g.name, Document(text="Hello, world!", metadata={"source": "test"})
)
graph_outputs = client.graph_outputs(g.name, invocation_id)

Deploy the Graph to Indexify Server for Production

Work In Progress - The version of the server that works with Python-based graphs hasn't been released yet. It will be released shortly. Join Discord for development updates.

from indexify import IndexifyClient

client = IndexifyClient(service_url="http://localhost:8900")
client.register_graph(g)

Ingestion into the Service

Extraction Graphs run continuously on the Indexify Service like any other web service. The Indexify Server runs extraction graphs in parallel and in real time as new data is ingested into the service.

output_id = client.invoke_graph_with_object(g.name, Document(text="Hello, world!", metadata={"source": "test"}))

Retrieve Graph Outputs for a given ingestion object

graph_outputs = client.graph_outputs(g.name, output_id)

Retrieve All Graph Inputs

graph_inputs = client.graph_inputs(g.name)
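
Putting the two retrieval calls together, assuming graph_inputs returns the invocation IDs of the ingested objects (an assumption about its return type):

# Assumption: graph_inputs yields an invocation ID per ingested object
for invocation_id in client.graph_inputs(g.name):
    outputs = client.graph_outputs(g.name, invocation_id)
    print(invocation_id, outputs)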

