Project description

A LiteLLM LLM component for Neo4j Graph RAG (Retrieval-Augmented Generation) system.

Overview

neo4j_litellm is a Python package that provides a unified interface for integrating various Large Language Models (LLMs) with the Neo4j GraphRAG framework via the LiteLLM library. It supports both synchronous and asynchronous model invocation, with optional chat history and system instructions.

Features

  • Unified LLM Interface: Compatible with multiple LLM providers through LiteLLM

  • Neo4j GraphRAG Integration: Implements the LLMInterface from neo4j_graphrag

  • Sync & Async Support: Both invoke() and ainvoke() methods available

  • Chat History Support: Maintain conversation context with message history

  • System Instructions: Support for system prompts and instructions

  • Flexible Configuration: Configurable provider, model, API endpoints, and keys

Installation

pip install neo4j_litellm

Dependencies

  • litellm>=1.77.5 - Unified LLM interface library

  • neo4j_graphrag>=1.9.0 - Neo4j Graph RAG framework

Quick Start

Basic Usage

from neo4j_litellm import LiteLLMInterface, ChatHistory

# Initialize the LLM interface
llm = LiteLLMInterface(
    provider="openai",        # LLM provider (e.g., openai, anthropic, azure, etc.)
    model_name="gpt-3.5-turbo",  # Model name
    base_url="https://api.openai.com/v1",  # API base URL
    api_key="your-api-key-here"  # API key
)

# Simple invocation
response = llm.invoke("Hello, how are you?")
print(response.content)

With Chat History

from neo4j_litellm import LiteLLMInterface, ChatHistory
from typing import List

llm = LiteLLMInterface(
    provider="openai",
    model_name="gpt-3.5-turbo",
    base_url="https://api.openai.com/v1",
    api_key="your-api-key-here"
)

# Create chat history
message_history: List[ChatHistory] = [
    {"role": "user", "content": "What's the capital of France?"},
    {"role": "assistant", "content": "The capital of France is Paris."}
]

# Invoke with chat history
response = llm.invoke(
    input="Tell me more about Paris",
    message_history=message_history
)
print(response.content)

With System Instruction

llm = LiteLLMInterface(
    provider="openai",
    model_name="gpt-3.5-turbo",
    base_url="https://api.openai.com/v1",
    api_key="your-api-key-here"
)

response = llm.invoke(
    input="Explain quantum computing",
    system_instruction="You are a helpful physics tutor. Provide clear explanations."
)
print(response.content)

Async Usage

import asyncio
from neo4j_litellm import LiteLLMInterface

async def main():
    llm = LiteLLMInterface(
        provider="openai",
        model_name="gpt-3.5-turbo",
        base_url="https://api.openai.com/v1",
        api_key="your-api-key-here"
    )

    response = await llm.ainvoke("Hello from async!")
    print(response.content)

# Run async function
asyncio.run(main())
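One advantage of ainvoke() is fan-out: several prompts can be issued concurrently with asyncio.gather rather than awaited one at a time. A minimal sketch of the pattern, using a stand-in coroutine in place of llm.ainvoke (swap in the real method when you have credentials):

```python
import asyncio

async def _fake_ainvoke(prompt: str) -> str:
    # Stand-in for llm.ainvoke(prompt); returns the would-be response content.
    await asyncio.sleep(0)
    return f"echo: {prompt}"

async def ask_all(prompts):
    # Issue all prompts concurrently; results come back in input order.
    return await asyncio.gather(*(_fake_ainvoke(p) for p in prompts))

answers = asyncio.run(ask_all(["a", "b", "c"]))
```

With the real interface, replace `_fake_ainvoke(p)` with `llm.ainvoke(p)` and read `.content` from each returned LLMResponse.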

API Reference

LiteLLMInterface Class

Constructor

LiteLLMInterface(provider: str, model_name: str, base_url: str, api_key: str)
  • provider: LLM provider name (e.g., "openai", "anthropic", "azure")

  • model_name: Specific model name (e.g., "gpt-3.5-turbo", "claude-3-sonnet")

  • base_url: API endpoint URL

  • api_key: Authentication API key

Methods

invoke(input: str, message_history: Optional[List[ChatHistory]] = None, system_instruction: Optional[str] = None, timeout: int = 5) -> LLMResponse

Synchronous method to invoke the LLM.

  • input: User input text

  • message_history: Optional list of chat history messages

  • system_instruction: Optional system prompt

  • timeout: Request timeout in seconds (default: 5)

  • Returns: LLMResponse object with content field

ainvoke(input: str, message_history: Optional[List[ChatHistory]] = None, system_instruction: Optional[str] = None, timeout: int = 5) -> LLMResponse

Asynchronous method to invoke the LLM.

  • Parameters same as invoke()

  • Returns: LLMResponse object with content field
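Because invoke() accepts a timeout, a caller may want to retry slow requests with progressively longer limits. A minimal sketch of that pattern; the retry helper and the stub LLM below are illustrative only (not part of the package), and the exception caught should match whatever your provider raises on timeout:

```python
from typing import Optional

def invoke_with_retry(llm, prompt: str, timeouts=(5, 15, 60)):
    """Call llm.invoke with increasing timeouts, re-raising after the last attempt."""
    last_err: Optional[Exception] = None
    for t in timeouts:
        try:
            return llm.invoke(prompt, timeout=t)
        except TimeoutError as err:  # substitute your provider's timeout exception
            last_err = err
    raise last_err

# Stub standing in for LiteLLMInterface, purely to show the control flow.
class _StubLLM:
    def __init__(self):
        self.calls = 0

    def invoke(self, prompt, timeout=5):
        self.calls += 1
        if timeout < 60:
            raise TimeoutError("timed out")
        return type("LLMResponse", (), {"content": f"answered after {self.calls} tries"})()

llm = _StubLLM()
response = invoke_with_retry(llm, "ping")
```

In real use, pass a LiteLLMInterface instance as `llm`; the helper is agnostic to the object as long as it exposes `invoke(prompt, timeout=...)`.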

ChatHistory Type

class ChatHistory(TypedDict):
    role: str    # "system", "assistant", or "user"
    content: str # Message content
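Since ChatHistory is a TypedDict, histories are plain dicts at runtime. A small sketch of building one with role validation; `make_history` is a hypothetical helper, not part of the package, and ChatHistory is redefined locally here only to keep the sketch self-contained (in practice, import it from neo4j_litellm):

```python
from typing import List, Tuple, TypedDict

class ChatHistory(TypedDict):  # local copy for a self-contained example
    role: str
    content: str

def make_history(*turns: Tuple[str, str]) -> List[ChatHistory]:
    """Build a ChatHistory list from (role, content) pairs, validating roles."""
    allowed = {"system", "assistant", "user"}
    history: List[ChatHistory] = []
    for role, content in turns:
        if role not in allowed:
            raise ValueError(f"invalid role: {role!r}")
        history.append({"role": role, "content": content})
    return history

history = make_history(
    ("user", "What's the capital of France?"),
    ("assistant", "The capital of France is Paris."),
)
```

The resulting list can be passed directly as the `message_history` argument of invoke() or ainvoke().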

Supported LLM Providers

This package supports all LLM providers supported by LiteLLM, including:

  • OpenAI

  • Anthropic

  • Azure OpenAI

  • Google AI (Gemini)

  • Cohere

  • Hugging Face

  • Dashscope

  • And many more…

Refer to the LiteLLM documentation for the complete list of supported providers.
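Because the constructor takes the provider, model, and endpoint as plain arguments, switching providers is a matter of swapping configuration. A sketch of keeping per-provider presets in one place; the model names and base URLs below are examples only, and `build_llm_kwargs` is a hypothetical helper:

```python
# Illustrative provider presets; model names and base URLs are examples only.
PROVIDER_PRESETS = {
    "openai": {"model_name": "gpt-3.5-turbo", "base_url": "https://api.openai.com/v1"},
    "anthropic": {"model_name": "claude-3-sonnet", "base_url": "https://api.anthropic.com"},
}

def build_llm_kwargs(provider: str, api_key: str) -> dict:
    """Merge a preset with the caller's key into LiteLLMInterface keyword arguments."""
    preset = PROVIDER_PRESETS[provider]
    return {"provider": provider, "api_key": api_key, **preset}

kwargs = build_llm_kwargs("anthropic", "your-api-key-here")
# llm = LiteLLMInterface(**kwargs)  # left commented: requires a real key
```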

Integration with Neo4j GraphRAG

This package implements the LLMInterface from neo4j_graphrag, making it compatible with Neo4j's GraphRAG framework for building knowledge-graph-powered retrieval-augmented generation applications. Here is an example of integrating LiteLLMInterface with Neo4j GraphRAG:

from neo4j import GraphDatabase
from neo4j_graphrag.retrievers import VectorRetriever
from neo4j_litellm import LiteLLMInterface
from neo4j_graphrag.generation import GraphRAG
from neo4j_graphrag.embeddings import OpenAIEmbeddings

# 1. Neo4j driver
URI = "neo4j://localhost:7687"  # adjust to your Neo4j instance
AUTH = ("neo4j", "password")

INDEX_NAME = "index-name"

# Connect to Neo4j database
driver = GraphDatabase.driver(URI, auth=AUTH)

# 2. Retriever
# Create Embedder object, needed to convert the user question (text) to a vector
embedder = OpenAIEmbeddings(model="text-embedding-3-large")

# Initialize the retriever
retriever = VectorRetriever(driver, INDEX_NAME, embedder)

# 3. LLM
llm = LiteLLMInterface(
    provider="openai",
    model_name="gpt-3.5-turbo",
    base_url="https://api.openai.com/v1",
    api_key="your-api-key-here"
)

# Initialize the RAG pipeline
rag = GraphRAG(retriever=retriever, llm=llm)

# Query the graph
query_text = "How do I do similarity search in Neo4j?"
response = rag.search(query_text=query_text, retriever_config={"top_k": 5})
print(response.answer)

License

MIT License

Author

1Vewton.zh-n (zhanyunze0601@gmail.com)

Contributing

Contributions are welcome! Please feel free to submit issues and pull requests.
