A LiteLLM LLM component for Neo4j Graph RAG (Retrieval-Augmented Generation).

Overview

neo4j_litellm is a Python package that provides a unified interface for integrating various Large Language Models (LLMs) with the Neo4j GraphRAG framework via the LiteLLM library. It supports both synchronous and asynchronous model invocation, with optional chat history and system instructions.

Features

  • Unified LLM Interface: Compatible with multiple LLM providers through LiteLLM

  • Neo4j GraphRAG Integration: Implements the LLMInterface from neo4j_graphrag

  • Sync & Async Support: Both invoke() and ainvoke() methods available

  • Chat History Support: Maintain conversation context with message history

  • System Instructions: Support for system prompts and instructions

  • Flexible Configuration: Configurable provider, model, API endpoints, and keys

Installation

pip install neo4j_litellm

Dependencies

  • litellm>=1.77.5 - Unified LLM interface library

  • neo4j_graphrag>=1.9.0 - Neo4j Graph RAG framework

Quick Start

Basic Usage

from neo4j_litellm import LiteLLMInterface

# Initialize the LLM interface
llm = LiteLLMInterface(
    provider="openai",        # LLM provider (e.g., openai, anthropic, azure, etc.)
    model_name="gpt-3.5-turbo",  # Model name
    base_url="https://api.openai.com/v1",  # API base URL
    api_key="your-api-key-here"  # API key
)

# Simple invocation
response = llm.invoke("Hello, how are you?")
print(response.content)

With Chat History

from typing import List

from neo4j_litellm import LiteLLMInterface, ChatHistory

llm = LiteLLMInterface(
    provider="openai",
    model_name="gpt-3.5-turbo",
    base_url="https://api.openai.com/v1",
    api_key="your-api-key-here"
)

# Create chat history
message_history: List[ChatHistory] = [
    {"role": "user", "content": "What's the capital of France?"},
    {"role": "assistant", "content": "The capital of France is Paris."}
]

# Invoke with chat history
response = llm.invoke(
    input="Tell me more about Paris",
    message_history=message_history
)
print(response.content)

With System Instruction

llm = LiteLLMInterface(
    provider="openai",
    model_name="gpt-3.5-turbo",
    base_url="https://api.openai.com/v1",
    api_key="your-api-key-here"
)

response = llm.invoke(
    input="Explain quantum computing",
    system_instruction="You are a helpful physics tutor. Provide clear explanations."
)
print(response.content)

Async Usage

import asyncio
from neo4j_litellm import LiteLLMInterface

async def main():
    llm = LiteLLMInterface(
        provider="openai",
        model_name="gpt-3.5-turbo",
        base_url="https://api.openai.com/v1",
        api_key="your-api-key-here"
    )

    response = await llm.ainvoke("Hello from async!")
    print(response.content)

# Run async function
asyncio.run(main())
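
Because ainvoke() is a coroutine, several requests can also run concurrently with asyncio.gather rather than being awaited one at a time. A minimal sketch of that pattern, using a stand-in coroutine instead of a real LiteLLMInterface (the stub and its responses are illustrative, not part of the package):

```python
import asyncio

# Stand-in for llm.ainvoke(); a real call would await the provider's API.
async def fake_ainvoke(prompt: str) -> str:
    await asyncio.sleep(0)  # yield control, simulating network I/O
    return f"echo: {prompt}"

async def main() -> list[str]:
    # Fire several requests concurrently instead of awaiting them one by one
    return await asyncio.gather(
        fake_ainvoke("first"),
        fake_ainvoke("second"),
        fake_ainvoke("third"),
    )

results = asyncio.run(main())
print(results)
```

With a real instance, each fake_ainvoke call would simply become llm.ainvoke(...).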

API Reference

LiteLLMInterface Class

Constructor

LiteLLMInterface(provider: str, model_name: str, base_url: str, api_key: str)
  • provider: LLM provider name (e.g., "openai", "anthropic", "azure", etc.)

  • model_name: Specific model name (e.g., "gpt-3.5-turbo", "claude-3-sonnet")

  • base_url: API endpoint URL

  • api_key: Authentication API key

Methods

invoke(input: str, message_history: Optional[List[ChatHistory]] = None, system_instruction: Optional[str] = None) -> LLMResponse

Synchronous method to invoke the LLM.

  • input: User input text

  • message_history: Optional list of chat history messages

  • system_instruction: Optional system prompt

  • Returns: LLMResponse object with content field

ainvoke(input: str, message_history: Optional[List[ChatHistory]] = None, system_instruction: Optional[str] = None) -> LLMResponse

Asynchronous method to invoke the LLM.

  • Parameters same as invoke()

  • Returns: LLMResponse object with content field
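
Internally, a call to invoke() or ainvoke() has to flatten the system instruction, prior history, and the new input into a single provider-style messages list. The helper below is a hypothetical illustration of the usual ordering (system prompt first, then history, then the new user turn), not the package's actual implementation:

```python
from typing import Optional

def build_messages(
    input: str,
    message_history: Optional[list[dict]] = None,
    system_instruction: Optional[str] = None,
) -> list[dict]:
    """Assemble an OpenAI-style messages list: system, history, then new input."""
    messages: list[dict] = []
    if system_instruction:
        messages.append({"role": "system", "content": system_instruction})
    messages.extend(message_history or [])
    messages.append({"role": "user", "content": input})
    return messages

msgs = build_messages(
    "Tell me more about Paris",
    message_history=[
        {"role": "user", "content": "What's the capital of France?"},
        {"role": "assistant", "content": "The capital of France is Paris."},
    ],
    system_instruction="You are a concise geography tutor.",
)
print([m["role"] for m in msgs])  # ['system', 'user', 'assistant', 'user']
```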

ChatHistory Type

class ChatHistory(TypedDict):
    role: str    # "system", "assistant", or "user"
    content: str # Message content
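
Since ChatHistory is a plain TypedDict, history entries are ordinary dicts at runtime that static type checkers can still verify. A small self-contained sketch (redeclaring the type locally for illustration):

```python
from typing import TypedDict

class ChatHistory(TypedDict):
    role: str     # "system", "assistant", or "user"
    content: str  # Message content

history: list[ChatHistory] = [
    {"role": "user", "content": "What's the capital of France?"},
    {"role": "assistant", "content": "The capital of France is Paris."},
]

# TypedDict instances are plain dicts at runtime
assert all(isinstance(m, dict) for m in history)
print(history[0]["role"])  # user
```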

Supported LLM Providers

This package supports all LLM providers supported by LiteLLM, including:

  • OpenAI

  • Anthropic

  • Azure OpenAI

  • Google AI (Gemini)

  • Cohere

  • Hugging Face

  • Dashscope

  • And many more…

Refer to the LiteLLM documentation for the complete list of supported providers.
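
LiteLLM routes requests by a "provider/model" string (for example "anthropic/claude-3-sonnet"); presumably this package composes that identifier from its provider and model_name constructor arguments. A hedged sketch of that naming convention:

```python
def litellm_model_string(provider: str, model_name: str) -> str:
    """Compose the 'provider/model' identifier LiteLLM uses for routing.

    This mirrors LiteLLM's documented naming convention; whether
    neo4j_litellm builds the string exactly this way is an assumption.
    """
    return f"{provider}/{model_name}"

print(litellm_model_string("anthropic", "claude-3-sonnet"))
# anthropic/claude-3-sonnet
```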

Integration with Neo4j GraphRAG

This package implements the LLMInterface from neo4j_graphrag, making it compatible with Neo4j’s GraphRAG framework for building knowledge-graph-powered retrieval-augmented generation applications. Here is an example of integrating LiteLLMInterface into a Neo4j GraphRAG pipeline:

from neo4j import GraphDatabase
from neo4j_graphrag.retrievers import VectorRetriever
from neo4j_litellm import LiteLLMInterface
from neo4j_graphrag.generation import GraphRAG
from neo4j_graphrag.embeddings import OpenAIEmbeddings

# 1. Neo4j driver
URI = "neo4j://localhost:7687"
AUTH = ("neo4j", "password")

INDEX_NAME = "index-name"

# Connect to Neo4j database
driver = GraphDatabase.driver(URI, auth=AUTH)

# 2. Retriever
# Create Embedder object, needed to convert the user question (text) to a vector
embedder = OpenAIEmbeddings(model="text-embedding-3-large")

# Initialize the retriever
retriever = VectorRetriever(driver, INDEX_NAME, embedder)

# 3. LLM
# Note: OPENAI_API_KEY must be set in the environment for the OpenAIEmbeddings
# above; the LLM below receives its key explicitly.
llm = LiteLLMInterface(
    provider="openai",
    model_name="gpt-3.5-turbo",
    base_url="https://api.openai.com/v1",
    api_key="your-api-key-here"
)

# Initialize the RAG pipeline
rag = GraphRAG(retriever=retriever, llm=llm)

# Query the graph
query_text = "How do I do similarity search in Neo4j?"
response = rag.search(query_text=query_text, retriever_config={"top_k": 5})
print(response.answer)

License

MIT License

Author

1Vewton.zh-n (zhanyunze0601@gmail.com)

Contributing

Contributions are welcome! Please feel free to submit issues and pull requests.
