
LlamaIndex Graph RAG Integration: Cognee

Cognee helps developers bring greater predictability and control to their Retrieval-Augmented Generation (RAG) workflows by combining graph architectures, vector stores, and auto-optimizing pipelines. Representing information as a graph is often the clearest way to grasp the content of your documents; crucially, graphs also allow data to be navigated and extracted systematically according to the documents' hierarchy.

For more information, see the Cognee documentation.

Installation

pip install llama-index-graph-rag-cognee

Usage

import os
import pandas as pd
import asyncio

from llama_index.core import Document
from llama_index.graph_rag.cognee import CogneeGraphRAG


async def example_graph_rag_cognee():
    # Gather documents to add to GraphRAG
    news = pd.read_csv(
        "https://raw.githubusercontent.com/tomasonjo/blog-datasets/main/news_articles.csv"
    )[:5]
    documents = [
        Document(text=f"{row['title']}: {row['text']}")
        for _, row in news.iterrows()
    ]

    # Instantiate cognee GraphRAG
    cogneeRAG = CogneeGraphRAG(
        llm_api_key=os.environ["OPENAI_API_KEY"],
        llm_provider="openai",
        llm_model="gpt-4o-mini",
        graph_db_provider="networkx",
        vector_db_provider="lancedb",
        relational_db_provider="sqlite",
        relational_db_name="cognee_db",
    )

    # Add documents to cognee under the dataset name "test"
    await cogneeRAG.add(documents, "test")

    # Process data into a knowledge graph
    await cogneeRAG.process_data("test")

    # Answer prompt based on knowledge graph
    search_results = await cogneeRAG.search(
        "Who are the people mentioned?"
    )
    print("\n\nAnswer based on knowledge graph:\n")
    for result in search_results:
        print(f"{result}\n")

    # Answer prompt based on RAG
    search_results = await cogneeRAG.rag_search(
        "Who are the people mentioned?"
    )
    print("\n\nAnswer based on RAG:\n")
    for result in search_results:
        print(f"{result}\n")

    # Search for related nodes in graph
    search_results = await cogneeRAG.get_related_nodes("person")
    print("\n\nRelated nodes are:\n")
    for result in search_results:
        print(f"{result}\n")


if __name__ == "__main__":
    asyncio.run(example_graph_rag_cognee())

Supported databases

Relational databases: SQLite, PostgreSQL

Vector databases: LanceDB, PGVector, Qdrant, Weaviate

Graph databases: Neo4j, NetworkX
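
The defaults in the usage example above (NetworkX, LanceDB, SQLite) run entirely in-process, which is convenient for local experimentation. For a persistent deployment you can swap in the server-backed options from this list via the same constructor arguments. The sketch below is a hedged example, not a verified configuration: the provider strings "neo4j", "pgvector", and "postgres" and the assumption that connection details (host, credentials) are supplied through cognee's environment-based configuration should be checked against the Cognee documentation for your version.

```python
import os

from llama_index.graph_rag.cognee import CogneeGraphRAG

# Hypothetical production-style configuration: Neo4j for the graph,
# PGVector for embeddings, PostgreSQL for relational storage.
# Provider strings and env-based connection settings are assumptions;
# verify them against the Cognee documentation before use.
cogneeRAG = CogneeGraphRAG(
    llm_api_key=os.environ["OPENAI_API_KEY"],
    llm_provider="openai",
    llm_model="gpt-4o-mini",
    graph_db_provider="neo4j",
    vector_db_provider="pgvector",
    relational_db_provider="postgres",
    relational_db_name="cognee_db",
)
```

The rest of the workflow (add, process_data, search) is unchanged; only the storage backends differ.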
