
GraphRAG LLM

Basic Completion

import os
from collections.abc import Iterator

from dotenv import load_dotenv
from graphrag_llm.completion import LLMCompletion, create_completion
from graphrag_llm.config import AuthMethod, ModelConfig
from graphrag_llm.types import LLMCompletionChunk, LLMCompletionResponse
from graphrag_llm.utils import gather_completion_response

load_dotenv()

api_key = os.getenv("GRAPHRAG_API_KEY")
model_config = ModelConfig(
    model_provider="azure",
    model=os.getenv("GRAPHRAG_MODEL", "gpt-4o"),
    azure_deployment_name=os.getenv("GRAPHRAG_MODEL", "gpt-4o"),
    api_base=os.getenv("GRAPHRAG_API_BASE"),
    api_version=os.getenv("GRAPHRAG_API_VERSION", "2025-04-01-preview"),
    api_key=api_key,
    auth_method=AuthMethod.AzureManagedIdentity if not api_key else AuthMethod.ApiKey,
)
llm_completion: LLMCompletion = create_completion(model_config)

response: LLMCompletionResponse | Iterator[LLMCompletionChunk] = (
    llm_completion.completion(
        messages="What is the capital of France?",
    )
)

if isinstance(response, Iterator):
    # Streaming response
    for chunk in response:
        print(chunk.choices[0].delta.content or "", end="", flush=True)
else:
    # Non-streaming response
    print(response.choices[0].message.content)

# Alternatively, if all you need is the first choice's text, the
# gather_completion_response utility is equivalent to the logic above:
# it handles both streaming and non-streaming responses and returns
# the gathered text.
response_text = gather_completion_response(response)
print(response_text)

Basic Embedding

from graphrag_llm.embedding import LLMEmbedding, create_embedding
from graphrag_llm.types import LLMEmbeddingResponse
from graphrag_llm.utils import gather_embeddings

embedding_config = ModelConfig(
    model_provider="azure",
    model=os.getenv("GRAPHRAG_EMBEDDING_MODEL", "text-embedding-3-small"),
    azure_deployment_name=os.getenv(
        "GRAPHRAG_EMBEDDING_MODEL", "text-embedding-3-small"
    ),
    api_base=os.getenv("GRAPHRAG_API_BASE"),
    api_version=os.getenv("GRAPHRAG_API_VERSION", "2025-04-01-preview"),
    api_key=api_key,
    auth_method=AuthMethod.AzureManagedIdentity if not api_key else AuthMethod.ApiKey,
)

llm_embedding: LLMEmbedding = create_embedding(embedding_config)

embeddings_batch: LLMEmbeddingResponse = llm_embedding.embedding(
    input=["Hello world", "How are you?"]
)
for data in embeddings_batch.data:
    print(data.embedding[0:3])

# Alternatively, gather_embeddings extracts the raw vectors:
batch = gather_embeddings(embeddings_batch)
for embedding in batch:
    print(embedding[0:3])

View the notebooks for more examples.

Download files

Download the file for your platform.

Source Distribution

graphrag_llm-3.0.1.tar.gz (62.6 kB)

Built Distribution

graphrag_llm-3.0.1-py3-none-any.whl (84.5 kB)

File details

Details for the file graphrag_llm-3.0.1.tar.gz.

File metadata

  • Size: 62.6 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: uv/0.9.4

File hashes

  • SHA256: 4bacac523fda0b84c669ad87da1a4ee145d516b298af00be5a7a16e4a894da66
  • MD5: cafc615f676eec492973d29b50f40649
  • BLAKE2b-256: 0da48dd42354ba78d4ce91d69b6ed9eca67e68e0158f93820ab0c632ddc71979


File details

Details for the file graphrag_llm-3.0.1-py3-none-any.whl.

File hashes

  • SHA256: e45dc7c29b376a048108040b78ddcd56d770c2b0d02568c477fe6f585e45ca81
  • MD5: 019401f1e4cfd518059b029d2f2e283e
  • BLAKE2b-256: 9001701ae3c290cdee3aa9697673d9ea318dd6be618c8db9a1a1a7ee26d2cb47

