
GraphRAG LLM package.

Project description

GraphRAG LLM

Basic Completion

import os
from collections.abc import Iterator

from dotenv import load_dotenv
from graphrag_llm.completion import LLMCompletion, create_completion
from graphrag_llm.config import AuthMethod, ModelConfig
from graphrag_llm.types import LLMCompletionChunk, LLMCompletionResponse
from graphrag_llm.utils import gather_completion_response

load_dotenv()

api_key = os.getenv("GRAPHRAG_API_KEY")
model_config = ModelConfig(
    model_provider="azure",
    model=os.getenv("GRAPHRAG_MODEL", "gpt-4o"),
    azure_deployment_name=os.getenv("GRAPHRAG_MODEL", "gpt-4o"),
    api_base=os.getenv("GRAPHRAG_API_BASE"),
    api_version=os.getenv("GRAPHRAG_API_VERSION", "2025-04-01-preview"),
    api_key=api_key,
    auth_method=AuthMethod.AzureManagedIdentity if not api_key else AuthMethod.ApiKey,
)
llm_completion: LLMCompletion = create_completion(model_config)

response: LLMCompletionResponse | Iterator[LLMCompletionChunk] = (
    llm_completion.completion(
        messages="What is the capital of France?",
    )
)

if isinstance(response, Iterator):
    # Streaming response
    for chunk in response:
        print(chunk.choices[0].delta.content or "", end="", flush=True)
else:
    # Non-streaming response
    print(response.choices[0].message.content)

# Alternatively, if all you need is the text of the first choice, the
# gather_completion_response utility collects the full response for you.
# It is equivalent to the streaming/non-streaming handling above.
response_text = gather_completion_response(response)
print(response_text)

Basic Embedding

# Continues from the completion example above (reuses os, ModelConfig,
# AuthMethod, and api_key).
from graphrag_llm.embedding import LLMEmbedding, create_embedding
from graphrag_llm.types import LLMEmbeddingResponse
from graphrag_llm.utils import gather_embeddings

embedding_config = ModelConfig(
    model_provider="azure",
    model=os.getenv("GRAPHRAG_EMBEDDING_MODEL", "text-embedding-3-small"),
    azure_deployment_name=os.getenv(
        "GRAPHRAG_EMBEDDING_MODEL", "text-embedding-3-small"
    ),
    api_base=os.getenv("GRAPHRAG_API_BASE"),
    api_version=os.getenv("GRAPHRAG_API_VERSION", "2025-04-01-preview"),
    api_key=api_key,
    auth_method=AuthMethod.AzureManagedIdentity if not api_key else AuthMethod.ApiKey,
)

llm_embedding: LLMEmbedding = create_embedding(embedding_config)

embeddings_batch: LLMEmbeddingResponse = llm_embedding.embedding(
    input=["Hello world", "How are you?"]
)
for data in embeddings_batch.data:
    print(data.embedding[0:3])

# Or gather just the vectors with the utility function
batch = gather_embeddings(embeddings_batch)
for embedding in batch:
    print(embedding[:3])
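Embedding vectors are typically compared with cosine similarity (GraphRAG uses them for semantic retrieval). A minimal, self-contained sketch with made-up vectors, not real model output:

```python
import math


def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two embedding vectors of equal length."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm


v1 = [0.1, 0.2, 0.3]   # stand-in vectors for illustration only
v2 = [0.1, 0.25, 0.28]
print(f"{cosine_similarity(v1, v2):.4f}")
```

Identical directions score 1.0 and orthogonal vectors score 0.0; real embedding vectors would come from `data.embedding` in the response above.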

View the notebooks for more examples.

Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

graphrag_llm-3.0.0.tar.gz (62.5 kB)

Uploaded Source

Built Distribution


graphrag_llm-3.0.0-py3-none-any.whl (84.4 kB)

Uploaded Python 3

File details

Details for the file graphrag_llm-3.0.0.tar.gz.

File metadata

  • Download URL: graphrag_llm-3.0.0.tar.gz
  • Upload date:
  • Size: 62.5 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: uv/0.9.4

File hashes

Hashes for graphrag_llm-3.0.0.tar.gz
  • SHA256: 52cdb4d3cee1be59307dd5c7fc699f90922c1924d8ae61e619392a78a277a6a8
  • MD5: 2fd1f442163bb36cf37dcf9fd75b8f8a
  • BLAKE2b-256: 7ecd14f76733f56cfb45176a274d7d0b7bf37c70bc57122c2ec57b3013d22a4a

See more details on using hashes here.

File details

Details for the file graphrag_llm-3.0.0-py3-none-any.whl.

File hashes

Hashes for graphrag_llm-3.0.0-py3-none-any.whl
  • SHA256: 0b43d53cbb68dca7ebdf9d2c6589d268ddb9b8a068e9fe0a2cc2ebe95d309081
  • MD5: 7dd98d0159c8c379cc86582f3d4878dd
  • BLAKE2b-256: 5c776fc68650ba0673121549417876166ddc40f7797902be97e9bd5429b556d9

