
LLM client for chat responses, with support for multiple backends and models.

Project description

Modular, backend-agnostic interface for interacting with large language models (LLMs).

This library is part of the darca-* ecosystem and provides a plug-and-play, extensible interface to communicate with LLM providers like OpenAI. It is designed with testability, structure, and future integration in mind.


Features

  • ✅ Unified AIClient interface for all LLMs

  • 🔌 OpenAI integration out of the box (GPT-4, GPT-3.5)

  • 🧱 Extensible abstract interface (BaseLLMClient) for new providers

  • 🧪 Full pytest support with 100% coverage

  • 📦 Rich exception handling with structured DarcaException

  • 📋 Markdown-aware content formatting using _strip_markdown_prefix

  • 🧠 Logging support via darca-log-facility
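
New providers plug in by subclassing the abstract BaseLLMClient. The library defines the actual abstract methods; the sketch below is a hypothetical illustration of that pattern (the names BaseLLMClientSketch and EchoClient are invented here), not darca-llm's real API.

```python
from abc import ABC, abstractmethod


# Hypothetical sketch of the provider pattern; the real BaseLLMClient
# in darca-llm may define different method names and signatures.
class BaseLLMClientSketch(ABC):
    @abstractmethod
    def get_raw_response(self, system: str, user: str) -> str:
        """Return the model's raw text reply for a system/user prompt pair."""


class EchoClient(BaseLLMClientSketch):
    """Toy provider that echoes the prompts back, useful in tests."""

    def get_raw_response(self, system: str, user: str) -> str:
        return f"[{system}] {user}"


client = EchoClient()
print(client.get_raw_response("sys", "hello"))  # prints: [sys] hello
```

A concrete subclass only has to implement the abstract method; code written against the base interface then works with any provider.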

Quickstart

  1. Install dependencies:

make install

  2. Run all quality checks (format, test, docs):

make check

  3. Run tests only:

make test

  4. Use the client:

from darca_llm import AIClient

ai = AIClient()
response = ai.get_raw_response(
    system="You are a helpful assistant.",
    user="What is a Python decorator?"
)
print(response)

Using get_file_content_response

The get_file_content_response() method prompts the LLM for file content and returns a single, cleanly formatted code block.

Example:

from darca_llm import AIClient

client = AIClient()

user_prompt = "Provide the content of a simple Python file."

result = client.get_file_content_response(
    system="You are a helpful coding assistant.",
    user=user_prompt
)
print(result)

This method validates that the response contains exactly one code block and strips its markdown fencing using _strip_markdown_prefix().
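
_strip_markdown_prefix() is an internal helper and its exact behavior is not documented here; a minimal, hypothetical reimplementation of the idea (removing a surrounding ``` fence, with optional language tag, from an LLM response) might look like:

```python
def strip_markdown_fence(text: str) -> str:
    """Hypothetical sketch: remove a surrounding ``` fence (with an
    optional language tag) from an LLM response, returning the body."""
    lines = text.strip().splitlines()
    if len(lines) >= 2 and lines[0].startswith("```") and lines[-1].strip() == "```":
        return "\n".join(lines[1:-1])
    return text  # no fence detected; return the response unchanged


raw = "```python\nprint('hi')\n```"
print(strip_markdown_fence(raw))  # prints: print('hi')
```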

Error Handling

All exceptions are subclasses of DarcaException and include:

  • LLMException: Base for all LLM-specific errors

  • LLMAPIKeyMissing: Raised when the API key is missing for the selected backend

  • LLMContentFormatError: Raised when multiple code blocks are detected in the response, or when markdown/code-block formatting cannot be stripped

  • LLMResponseError: Raised when the LLM provider returns an error or response parsing fails

All exceptions include:

  • error_code

  • message

  • Optional metadata

  • Optional cause

  • Full stack trace logging
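
The structure above can be sketched as a base exception carrying an error code, a message, optional metadata, and an optional cause. DarcaException's real constructor lives in the darca ecosystem and may differ; the class names below are invented for illustration.

```python
class DarcaExceptionSketch(Exception):
    """Hypothetical sketch of a structured exception; the real
    DarcaException may have a different constructor."""

    def __init__(self, message, error_code, metadata=None, cause=None):
        super().__init__(message)
        self.message = message
        self.error_code = error_code
        self.metadata = metadata or {}
        self.cause = cause


class LLMAPIKeyMissingSketch(DarcaExceptionSketch):
    """Stands in for LLMAPIKeyMissing in this sketch."""


try:
    raise LLMAPIKeyMissingSketch(
        "OPENAI_API_KEY not set",
        error_code="LLM_API_KEY_MISSING",
        metadata={"backend": "openai"},
    )
except DarcaExceptionSketch as exc:
    print(exc.error_code, exc.metadata["backend"])  # prints: LLM_API_KEY_MISSING openai
```

Catching the base class is enough to handle every error in the hierarchy while still having the structured fields available.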

Documentation

Build and view the docs:

make docs

Open the HTML documentation at:

docs/build/html/index.html

For detailed usage, refer to the usage.rst documentation.

Project details


Download files

Download the file for your platform.

Source Distribution

darca_llm-0.2.0.tar.gz (6.2 kB)

Built Distribution


darca_llm-0.2.0-py3-none-any.whl (7.3 kB)

File details

Details for the file darca_llm-0.2.0.tar.gz.

File metadata

  • Download URL: darca_llm-0.2.0.tar.gz
  • Upload date:
  • Size: 6.2 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.12.9

File hashes

Hashes for darca_llm-0.2.0.tar.gz

  • SHA256: ac90ddd2eb62354c41fb02780d9a13fde0bd44556b2c01d8aae29ac22400d9c0
  • MD5: 3b6e446fdc2942c246557716b8e5f0ca
  • BLAKE2b-256: 31b25875442c7b1355f7357cedb1b701037ff0156dea4abdb11d4a011a4354b4
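
A downloaded file can be checked against the SHA256 digest above with the standard library. This sketch assumes the tarball sits in the current directory:

```python
import hashlib


def sha256_of(path: str) -> str:
    """Return the hex SHA256 digest of a file, read in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()


expected = "ac90ddd2eb62354c41fb02780d9a13fde0bd44556b2c01d8aae29ac22400d9c0"
# After downloading the sdist next to this script:
# print(sha256_of("darca_llm-0.2.0.tar.gz") == expected)
```

pip can also enforce hashes at install time via its hash-checking mode (--require-hashes with pinned requirements).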


Provenance

The following attestation bundles were made for darca_llm-0.2.0.tar.gz:

Publisher: cd.yml on roelkist/darca-llm


File details

Details for the file darca_llm-0.2.0-py3-none-any.whl.

File metadata

  • Download URL: darca_llm-0.2.0-py3-none-any.whl
  • Upload date:
  • Size: 7.3 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.12.9

File hashes

Hashes for darca_llm-0.2.0-py3-none-any.whl

  • SHA256: 82b57a6ce1b2e061c6d4c048491192761659fa8aa302592fe63b21705546b3fc
  • MD5: 7c2139717bf05b6e0ac4100f511309be
  • BLAKE2b-256: 36849d628e1005e14488371cdc92566310dc652c63914e8278614e1337a896db


Provenance

The following attestation bundles were made for darca_llm-0.2.0-py3-none-any.whl:

Publisher: cd.yml on roelkist/darca-llm

