
ARP LLM (arp-llm)

Shared helper for calling LLM providers across the ARP/JARVIS stack.

This package provides:

  • ChatModel.response(...) (chat/text + optional JSON Schema structured output)
  • Embedder.embed(...) (embeddings)
  • load_chat_model_from_env(...) / load_embedder_from_env(...) profile-based configuration

Install

pip install arp-llm

Quickstart (OpenAI default)

export ARP_LLM_API_KEY=...
export ARP_LLM_CHAT_MODEL=gpt-4.1-mini
# Optional (OpenAI is the default profile):
# export ARP_LLM_PROFILE=openai

import asyncio

from arp_llm import Message, load_chat_model_from_env

async def main() -> None:
    model = load_chat_model_from_env()
    resp = await model.response([Message.user("hello")])
    print(resp.text)

asyncio.run(main())

Dev mock (optional; no network)

export ARP_LLM_PROFILE=dev-mock
# Optional: provide deterministic fixtures
# export ARP_LLM_DEV_MOCK_FIXTURES_PATH=./fixtures.json

Configuration (OpenAI)

# ARP_LLM_PROFILE=openai is optional (default)
export ARP_LLM_API_KEY=...
export ARP_LLM_CHAT_MODEL=gpt-4.1-mini
# Optional overrides:
export ARP_LLM_BASE_URL=https://api.openai.com

API

  • ChatModel.response(messages, *, response_schema=None, temperature=None, timeout_seconds=None, metadata=None) -> Response
    • If response_schema is provided, Response.parsed will be a JSON-like dict.
  • Embedder.embed(texts, *, timeout_seconds=None, metadata=None) -> EmbeddingResponse
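As a hedged sketch (not the package's own code), response_schema is a standard JSON Schema dict, and Response.parsed would then be a plain dict conforming to it. The schema and payload below are illustrative only:

```python
# Hypothetical illustration: the real Response type comes from arp_llm.
# Here we only show the JSON Schema shape and the kind of dict that
# Response.parsed would hold when response_schema is supplied.

# A JSON Schema describing the structured output we want back.
response_schema = {
    "type": "object",
    "properties": {
        "sentiment": {"type": "string", "enum": ["positive", "negative", "neutral"]},
        "confidence": {"type": "number"},
    },
    "required": ["sentiment", "confidence"],
}

# What Response.parsed might look like for that schema (illustrative only).
parsed = {"sentiment": "positive", "confidence": 0.92}

# Minimal stdlib check that the payload carries every required key.
missing = [k for k in response_schema["required"] if k not in parsed]
print(missing)  # an empty list means every required key is present
```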

Direct construction (advanced)

The load_*_from_env() helpers are optional. For multi-provider routing or fallback inside a single process, construct provider clients directly and route per call, for example:

from arp_llm.providers.openai import OpenAIChatModel

model = OpenAIChatModel(model="gpt-4.1-mini", api_key="...", base_url="https://api.openai.com")
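The per-call routing/fallback pattern mentioned above can be sketched with hypothetical stand-in clients. The real clients would be OpenAIChatModel and peers; everything below is illustrative, not arp-llm API:

```python
import asyncio

# Hypothetical stand-ins for real provider clients such as OpenAIChatModel.
# Each exposes the same async response(...) surface, so the router can treat
# them interchangeably.
class FlakyModel:
    async def response(self, messages):
        raise RuntimeError("provider unavailable")

class BackupModel:
    async def response(self, messages):
        return f"echo: {messages[-1]}"

async def respond_with_fallback(models, messages):
    """Try each client in order; return the first successful response."""
    last_error = None
    for model in models:
        try:
            return await model.response(messages)
        except Exception as exc:  # sketch only; real code would narrow this
            last_error = exc
    raise last_error

result = asyncio.run(respond_with_fallback([FlakyModel(), BackupModel()], ["hello"]))
print(result)  # the backup answers after the primary fails
```

The design choice here is that routing lives in the caller, not in the clients, which is why constructing providers directly (rather than via the env helpers) is the natural fit for this pattern.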

See https://github.com/AgentRuntimeProtocol/BusinessDocs/blob/main/Business_Docs/JARVIS/LLMProvider/HLD.md and https://github.com/AgentRuntimeProtocol/BusinessDocs/blob/main/Business_Docs/JARVIS/LLMProvider/LLD.md for the design intent.
