
Project description

ARP LLM (arp-llm)

Shared helper for calling LLM providers across the ARP/JARVIS stack.

This package provides:

  • ChatModel.response(...) (chat/text + optional JSON Schema structured output)
  • Embedder.embed(...) (embeddings)
  • load_chat_model_from_env(...) / load_embedder_from_env(...) profile-based configuration

Install

pip install arp-llm

Quickstart (dev mock; no network)

import asyncio

from arp_llm import Message, load_chat_model_from_env_or_dev_mock

async def main() -> None:
    model = load_chat_model_from_env_or_dev_mock()
    resp = await model.response([Message.user("hello")])
    print(resp.text)

asyncio.run(main())
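Because response(...) is async, several prompts can be issued concurrently with asyncio.gather. The stub classes below are illustrative stand-ins (not arp_llm internals) that mirror the shapes used in the quickstart — Message.user(...), model.response(...), Response.text:

```python
import asyncio
from dataclasses import dataclass

# Stand-in stubs mirroring the quickstart's shapes; the real classes
# come from arp_llm.
@dataclass
class Message:
    role: str
    content: str

    @classmethod
    def user(cls, content: str) -> "Message":
        return cls("user", content)

@dataclass
class Response:
    text: str

class MockChatModel:
    async def response(self, messages: list[Message]) -> Response:
        # Echo the last user message, as a dev mock might.
        return Response(text=f"echo: {messages[-1].content}")

async def main() -> list[str]:
    model = MockChatModel()
    # Fan several prompts out concurrently and collect the texts.
    resps = await asyncio.gather(
        *(model.response([Message.user(p)]) for p in ("hello", "world"))
    )
    return [r.text for r in resps]

print(asyncio.run(main()))
```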

Configuration (OpenAI)

export ARP_LLM_PROFILE=openai
export ARP_LLM_API_KEY=...
export ARP_LLM_CHAT_MODEL=gpt-4.1-mini
export ARP_LLM_BASE_URL=https://api.openai.com
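Profile-based loading reads these ARP_LLM_* variables from the environment. As a rough sketch of what that implies (the loader and ChatConfig below are hypothetical, not the arp_llm implementation), required settings raise if absent while the base URL can default:

```python
import os
from dataclasses import dataclass

# Hypothetical config object; field names follow the ARP_LLM_* settings above.
@dataclass
class ChatConfig:
    profile: str
    api_key: str
    model: str
    base_url: str

def load_chat_config(env=None) -> ChatConfig:
    env = os.environ if env is None else env
    return ChatConfig(
        profile=env["ARP_LLM_PROFILE"],      # KeyError if unset
        api_key=env["ARP_LLM_API_KEY"],      # KeyError if unset
        model=env["ARP_LLM_CHAT_MODEL"],     # KeyError if unset
        base_url=env.get("ARP_LLM_BASE_URL", "https://api.openai.com"),
    )

cfg = load_chat_config({
    "ARP_LLM_PROFILE": "openai",
    "ARP_LLM_API_KEY": "sk-...",
    "ARP_LLM_CHAT_MODEL": "gpt-4.1-mini",
})
print(cfg.profile, cfg.model)
```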

API

  • ChatModel.response(messages, *, response_schema=None, temperature=None, timeout_seconds=None, metadata=None) -> Response
    • If response_schema is provided, Response.parsed will be a JSON-like dict.
  • Embedder.embed(texts, *, timeout_seconds=None, metadata=None) -> EmbeddingResponse
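To illustrate structured output: response_schema would be a JSON Schema dict, and Response.parsed a JSON-like dict intended to conform to it. The schema, sample parsed value, and the minimal required-keys check below are all illustrative — provider-side validation is what arp_llm actually relies on:

```python
# An example JSON Schema of the kind you might pass as response_schema.
schema = {
    "type": "object",
    "properties": {
        "sentiment": {"type": "string"},
        "confidence": {"type": "number"},
    },
    "required": ["sentiment", "confidence"],
}

def has_required_keys(parsed: dict, schema: dict) -> bool:
    # Minimal client-side sanity check: every required key is present.
    return all(k in parsed for k in schema.get("required", []))

parsed = {"sentiment": "positive", "confidence": 0.9}  # e.g. Response.parsed
print(has_required_keys(parsed, schema))
```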

Direct construction (advanced)

The load_*_from_env*() helpers are optional. For multi-provider routing/fallback inside a single process, construct provider clients directly (and route per call), for example:

from arp_llm.providers.openai import OpenAIChatModel

model = OpenAIChatModel(model="gpt-4.1-mini", api_key="...", base_url="https://api.openai.com")
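One way per-call routing could look, under stated assumptions: the two stand-in classes below are hypothetical placeholders where OpenAIChatModel and a second provider client would slot in, and the router simply tries each model in order until one succeeds:

```python
import asyncio

# Hypothetical stand-ins for two provider clients.
class FlakyModel:
    async def response(self, messages):
        raise TimeoutError("primary unavailable")

class BackupModel:
    async def response(self, messages):
        return "ok from backup"

async def respond_with_fallback(models, messages):
    last_exc = None
    for model in models:
        try:
            return await model.response(messages)
        except Exception as exc:  # in practice, catch provider-specific errors
            last_exc = exc
    raise last_exc  # every model failed; surface the last error

result = asyncio.run(respond_with_fallback([FlakyModel(), BackupModel()], []))
print(result)
```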

See Business_Docs/JARVIS/LLMProvider/HLD.md and Business_Docs/JARVIS/LLMProvider/LLD.md for the design intent.

Download files

Download the file for your platform.

Source Distribution

arp_llm-0.1.0.tar.gz (13.1 kB)

Uploaded Source

Built Distribution


arp_llm-0.1.0-py3-none-any.whl (11.4 kB)

Uploaded Python 3

File details

Details for the file arp_llm-0.1.0.tar.gz.

File metadata

  • Download URL: arp_llm-0.1.0.tar.gz
  • Size: 13.1 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.13.7

File hashes

Hashes for arp_llm-0.1.0.tar.gz

  • SHA256: aa03448c1d30689b00bbaf1996fd8084d1becc4592ede326287bd53ec2352b7e
  • MD5: 9a1e29b7dc077a1af723945e9b8f7e20
  • BLAKE2b-256: d5c036116a06e08ac1ba3ae696485fc639b52b6de3538ea9e05760115ae2e919


Provenance

The following attestation bundles were made for arp_llm-0.1.0.tar.gz:

Publisher: release.yml on AgentRuntimeProtocol/ARP_LLM

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.

File details

Details for the file arp_llm-0.1.0-py3-none-any.whl.

File metadata

  • Download URL: arp_llm-0.1.0-py3-none-any.whl
  • Size: 11.4 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.13.7

File hashes

Hashes for arp_llm-0.1.0-py3-none-any.whl

  • SHA256: 8232330becf826fbce03476784b12ddd3f926c2fc19be62634aaa9228411c610
  • MD5: 21d4201e71a53ab33e619ebac9df434a
  • BLAKE2b-256: 97dcb469d5a0827c0c8261b4a34e660d3af17bdc7edd14a7c9865ff48b2042db


Provenance

The following attestation bundles were made for arp_llm-0.1.0-py3-none-any.whl:

Publisher: release.yml on AgentRuntimeProtocol/ARP_LLM

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.
