
cross-ai-core


Multi-provider AI dispatcher with MD5-keyed response caching and unified error handling.

Supports Anthropic, xAI (Grok), OpenAI, Google Gemini, and Perplexity through a single consistent interface.

Requirements

  • Python 3.10 or newer (3.11 recommended for development)
  • No upper version limit — tested on 3.10–3.13

Install

Install only the provider(s) you need:

pip install "cross-ai-core[anthropic]"   # Claude
pip install "cross-ai-core[gemini]"      # Google Gemini
pip install "cross-ai-core[openai]"      # OpenAI (ChatGPT)
pip install "cross-ai-core[xai]"         # xAI Grok  (uses the OpenAI SDK)
pip install cross-ai-core                # Perplexity only (uses requests, no extra SDK)

Install all providers at once (used by cross, which runs all 5 simultaneously):

pip install "cross-ai-core[all]"

Dependencies

requests is always installed — it is used for the Perplexity provider and general HTTP.
The three provider SDKs are optional extras; pip installs only what you request.

Extra        Package          Version   Providers covered
(base)       requests         ≥2.32.4   Perplexity
[anthropic]  anthropic        ≥0.84.0   Anthropic / Claude
[gemini]     google-genai     ≥1.65.0   Google Gemini
[openai]     openai           ≥1.70.0   OpenAI
[xai]        openai           ≥1.70.0   xAI / Grok (OpenAI-compatible API)
[all]        all three above            All 5 providers

Quick start

import os
from dotenv import load_dotenv
load_dotenv(os.path.expanduser("~/.crossenv"))  # your app loads keys; the library reads os.environ

from cross_ai_core import process_prompt, get_content, get_default_ai

provider = get_default_ai()         # reads DEFAULT_AI from env, falls back to "xai"
result   = process_prompt(
    provider,
    "Explain transformer attention in 3 sentences.",
    system="You are a concise technical writer.",   # omit to use each provider's default
    verbose=False,
    use_cache=True,
)
print(get_content(provider, result.response))
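The same entry point can also be fanned out across providers. The sketch below shows the pattern with a local stand-in for `process_prompt` so it runs without the library installed; in real use, import it from `cross_ai_core` instead.

```python
# Illustrative sketch: fan one prompt out to several providers in parallel.
# `process_prompt` here is a local stand-in so the snippet runs standalone;
# with the library installed, use `from cross_ai_core import process_prompt`.
from concurrent.futures import ThreadPoolExecutor

def process_prompt(provider: str, prompt: str, **kwargs) -> str:
    return f"{provider}: ok"  # stand-in; the real call returns a result object

providers = ["xai", "anthropic", "openai", "gemini", "perplexity"]
prompt = "Explain transformer attention in 3 sentences."

with ThreadPoolExecutor(max_workers=len(providers)) as pool:
    results = dict(zip(providers, pool.map(
        lambda p: process_prompt(p, prompt, use_cache=True), providers)))

for name, reply in results.items():
    print(name, "->", reply)
```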

Configuration (environment variables)

Variable             Default              Purpose
DEFAULT_AI           xai                  Default provider when none is specified
XAI_API_KEY          (unset)              xAI / Grok API key
ANTHROPIC_API_KEY    (unset)              Anthropic / Claude API key
OPENAI_API_KEY       (unset)              OpenAI API key
GEMINI_API_KEY       (unset)              Google Gemini API key
PERPLEXITY_API_KEY   (unset)              Perplexity API key
CROSS_API_CACHE_DIR  ~/.cross_api_cache/  Response cache directory
CROSS_NO_CACHE       (unset)              Set to 1 to disable caching globally

The library only reads from os.environ — it never calls load_dotenv() itself.
Load your .env or ~/.crossenv before importing.
You only need to set API keys for the providers you actually use.
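For reference, a minimal `~/.crossenv` might look like the following (all values are placeholders; set only the keys for the providers you use):

```shell
# Example ~/.crossenv (placeholder values)
DEFAULT_AI=anthropic
ANTHROPIC_API_KEY=your-anthropic-key-here
XAI_API_KEY=your-xai-key-here
CROSS_API_CACHE_DIR=~/.cross_api_cache
# CROSS_NO_CACHE=1   # uncomment to disable caching globally
```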

Caching

Responses are cached by MD5 hash of the request payload in ~/.cross_api_cache/.
The cache is safe to delete at any time.

# Bypass cache for one call
result = process_prompt(provider, prompt, verbose=False, use_cache=False)

# Check if a response was served from cache
if result.was_cached:
    print("from cache")
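For intuition, the MD5 keying described above can be sketched as follows. The exact payload fields and cache-file layout are internal details of the library, so treat this as illustrative only:

```python
import hashlib
import json

def cache_key(provider: str, payload: dict) -> str:
    """Illustrative MD5 key over a canonicalized request payload."""
    # sort_keys makes the key independent of dict insertion order
    blob = json.dumps({"provider": provider, **payload}, sort_keys=True)
    return hashlib.md5(blob.encode("utf-8")).hexdigest()

key = cache_key("xai", {"prompt": "hello", "system": "be brief"})
print(key)  # a 32-character hex digest
```

Identical payloads always map to the same key, which is what lets repeat calls hit the cache.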

Development

cd ~/github/cross-ai-core
python3.11 -m venv .venv
source .venv/bin/activate
pip install -e ".[dev]"     # installs the package + pytest + pytest-mock

Run the test suite:

python -m pytest tests/ -v

Tests use mocks — no real API keys required.

Note: Keep each repo's .venv separate; do not share it with dependent projects.

Adding a provider

  1. Create cross_ai_core/ai_<name>.py implementing BaseAIHandler (get_payload, get_client, get_cached_response, get_model, get_make, get_content, put_content, get_data_content, get_title, get_usage).
  2. Register in cross_ai_core/ai_handler.py: add to AI_HANDLER_REGISTRY and AI_LIST.
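As a rough sketch of step 1 (method signatures here are assumptions; `BaseAIHandler` in the repo is authoritative), a new provider module might look like:

```python
# Sketch of cross_ai_core/ai_example.py for a hypothetical "example" provider.
# Signatures are illustrative; check BaseAIHandler for the real contract.

class ExampleHandler:  # real code would subclass BaseAIHandler
    def get_model(self):
        return "example-model-v1"

    def get_payload(self, prompt, system=None):
        # Build the provider-specific request body.
        return {"model": self.get_model(), "prompt": prompt, "system": system}

    def get_client(self):
        raise NotImplementedError("construct the provider SDK/HTTP client here")

    def get_make(self, client, payload):
        raise NotImplementedError("issue the API call and return its response")

    def get_content(self, response):
        # Extract the assistant text from the raw response.
        return response.get("text", "")

    # The remaining hooks listed above (get_cached_response, put_content,
    # get_data_content, get_title, get_usage) follow the same pattern.

handler = ExampleHandler()
print(handler.get_payload("hi"))
```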

Documentation

  • API reference — all public functions, AIResponse, parallel calls, error handling
  • Providers — per-provider guide: models, API keys, strengths, free tiers
  • Changelog

License

MIT — free for personal, academic, and open-source use.
See COMMERCIAL_LICENSE.md for organizational and commercial use.

