Nimble LLM Caller

A robust, multi-model LLM calling package with advanced prompt management, retry logic, and document assembly capabilities.

Features

  • Multi-Model Support: Call multiple LLM providers (OpenAI, Anthropic, Google, etc.) through LiteLLM
  • Batch Processing: Submit multiple prompts to multiple models efficiently
  • Robust JSON Parsing: Multiple fallback strategies for parsing LLM responses
  • Retry Logic: Exponential backoff with jitter for handling rate limits and transient errors
  • Prompt Management: JSON-based prompt templates with variable substitution
  • Document Assembly: Built-in formatters for text, markdown, and LaTeX output
  • Reprompting Support: Use results from previous calls as context for new prompts
  • Secret Management: Secure handling of API keys via environment variables
  • Comprehensive Logging: Detailed logging for debugging and monitoring
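The "multiple fallback strategies" for JSON parsing can be sketched generically: try the raw response, then a fenced code block, then the outermost braces. This is an illustrative helper under assumed behavior, not the package's actual implementation; `extract_json` is a hypothetical name.

```python
import json
import re

def extract_json(raw: str):
    """Try progressively looser strategies to pull JSON out of an LLM reply."""
    # 1. The whole response is valid JSON.
    try:
        return json.loads(raw)
    except json.JSONDecodeError:
        pass
    # 2. JSON wrapped in a markdown code fence.
    fence = re.search(r"```(?:json)?\s*(.*?)\s*```", raw, re.DOTALL)
    if fence:
        try:
            return json.loads(fence.group(1))
        except json.JSONDecodeError:
            pass
    # 3. The span between the outermost braces.
    start, end = raw.find("{"), raw.rfind("}")
    if start != -1 and end > start:
        try:
            return json.loads(raw[start:end + 1])
        except json.JSONDecodeError:
            pass
    return None  # caller decides how to handle an unparseable reply
```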

Installation

From PyPI

pip install nimble-llm-caller

Development Installation

# Clone the repository
git clone https://github.com/fredzannarbor/nimble-llm-caller.git
cd nimble-llm-caller

# Install in development mode
pip install -e .

# Install with development dependencies
pip install -e .[dev]

# Run setup script
python setup_dev.py setup

Verify Installation

# Run the test CLI
python examples/cli_test.py

# Run specific tests
python examples/cli_test.py --test install

Quick Start

1. Set up API Keys

export OPENAI_API_KEY="your-openai-key"
export ANTHROPIC_API_KEY="your-anthropic-key"
export GOOGLE_API_KEY="your-google-key"

2. Basic Usage

from nimble_llm_caller import LLMContentGenerator

# Initialize with your prompts file
generator = LLMContentGenerator("examples/sample_prompts.json")

# Simple single prompt call
result = generator.call_single(
    prompt_key="summarize_text",
    model="gpt-4o",
    substitutions={"text": "Your text here"}
)

print(f"Result: {result.content}")

3. Batch Processing

# Batch processing multiple prompts
results = generator.call_batch(
    prompt_keys=["summarize_text", "extract_keywords", "generate_title"],
    models=["gpt-4o", "claude-3-sonnet"],
    shared_substitutions={"content": "Your content here"}
)

print(f"Success rate: {results.success_rate:.1f}%")

4. Document Assembly

# Assemble results into a document
document = generator.assemble_document(
    results, 
    format="markdown",
    output_filename="report.md"
)

print(f"Document created: {document.word_count} words")

Configuration

Set your API keys in environment variables:

export OPENAI_API_KEY="your-openai-key"
export ANTHROPIC_API_KEY="your-anthropic-key"
export GOOGLE_API_KEY="your-google-key"
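A quick way to fail fast on a missing key is to check the environment before making any calls. This is a minimal sketch; `missing_api_keys` is a hypothetical helper, not part of the package.

```python
import os

REQUIRED_KEYS = ["OPENAI_API_KEY", "ANTHROPIC_API_KEY", "GOOGLE_API_KEY"]

def missing_api_keys(required=REQUIRED_KEYS):
    """Return the names of any required API keys not set (or empty) in the environment."""
    return [name for name in required if not os.environ.get(name)]

missing = missing_api_keys()
if missing:
    print(f"Warning: missing API keys: {', '.join(missing)}")
```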

Prompt Format

Prompts are stored in JSON files with this structure:

{
  "prompt_keys": ["summarize_text", "extract_keywords"],
  "summarize_text": {
    "messages": [
      {
        "role": "system",
        "content": "You are a professional summarizer."
      },
      {
        "role": "user", 
        "content": "Summarize this text: {text}"
      }
    ],
    "params": {
      "temperature": 0.3,
      "max_tokens": 1000
    }
  }
}
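Variable substitution replaces `{placeholders}` in each message's content with the values you pass in. A minimal sketch of how a template like the one above might be rendered, assuming Python `str.format`-style placeholders (`render_prompt` is a hypothetical helper, not the package API):

```python
def render_prompt(prompt: dict, substitutions: dict) -> list:
    """Fill {placeholders} in each message's content with the given values."""
    return [
        {"role": m["role"], "content": m["content"].format(**substitutions)}
        for m in prompt["messages"]
    ]

prompt = {
    "messages": [
        {"role": "system", "content": "You are a professional summarizer."},
        {"role": "user", "content": "Summarize this text: {text}"},
    ],
    "params": {"temperature": 0.3, "max_tokens": 1000},
}

messages = render_prompt(prompt, {"text": "Some article body"})
# messages[1]["content"] == "Summarize this text: Some article body"
```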

Advanced Usage

See the documentation for advanced features including:

  • Custom retry strategies
  • Document templates
  • Reprompting workflows
  • Error handling
  • Performance optimization
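Exponential backoff with jitter, mentioned under Features, generally looks like the following. This is a generic sketch of the technique, not the package's internals; `call_with_backoff` is a hypothetical name.

```python
import random
import time

def call_with_backoff(fn, max_attempts=5, base_delay=1.0, max_delay=30.0):
    """Retry fn() on exception, doubling the delay cap each attempt with full jitter."""
    for attempt in range(max_attempts):
        try:
            return fn()
        except Exception:
            if attempt == max_attempts - 1:
                raise  # out of attempts; surface the last error
            # Full jitter: sleep a random amount up to the exponential cap.
            delay = min(max_delay, base_delay * (2 ** attempt))
            time.sleep(random.uniform(0, delay))
```

Jitter spreads out retries from concurrent callers so they do not hit a rate-limited API in lockstep.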

Download files

Download the file for your platform.

Source Distribution

nimble_llm_caller-0.2.1.tar.gz (78.0 kB)


Built Distribution


nimble_llm_caller-0.2.1-py3-none-any.whl (83.7 kB)


File details

Details for the file nimble_llm_caller-0.2.1.tar.gz.

File metadata

  • Download URL: nimble_llm_caller-0.2.1.tar.gz
  • Size: 78.0 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.1.0 CPython/3.12.3

File hashes

Hashes for nimble_llm_caller-0.2.1.tar.gz:

  • SHA256: f8799ceb748b647baff4c2d3be1ccb98d88ecfc78e2ca1cda5abdebf8a713f1f
  • MD5: d2cc78dd6785a9312d0e2f56d3aa22ac
  • BLAKE2b-256: 185f50ec6bbd57750c8c37faeef826a2b9d13496a01d38b0e07797fd6b870225


File details

Details for the file nimble_llm_caller-0.2.1-py3-none-any.whl.

File hashes

Hashes for nimble_llm_caller-0.2.1-py3-none-any.whl:

  • SHA256: e1e955506e8f3e98d1d17a830c4fd3dd521fb04095ac3f938a75d9df69359874
  • MD5: 218f441dbda2edf9d2bc9541bd708934
  • BLAKE2b-256: 41c7abdcabb6c0572e24c07cae4871ce2ad7e4e2fbcaf64b059511f7740651cc

