LLMPromptNexus

A unified framework for interacting with Large Language Models (LLMs) through a standardized interface. LLMPromptNexus simplifies working with multiple LLM providers while providing powerful templating and batching capabilities.

🚀 Quick Start

pip install llmprompt-nexus

from llmprompt_nexus import NexusManager

# Initialize with your API keys
llm = NexusManager({
    "openai": "your-openai-key",
    "perplexity": "your-perplexity-key"
})

# Simple translation example. run_with_model is a coroutine,
# so await it from inside an async function.
result = await llm.run_with_model(
    input_data={
        "text": "Hello world",
        "source_language": "English",
        "target_language": "Spanish"
    },
    model_id="sonar-pro",
    template_name="translation"
)

🌟 Key Features

  • Multiple LLM Providers: Seamlessly work with OpenAI, Perplexity, and more through a single interface
  • Smart Template System: Pre-built and custom templates for common NLP tasks
  • Efficient Batch Processing: Handle large-scale operations with automatic rate limiting
  • Built-in Safety: Automatic retries, rate limiting, and error handling
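The automatic-retry behaviour listed above can be sketched with stdlib asyncio and exponential backoff; this helper is an illustration of the idea, not NexusManager's actual implementation:

```python
import asyncio
import random

async def with_retries(coro_factory, max_attempts=3, base_delay=0.01):
    """Retry an async call with exponential backoff (illustrative sketch)."""
    for attempt in range(1, max_attempts + 1):
        try:
            return await coro_factory()
        except Exception:
            if attempt == max_attempts:
                raise
            # Exponential backoff with a little jitter between attempts
            await asyncio.sleep(base_delay * 2 ** attempt + random.random() * base_delay)

calls = {"n": 0}

async def flaky():
    # Fails twice, then succeeds, to exercise the retry loop
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("transient error")
    return "ok"

result = asyncio.run(with_retries(lambda: flaky()))
print(result)  # ok
```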

📦 Installation

Using pip

pip install llmprompt-nexus

From source

git clone https://github.com/EEstevanell/llmprompt-nexus.git
cd llmprompt-nexus
pip install -e .

🔑 Configuration

  1. Set your API keys as environment variables:
export OPENAI_API_KEY="your-key"
export PERPLEXITY_API_KEY="your-key"
  2. Or provide them during initialization:
llm = NexusManager({
    "openai": "your-openai-key",
    "perplexity": "your-perplexity-key"
})

📘 Basic Usage

Using Built-in Templates

# Simple translation
result = await llm.run_with_model(
    input_data={
        "text": "Hello world",
        "source_language": "English",
        "target_language": "Spanish"
    },
    model_id="sonar-pro",
    template_name="translation"
)

# Text classification
result = await llm.run_with_model(
    input_data={
        "text": "I love this product!",
        "categories": ["positive", "negative", "neutral"]
    },
    model_id="sonar-pro",
    template_name="classification"
)

Using Custom Templates

Templates can be defined in two ways:

  1. Using YAML files:
templates:
  technical_qa:
    template: |
      Context: {context}
      Question: {question}
      Provide a technical answer based on the context.
    description: "Technical Q&A template"
    system_message: "You are a technical expert."
    required_variables: ["context", "question"]
  2. Using Python dictionaries:
custom_template = {
    "template": """
    Analyze the following {language} code:
    
    {code}
    
    Provide:
    - Code quality score (0-10)
    - Best practices followed
    - Suggested improvements
    """,
    "name": "code_review",  # Optional
    "description": "Template for code review",  # Optional
    "system_message": "You are an expert code reviewer.",  # Optional
    "required_variables": ["language", "code"]  # Optional
}

# Use the custom template
result = await llm.run_with_model(
    input_data={
        "language": "Python",
        "code": "def hello(): print('world')"
    },
    model_id="sonar-pro",
    template_config=custom_template  # Pass template directly
)
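Under the hood, a template like this is essentially a Python format string. A minimal sketch of rendering and required-variable checking (illustrative only, not the library's internal code):

```python
import string

def render_template(template: str, variables: dict, required=None) -> str:
    """Fill {placeholders} in a template, checking required variables first."""
    # Discover all placeholders present in the template text
    found = {name for _, name, _, _ in string.Formatter().parse(template) if name}
    missing = set(required or found) - variables.keys()
    if missing:
        raise ValueError(f"missing template variables: {sorted(missing)}")
    return template.format(**variables)

prompt = render_template(
    "Analyze the following {language} code:\n\n{code}",
    {"language": "Python", "code": "def hello(): print('world')"},
    required=["language", "code"],
)
print(prompt.splitlines()[0])  # Analyze the following Python code:
```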

Batch Processing

texts = ["First text", "Second text", "Third text"]
batch_inputs = [
    {
        "text": text,
        "source_language": "English",
        "target_language": "Spanish"
    }
    for text in texts
]

results = await llm.run_batch_with_model(
    input_data=batch_inputs,
    model_id="sonar-pro",
    template_name="translation"
)
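The rate-limited concurrency behind batch processing can be approximated with an `asyncio.Semaphore`; the sketch below shows the pattern with a stand-in for the model call, not `run_batch_with_model` itself:

```python
import asyncio

async def run_one(item: dict) -> str:
    # Stand-in for a single model call
    await asyncio.sleep(0)
    return item["text"].upper()

async def run_batch(items, max_concurrent=2):
    """Run items concurrently, at most max_concurrent at a time."""
    sem = asyncio.Semaphore(max_concurrent)

    async def guarded(item):
        async with sem:
            return await run_one(item)

    # gather preserves input order in its results
    return await asyncio.gather(*(guarded(i) for i in items))

results = asyncio.run(run_batch([{"text": "first"}, {"text": "second"}, {"text": "third"}]))
print(results)  # ['FIRST', 'SECOND', 'THIRD']
```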

🎯 Built-in Templates

LLMPromptNexus comes with several built-in templates:

  • Translation: Convert text between languages
  • Classification: Categorize text into predefined groups
  • Intent Detection: Identify user intentions from text
  • Question Answering: Generate answers based on context
  • Summarization: Create concise text summaries

⚙️ Template Schema

Templates follow this schema:

templates:
  template_name:
    template: |  # Required: The actual template text with {variables}
      Your template content here with {variable1} and {variable2}
    description: "Template description"  # Optional
    system_message: "System prompt for the LLM"  # Optional
    required_variables: ["variable1", "variable2"]  # Optional

Required fields:

  • template: The template text with variables in {braces}

Optional fields:

  • name: Template identifier (auto-generated if using dictionary)
  • description: Brief description of the template's purpose
  • system_message: System prompt for the LLM
  • required_variables: List of required variable names
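A template entry conforming to this schema can be checked with a few lines of Python. The field names follow the schema above; the validator itself is an illustrative sketch, not part of the library:

```python
def validate_template(config: dict) -> list:
    """Return a list of schema problems (an empty list means valid)."""
    errors = []
    # 'template' is the only required field
    if "template" not in config:
        errors.append("missing required field: template")
    for field in ("name", "description", "system_message"):
        if field in config and not isinstance(config[field], str):
            errors.append(f"{field} must be a string")
    rv = config.get("required_variables")
    if rv is not None and not all(isinstance(v, str) for v in rv):
        errors.append("required_variables must be a list of strings")
    return errors

ok = {"template": "Translate {text}", "required_variables": ["text"]}
print(validate_template(ok))  # []
print(validate_template({"description": "no template"}))  # ['missing required field: template']
```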

📚 Documentation

For detailed documentation, visit our documentation site.

🤝 Contributing

Contributions are welcome! Please read our Contributing Guide for details on how to submit pull requests, report issues, and contribute to the project.

📄 License

This project is licensed under the Creative Commons Attribution-NonCommercial 4.0 International License (CC BY-NC 4.0). This means you are free to:

  • Share and redistribute the material in any medium or format
  • Adapt, remix, and transform the material

Under these conditions:

  • Attribution — You must give appropriate credit when using this work, especially in academic research
  • NonCommercial — You may not use the material for commercial purposes

For the full license text, see the LICENSE file.
