LLMPromptNexus

A unified framework for interacting with Large Language Models (LLMs) through a standardized interface. LLMPromptNexus simplifies working with multiple LLM providers while providing powerful templating and batching capabilities.

🚀 Quick Start

pip install llmprompt-nexus

from llmprompt_nexus import NexusManager

# Initialize with your API keys
llm = NexusManager({
    "openai": "your-openai-key",
    "perplexity": "your-perplexity-key"
})

# Simple translation example
result = await llm.run_with_model(
    input_data={
        "text": "Hello world",
        "source_language": "English",
        "target_language": "Spanish"
    },
    model_id="sonar-pro",
    template_name="translation"
)

🌟 Key Features

  • Multiple LLM Providers: Seamlessly work with OpenAI, Perplexity, and more through a single interface
  • Smart Template System: Pre-built and custom templates for common NLP tasks
  • Efficient Batch Processing: Handle large-scale operations with automatic rate limiting
  • Built-in Safety: Automatic retries, rate limiting, and error handling
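
The retry-and-backoff pattern behind these safety features can be sketched in plain Python. This is an illustration of the general technique, not the library's actual internals; `with_retries` and `flaky` are hypothetical names:

```python
import time

def with_retries(call, attempts=3, base_delay=0.01):
    """Retry a flaky call with exponential backoff between attempts."""
    for attempt in range(attempts):
        try:
            return call()
        except Exception:
            if attempt == attempts - 1:
                raise  # out of retries: surface the error
            time.sleep(base_delay * 2 ** attempt)

# A stand-in for an API call that fails twice, then succeeds.
state = {"calls": 0}

def flaky():
    state["calls"] += 1
    if state["calls"] < 3:
        raise RuntimeError("transient failure")
    return "ok"

result = with_retries(flaky)  # succeeds on the third attempt
```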

📦 Installation

Using pip

pip install llmprompt-nexus

From source

git clone https://github.com/EEstevanell/llmprompt-nexus.git
cd llmprompt-nexus
pip install -e .

🔑 Configuration

  1. Set your API keys as environment variables:
export OPENAI_API_KEY="your-key"
export PERPLEXITY_API_KEY="your-key"
  2. Or provide them during initialization:
llm = NexusManager({
    "openai": "your-openai-key",
    "perplexity": "your-perplexity-key"
})
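
When keys live in the environment, the provider-to-key mapping that NexusManager expects can be assembled with the standard library. A minimal sketch; the `collect_api_keys` helper is hypothetical, not part of the package:

```python
import os

def collect_api_keys(env=os.environ):
    """Build the provider->key mapping, skipping providers whose
    environment variable is unset."""
    names = {"openai": "OPENAI_API_KEY", "perplexity": "PERPLEXITY_API_KEY"}
    return {provider: env[var] for provider, var in names.items() if var in env}
```

The result can be passed straight to `NexusManager(collect_api_keys())`.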

📘 Basic Usage

Using Built-in Templates

# Simple translation
result = await llm.run_with_model(
    input_data={
        "text": "Hello world",
        "source_language": "English",
        "target_language": "Spanish"
    },
    model_id="sonar-pro",
    template_name="translation"
)

# Text classification
result = await llm.run_with_model(
    input_data={
        "text": "I love this product!",
        "categories": ["positive", "negative", "neutral"]
    },
    model_id="sonar-pro",
    template_name="classification"
)

Using Custom Templates

Templates can be defined in two ways:

  1. Using YAML files:
templates:
  technical_qa:
    template: |
      Context: {context}
      Question: {question}
      Provide a technical answer based on the context.
    description: "Technical Q&A template"
    system_message: "You are a technical expert."
    required_variables: ["context", "question"]
  2. Using Python dictionaries:
custom_template = {
    "template": """
    Analyze the following {language} code:
    
    {code}
    
    Provide:
    - Code quality score (0-10)
    - Best practices followed
    - Suggested improvements
    """,
    "name": "code_review",  # Optional
    "description": "Template for code review",  # Optional
    "system_message": "You are an expert code reviewer.",  # Optional
    "required_variables": ["language", "code"]  # Optional
}

# Use the custom template
result = await llm.run_with_model(
    input_data={
        "language": "Python",
        "code": "def hello(): print('world')"
    },
    model_id="sonar-pro",
    template_config=custom_template  # Pass template directly
)
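
The substitution step that fills `{variables}` from `input_data` can be approximated with Python's `str.format`. This is a sketch of the idea; `render_template` is an illustrative helper, not the library's API:

```python
def render_template(template_cfg, input_data):
    """Check required_variables, then fill {placeholders} in the
    template text from input_data."""
    required = template_cfg.get("required_variables", [])
    missing = [v for v in required if v not in input_data]
    if missing:
        raise ValueError(f"missing template variables: {missing}")
    return template_cfg["template"].format(**input_data)

prompt = render_template(
    {"template": "Translate {text} to {target_language}.",
     "required_variables": ["text", "target_language"]},
    {"text": "Hello", "target_language": "Spanish"},
)
```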

Batch Processing

texts = ["First text", "Second text", "Third text"]
batch_inputs = [
    {
        "text": text,
        "source_language": "English",
        "target_language": "Spanish"
    }
    for text in texts
]

results = await llm.run_batch_with_model(
    input_data=batch_inputs,
    model_id="sonar-pro",
    template_name="translation"
)
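
The concurrency cap that batch processing relies on can be sketched with an asyncio semaphore. Illustrative only: `fake_translate` stands in for a real model call, and the library's internals may differ:

```python
import asyncio

async def run_batch(items, worker, max_concurrency=5):
    """Run worker over all items, allowing at most max_concurrency
    calls in flight at once."""
    sem = asyncio.Semaphore(max_concurrency)

    async def guarded(item):
        async with sem:
            return await worker(item)

    return await asyncio.gather(*(guarded(i) for i in items))

async def fake_translate(item):
    return item["text"].upper()  # stand-in for a real model call

results = asyncio.run(run_batch(
    [{"text": "first"}, {"text": "second"}], fake_translate))
```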

🎯 Built-in Templates

LLMPromptNexus comes with several built-in templates:

  • Translation: Convert text between languages
  • Classification: Categorize text into predefined groups
  • Intent Detection: Identify user intentions from text
  • Question Answering: Generate answers based on context
  • Summarization: Create concise text summaries

⚙️ Template Schema

Templates follow this schema:

templates:
  template_name:
    template: |  # Required: The actual template text with {variables}
      Your template content here with {variable1} and {variable2}
    description: "Template description"  # Optional
    system_message: "System prompt for the LLM"  # Optional
    required_variables: ["variable1", "variable2"]  # Optional

Required fields:

  • template: The template text with variables in {braces}

Optional fields:

  • name: Template identifier (auto-generated if using dictionary)
  • description: Brief description of the template's purpose
  • system_message: System prompt for the LLM
  • required_variables: List of required variable names
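
A default for required_variables can be derived by scanning the template text for `{placeholders}` with the standard library. An illustrative helper, not part of the package API:

```python
import string

def template_variables(template_text):
    """Return the sorted set of {variable} names the template references."""
    return sorted({
        field for _, field, _, _ in string.Formatter().parse(template_text)
        if field
    })

vars_found = template_variables("Context: {context}\nQuestion: {question}")
```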

📚 Documentation

For detailed documentation, visit our documentation site.

🤝 Contributing

Contributions are welcome! Please read our Contributing Guide for details on how to submit pull requests, report issues, and contribute to the project.

📄 License

This project is licensed under the Creative Commons Attribution-NonCommercial 4.0 International License (CC BY-NC 4.0). This means you are free to:

  • Share and redistribute the material in any medium or format
  • Adapt, remix, and transform the material

Under these conditions:

  • Attribution — You must give appropriate credit when using this work, especially in academic research
  • NonCommercial — You may not use the material for commercial purposes

For the full license text, see the LICENSE file.

Download files

Download the file for your platform.

Source Distribution

llmprompt_nexus-0.1.0.tar.gz (19.4 kB)


Built Distribution


llmprompt_nexus-0.1.0-py3-none-any.whl (30.9 kB)


File details

Details for the file llmprompt_nexus-0.1.0.tar.gz.

File metadata

  • Download URL: llmprompt_nexus-0.1.0.tar.gz
  • Upload date:
  • Size: 19.4 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: poetry/2.1.1 CPython/3.9.13 Linux/5.15.167.4-microsoft-standard-WSL2

File hashes

Hashes for llmprompt_nexus-0.1.0.tar.gz:

  • SHA256: 9733fdf92950666b42b80f89b61132c2eacd23f2028e6b911d1c90c77450716e
  • MD5: 4c18d4e865dd587797aaf94c260c447e
  • BLAKE2b-256: 5cab05ac36a192dea3c6d431bc3b6243c5c197dc821a9153f47d6649c5824622


File details

Details for the file llmprompt_nexus-0.1.0-py3-none-any.whl.

File metadata

  • Download URL: llmprompt_nexus-0.1.0-py3-none-any.whl
  • Upload date:
  • Size: 30.9 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: poetry/2.1.1 CPython/3.9.13 Linux/5.15.167.4-microsoft-standard-WSL2

File hashes

Hashes for llmprompt_nexus-0.1.0-py3-none-any.whl:

  • SHA256: f52b3ab8728bd74a0c0668ae22d4dfa4005c67cbe1edf00d157126a438d0f467
  • MD5: 0eb5093d27170ee2efee54f0bdd08a49
  • BLAKE2b-256: 2f8916faaf804272989460948fa5a8c06ab24a835a86146837d72db9c9e4b504

