MODIHUB: A Unified Interface for Diverse LLMs

MODIHUB simplifies the way you interact with multiple Large Language Models (LLMs) by offering a streamlined, consistent interface. It abstracts the complexities of provider-specific APIs and configurations, making it easy to switch between models across different platforms.

🔑 Key Features

  • Unified API: Seamlessly interact with models from OpenAI, Gemini, Anthropic, Ollama, Groq, and more using a consistent interface.
  • Model Discovery: Effortlessly list and explore available models from each provider.
  • Multimodal Support: Work with text, image, and mixed-modality prompts where supported.
  • Built-in Evaluation Tools: Evaluate model performance with utilities for perplexity, lexical diversity, and more.

Installation

pip install -U modihub

Usage Examples

1. Listing Available Models

from modihub.llm import LLM

available_models = LLM.available_models()
for client, models in available_models.group_by("client"):
    print(f"{client}:")
    for model in models:
        print(f"  - {model.name}")

2. Text Generation

from modihub.llm import LLM
from dotenv import find_dotenv, load_dotenv

load_dotenv(find_dotenv()) # Loads API keys from .env file

# Replace with your desired model
llm = LLM.create("gpt-4o-mini")
# Generate text
response = llm("Tell me a joke about AI.")
print(response)

3. Multimodal Input (Image Description)

from PIL import Image
from modihub.llm import LLM
from dotenv import find_dotenv, load_dotenv

load_dotenv(find_dotenv())
# Replace with your desired model
llm = LLM.create("models/gemini-1.5-flash-8b")
# Load image
image = Image.open("image.png")  # Replace with the path to your image
text = "Describe the following image"
# create multimodal prompt
prompt = [text, image]
response = llm(prompt)
print(response)

4. Model Evaluation (Pointwise Metrics)

from dotenv import find_dotenv, load_dotenv
from modihub.metrics import Perplexity, LexicalDiversity
from modihub.eval import Evaluator

load_dotenv(find_dotenv())

prompts = [
    "What are LLMs?",
    "Explain AI",
    "What is the meaning of life?",
]
models = [
    "gpt-4o-mini",
    "llama3.1:latest",
    "models/gemini-1.5-flash-latest"
]
metrics = [Perplexity(), LexicalDiversity()]

evaluator = Evaluator(models, metrics)
results = {prompt: evaluator.evaluate(prompt) for prompt in prompts}
for prompt, result in results.items():
    print(f"Prompt: {prompt}")
    for model, scores in zip(models, result):  # avoid shadowing the `metrics` list above
        print(f"{model}: {scores}")
    print()
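
The Perplexity and LexicalDiversity implementations above are part of modihub; as a rough, independent illustration of what such pointwise metrics typically compute (not modihub's actual code), lexical diversity is often measured as a type-token ratio, and perplexity as the exponentiated negative mean log-probability of the generated tokens:

```python
import math

def type_token_ratio(text: str) -> float:
    # Lexical diversity: unique words divided by total words.
    tokens = text.lower().split()
    return len(set(tokens)) / len(tokens) if tokens else 0.0

def perplexity(token_logprobs: list[float]) -> float:
    # Perplexity: exp of the negative mean token log-probability.
    return math.exp(-sum(token_logprobs) / len(token_logprobs))

print(type_token_ratio("the cat sat on the mat"))  # 5 unique / 6 total ≈ 0.833
print(perplexity([-0.1, -0.2, -0.3]))              # exp(0.2) ≈ 1.221
```

Lower perplexity indicates the model found its own output more predictable; a higher type-token ratio indicates less repetitive wording.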

Configuration

  • API Keys: Set your API keys for each LLM provider as environment variables (e.g., OPENAI_API_KEY, GEMINI_API_KEY, ANTHROPIC_API_KEY, or GROQ_API_KEY). A .env file in your project directory is a good place to store these.
  • Supported Clients: MODIHUB currently supports OpenAI, Gemini, Anthropic, Ollama, and Groq models. You can add support for additional clients by implementing the LLMClient interface.
  • System Instructions: Use the system_instruction parameter when creating an LLM instance to provide context or instructions to the model. This is supported by all clients.

Contributing

Contributions are welcome! Please feel free to submit pull requests or open issues.

License

This project is licensed under the MIT License.
