
MODIHUB: A Unified Interface for Diverse LLMs

MODIHUB simplifies the way you interact with multiple Large Language Models (LLMs) by offering a streamlined, consistent interface. It abstracts the complexities of provider-specific APIs and configurations, making it easy to switch between models across different platforms.

🔑 Key Features

  • Unified API: Seamlessly interact with models from OpenAI, Gemini, Anthropic, Ollama, Groq, and more using a consistent interface.
  • Model Discovery: Effortlessly list and explore available models from each provider.
  • Multimodal Support: Work with text, image, and mixed-modality prompts where supported.
  • Built-in Evaluation Tools: Evaluate model performance with utilities for perplexity, lexical diversity, and more.
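Lexical diversity is commonly measured as the type-token ratio (unique words divided by total words). Whether MODIHUB's LexicalDiversity metric uses exactly this formula is an assumption, but a minimal, library-independent sketch of the idea looks like:

```python
def type_token_ratio(text: str) -> float:
    """Type-token ratio: unique word count divided by total word count."""
    tokens = text.lower().split()
    if not tokens:
        return 0.0
    return len(set(tokens)) / len(tokens)

# Repeated words lower the score; all-unique text scores 1.0.
print(type_token_ratio("the cat sat on the mat"))
```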

Installation

pip install -U modihub

Usage Examples

1. Listing Available Models

from modihub.llm import LLM

available_models = LLM.available_models()
for client, models in available_models.group_by("client"):
    print(f"{client}:")
    for model in models:
        print(f"  - {model.name}")

2. Text Generation

from modihub.llm import LLM
from dotenv import find_dotenv, load_dotenv

load_dotenv(find_dotenv()) # Loads API keys from .env file

# Replace with your desired model
llm = LLM.create("gpt-4o-mini")
# Generate text
response = llm("Tell me a joke about AI.")
print(response)

3. Multimodal Input (Image Description)

from PIL import Image
from modihub.llm import LLM
from dotenv import find_dotenv, load_dotenv

load_dotenv(find_dotenv())
# Replace with your desired model
llm = LLM.create("models/gemini-1.5-flash-8b")
# Load image
image = Image.open("image.png")  # Replace with the path to your image
text = "Describe the following image"
# Create a multimodal prompt (text + image)
prompt = [text, image]
response = llm(prompt)
print(response)

4. Model Evaluation (Pointwise Metrics)

from dotenv import find_dotenv, load_dotenv
from modihub.metrics import Perplexity, LexicalDiversity
from modihub.eval import Evaluator

load_dotenv(find_dotenv())

prompts = [
    "What are LLMs?",
    "Explain AI",
    "What is the meaning of life?",
]
models = [
    "gpt-4o-mini",
    "llama3.1:latest",
    "models/gemini-1.5-flash-latest"
]
metrics = [Perplexity(), LexicalDiversity()]

evaluator = Evaluator(models, metrics)
results = {prompt: evaluator.evaluate(prompt) for prompt in prompts}
for prompt, result in results.items():
    print(f"Prompt: {prompt}")
    for model, scores in zip(models, result):  # "scores" avoids shadowing the metrics list above
        print(f"{model}: {scores}")
    print()

Configuration

  • API Keys: Set your API keys for each LLM provider as environment variables (e.g., OPENAI_API_KEY, GEMINI_API_KEY, ANTHROPIC_API_KEY, or GROQ_API_KEY). A .env file in your project directory is a good place to store these.
  • Supported Clients: MODIHUB currently supports OpenAI, Gemini, Anthropic, Ollama, and Groq models. You can add support for additional clients by implementing the LLMClient interface.
  • System Instructions: Use the system_instruction parameter when creating an LLM instance to provide context or instructions to the model. This is supported by all clients.
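Putting the configuration points together: with provider keys in a .env file, a system instruction can be passed at creation time. The keyword name system_instruction comes from the section above; the exact call signature beyond that is an assumption, so treat this as a sketch rather than a definitive reference.

```python
from dotenv import find_dotenv, load_dotenv
from modihub.llm import LLM

# .env contains e.g.:
#   OPENAI_API_KEY=sk-...
load_dotenv(find_dotenv())

# Provide persistent context via the system_instruction parameter
llm = LLM.create(
    "gpt-4o-mini",
    system_instruction="You are a concise assistant. Answer in one sentence.",
)
print(llm("What is perplexity?"))
```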

Contributing

Contributions are welcome! Please feel free to submit pull requests or open issues.

License

This project is licensed under the MIT License. See the LICENSE file for the full license text.
