
model-compare evaluates AI models side‑by‑side based on user tasks, rating accuracy, creativity, and efficiency to guide model choice.

Project description

Model Compare Package


A Python package that helps users compare and evaluate AI language models by analyzing their performance and capabilities.

Overview

This package takes user-provided text input describing specific tasks or scenarios and returns a structured comparison of how different models, like Gemini 3 Flash and Claude Code, would handle those tasks. It focuses on providing objective, side-by-side evaluations based on criteria such as accuracy, creativity, and efficiency, helping users make informed decisions about which model to use for their specific needs.

Installation

pip install model_compare

Usage

from model_compare import model_compare

response = model_compare(
    user_input="Compare Gemini 3 Flash and Claude Code on a text-to-image generation task",
    api_key="your_llm7_api_key",
    llm=None  # optional, defaults to ChatLLM7
)

You can also pass your own BaseChatModel instance, e.g., to use a different LLM like OpenAI:

from langchain_openai import ChatOpenAI
from model_compare import model_compare

llm = ChatOpenAI()
response = model_compare(
    user_input="Compare Gemini 3 Flash and Claude Code on a text-to-image generation task",
    llm=llm
)

Or to use Anthropic:

from langchain_anthropic import ChatAnthropic
from model_compare import model_compare

llm = ChatAnthropic(model="claude-3-5-sonnet-latest")  # ChatAnthropic requires a model name
response = model_compare(
    user_input="Compare Gemini 3 Flash and Claude Code on a text-to-image generation task",
    llm=llm
)

Or to use Google Generative AI:

from langchain_google_genai import ChatGoogleGenerativeAI
from model_compare import model_compare

llm = ChatGoogleGenerativeAI(model="gemini-1.5-flash")  # ChatGoogleGenerativeAI requires a model name
response = model_compare(
    user_input="Compare Gemini 3 Flash and Claude Code on a text-to-image generation task",
    llm=llm
)

You can also supply your own API key, either via the environment variable LLM7_API_KEY or directly as the api_key argument.
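For example, the environment-variable route can be set up before the first call. A minimal sketch; the key value below is a placeholder:

```python
import os

# Set the key before calling model_compare (env var name from this README).
os.environ["LLM7_API_KEY"] = "your_llm7_api_key"  # placeholder value

# Equivalent shell form:
#   export LLM7_API_KEY=your_llm7_api_key
```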

Rate Limits

The default rate limits of the LLM7 free tier are sufficient for most uses of this package. If you need higher limits, supply your own API key as described above.

Getting Started

You can get a free API key by registering at https://token.llm7.io/

Contributing

Please submit issues and pull requests to https://github.com/chigwell/model-compare

Author

Eugene Evstafev (eugene.evstafev-plus@email.com)

License

This package is licensed under the MIT License.



Download files

Download the file for your platform.

Source Distribution

model_compare-2025.12.22083425.tar.gz (4.8 kB)


Built Distribution


model_compare-2025.12.22083425-py3-none-any.whl (5.5 kB)


File details

Details for the file model_compare-2025.12.22083425.tar.gz.

File metadata

File hashes

Hashes for model_compare-2025.12.22083425.tar.gz

Algorithm    Hash digest
SHA256       bcb1f614301d6ba537824d40184ecdfb86307e14066fdaa7af125ba444ecd580
MD5          c6bed389224ab49d08601f00a7dc3123
BLAKE2b-256  3b7d1b3b88aac860b1aa4ac37c18d113c252b5f9eef369e452ed39b4afc6bd7d
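If you want to check a downloaded file against the digests above, Python's standard hashlib suffices. A minimal sketch; the filename in the comment is the source distribution listed above, so adjust the path to wherever you saved it:

```python
import hashlib

def sha256_of(path: str) -> str:
    """Compute the SHA-256 hex digest of a file, reading it in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

# Compare against the published SHA256 before installing, e.g.:
# sha256_of("model_compare-2025.12.22083425.tar.gz")
```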


File details

Details for the file model_compare-2025.12.22083425-py3-none-any.whl.

File metadata

File hashes

Hashes for model_compare-2025.12.22083425-py3-none-any.whl

Algorithm    Hash digest
SHA256       7de08b3df58d4af35c7d0d416fcc15abb3b8d38a1cec870a7cf44cceb306729e
MD5          59cf3d5d574424646d0e075157ac67b4
BLAKE2b-256  67c509d60a4a6910041f114555650ec6d245d249d25fef2b883a06f07e560945

