model-compare evaluates AI models side by side on user-described tasks, rating accuracy, creativity, and efficiency to guide model choice.
Project description
Model Compare Package
A new package that helps users compare and evaluate different AI language models by analyzing their performance and capabilities.
Overview
This package takes user-provided text describing a specific task or scenario and returns a structured comparison of how different models, such as Gemini 3 Flash and Claude Code, would handle it. The comparison is an objective, side-by-side evaluation against criteria such as accuracy, creativity, and efficiency, helping users make an informed decision about which model fits their needs.
Installation
pip install model_compare
Usage
from model_compare import model_compare

response = model_compare(
    user_input="Compare Gemini 3 Flash and Claude Code on a text-to-image generation task",
    api_key="your_llm7_api_key",
    llm=None,  # optional; defaults to ChatLLM7
)
You can also pass your own BaseChatModel instance, e.g., to use a different LLM like OpenAI:
from langchain_openai import ChatOpenAI
from model_compare import model_compare

llm = ChatOpenAI()
response = model_compare(
    user_input="Compare Gemini 3 Flash and Claude Code on a text-to-image generation task",
    llm=llm,
)
Or to use Anthropic:
from langchain_anthropic import ChatAnthropic
from model_compare import model_compare

llm = ChatAnthropic()
response = model_compare(
    user_input="Compare Gemini 3 Flash and Claude Code on a text-to-image generation task",
    llm=llm,
)
Or to use Google Generative AI:
from langchain_google_genai import ChatGoogleGenerativeAI
from model_compare import model_compare

llm = ChatGoogleGenerativeAI()
response = model_compare(
    user_input="Compare Gemini 3 Flash and Claude Code on a text-to-image generation task",
    llm=llm,
)
You can also pass your own API key via the environment variable LLM7_API_KEY or directly as an argument.
Rate Limits
The default rate limits of the LLM7 free tier are sufficient for most uses of this package. If you need higher limits, pass your own API key via the LLM7_API_KEY environment variable or directly as an argument.
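For example, the key can be supplied through the environment instead of the api_key argument (a minimal sketch; the fallback-to-environment behavior is assumed from the description above, and the key value is a placeholder):

```python
import os

# Assumption: model_compare falls back to the LLM7_API_KEY environment
# variable when no api_key argument is passed.
os.environ["LLM7_API_KEY"] = "your_llm7_api_key"  # placeholder value

# Equivalent explicit form, as shown in the Usage section:
# response = model_compare(user_input="...", api_key="your_llm7_api_key")
```

Setting the variable before the call keeps the key out of your source code; in production you would typically export it in the shell or a secrets manager rather than assigning it in Python.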
Getting Started
You can get a free API key by registering at https://token.llm7.io/
Contributing
Please submit issues and pull requests to https://github.com/chigwell/model-compare
Author
Eugene Evstafev (eugene.evstafev-plus@email.com)
License
This package is licensed under the MIT License.
Download files
File details
Details for the file model_compare-2025.12.22083425.tar.gz.
File metadata
- Download URL: model_compare-2025.12.22083425.tar.gz
- Upload date:
- Size: 4.8 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.12.1
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | bcb1f614301d6ba537824d40184ecdfb86307e14066fdaa7af125ba444ecd580 |
| MD5 | c6bed389224ab49d08601f00a7dc3123 |
| BLAKE2b-256 | 3b7d1b3b88aac860b1aa4ac37c18d113c252b5f9eef369e452ed39b4afc6bd7d |
File details
Details for the file model_compare-2025.12.22083425-py3-none-any.whl.
File metadata
- Download URL: model_compare-2025.12.22083425-py3-none-any.whl
- Upload date:
- Size: 5.5 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.12.1
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | 7de08b3df58d4af35c7d0d416fcc15abb3b8d38a1cec870a7cf44cceb306729e |
| MD5 | 59cf3d5d574424646d0e075157ac67b4 |
| BLAKE2b-256 | 67c509d60a4a6910041f114555650ec6d245d249d25fef2b883a06f07e560945 |