Free LLM Toolbox 🚀
A Python package that provides easy-to-use utilities for working with various Language Models (LLMs) and Vision Models. 🎯 And everything is free: the library runs on the generous free tiers of several AI platforms.
Features
- Text generation with multiple LLM providers support
- Image analysis and description capabilities
- Support for models like Llama, Groq, and Google's Gemini
- Streaming responses
- Tool integration support
- JSON output formatting
- Customizable system prompts
Installation 💻
```bash
uv pip install free-llm-toolbox
```
Configuration ⚙️
Before using the library, you need to configure your API keys in a `.env` file:

```env
GROQ_API_KEY=your_groq_key
GITHUB_TOKEN=your_github_token
GOOGLE_API_KEY=your_google_key
SAMBANOVA_API_KEY=your_sambanova_key
CEREBRAS_API_KEY=your_cerebras_key
```
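If your application does not load the `.env` file automatically, a minimal sketch using the `python-dotenv` package (an assumption; the library may also read the process environment directly) looks like this:

```python
# Load keys from .env into the process environment before creating
# any model session. Requires: uv pip install python-dotenv
from dotenv import load_dotenv

load_dotenv()  # reads .env from the current working directory

from free_llm_toolbox import LanguageModel

session = LanguageModel(model_name="gemini-2.0-flash", provider="google")
```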
Quick Start
Text Generation
```python
from free_llm_toolbox import LanguageModel

# Initialize a session with your preferred model
session = LanguageModel(
    model_name="gemini-2.0-flash",
    provider="google",
    temperature=0.7,
)

# Generate a response
response = session.answer("What is the capital of France?")
print(response)
```
Image Analysis
```python
from free_llm_toolbox import ImageAnalyzerAgent

analyzer = ImageAnalyzerAgent()
description = analyzer.describe(
    "path/to/image.jpg",
    prompt="Describe the image",
    vllm_provider="groq",
    vllm_name="llama-3.2-90b-vision-preview",
)
print(description)
```
Usage 🎮
Text Models 📚
```python
from free_llm_toolbox import LanguageModel

# Initialize a session with your preferred model
session = LanguageModel(
    model_name="llama-3-70b",
    provider="groq",
    temperature=0.7,
    top_k=45,
    top_p=0.95,
)

# Simple text generation
response = session.answer("What is the capital of France?")

# JSON-formatted response with Pydantic validation
from pydantic import BaseModel

class LocationInfo(BaseModel):
    city: str
    country: str
    description: str

response = session.answer(
    "What is the capital of France?",
    json_formatting=True,
    pydantic_object=LocationInfo,
)
```
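The exact return type for JSON-formatted answers is not pinned down here; assuming it is either a validated `LocationInfo` instance or a plain dict matching the schema, a defensive way to read the fields is:

```python
# Hypothetical handling; check the library docs for the actual return type.
# Pydantic v2's model_validate accepts a dict and returns a validated model.
info = LocationInfo.model_validate(response) if isinstance(response, dict) else response
print(f"{info.city}, {info.country}: {info.description}")
```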
```python
# Using custom tools
def get_weather(city: str) -> str:
    """Toy tool implementation; swap in a real weather lookup."""
    return f"Sunny, 22°C in {city}"

tools = [
    {
        "name": "weather",
        "description": "Get current weather",
        "function": get_weather,
    }
]
response, tool_calls = session.answer(
    "What's the weather in Paris?",
    tool_list=tools,
)

# Streaming responses
for chunk in session.answer(
    "Tell me a long story.",
    stream=True,
):
    print(chunk, end="", flush=True)
```
Vision Models 👁️
```python
from free_llm_toolbox import ImageAnalyzerAgent

# Initialize the agent
analyzer = ImageAnalyzerAgent()

# Analyze an image
description = analyzer.describe(
    image_path="path/to/image.jpg",
    prompt="Describe this image in detail",
    vllm_provider="groq",
)
print(description)
```
Available Models 📊
Note: This list is not exhaustive. The library supports any new model ID released by these providers - you just need to get the correct model ID from your provider's documentation.
Text Models
| Provider | Model | LLM Provider ID | Model ID | Price | Rate Limit (per min) | Context Window | Speed |
|---|---|---|---|---|---|---|---|
| Google | Gemini Pro Exp | google | gemini-2.0-pro-exp-02-05 | Free | 60 | 32,768 | Ultra Fast |
| Google | Gemini Flash | google | gemini-2.0-flash | Free | 60 | 32,768 | Ultra Fast |
| Google | Gemini Flash Thinking | google | gemini-2.0-flash-thinking-exp-01-21 | Free | 60 | 32,768 | Ultra Fast |
| Google | Gemini Flash Lite | google | gemini-2.0-flash-lite-preview-02-05 | Free | 60 | 32,768 | Ultra Fast |
| GitHub | O3 Mini | github | o3-mini | Free | 50 | 8,192 | Fast |
| GitHub | GPT-4o | github | gpt-4o | Free | 50 | 8,192 | Fast |
| GitHub | GPT-4o Mini | github | gpt-4o-mini | Free | 50 | 8,192 | Fast |
| GitHub | O1 Mini | github | o1-mini | Free | 50 | 8,192 | Fast |
| GitHub | O1 Preview | github | o1-preview | Free | 50 | 8,192 | Fast |
| GitHub | Meta Llama 3.1 405B | github | meta-Llama-3.1-405B-Instruct | Free | 50 | 8,192 | Fast |
| GitHub | DeepSeek R1 | github | DeepSeek-R1 | Free | 50 | 8,192 | Fast |
| Groq | DeepSeek R1 Distill Llama 70B | groq | deepseek-r1-distill-llama-70b | Free | 100 | 131,072 | Ultra Fast |
| Groq | Llama 3.3 70B Versatile | groq | llama-3.3-70b-versatile | Free | 100 | 131,072 | Ultra Fast |
| Groq | Llama 3.1 8B Instant | groq | llama-3.1-8b-instant | Free | 100 | 131,072 | Ultra Fast |
| Groq | Llama 3.2 3B Preview | groq | llama-3.2-3b-preview | Free | 100 | 131,072 | Ultra Fast |
| SambaNova | Llama3 405B | sambanova | llama3-405b | Free | 60 | 8,000 | Fast |
Vision Models
| Provider | Model | Vision Provider ID | Model ID | Price | Rate Limit (per min) | Speed |
|---|---|---|---|---|---|---|
| Google | Gemini Vision Exp | gemini | gemini-exp-1206 | Free | 60 | Ultra Fast |
| Google | Gemini Vision Flash | gemini | gemini-2.0-flash | Free | 60 | Ultra Fast |
| GitHub | GPT-4o Vision | github | gpt-4o | Free | 50 | Fast |
| GitHub | GPT-4o Mini Vision | github | gpt-4o-mini | Free | 50 | Fast |
Usage Example with Provider ID and Model ID
```python
from free_llm_toolbox import LanguageModel

# Initialize a session with specific provider and model IDs
session = LanguageModel(
    model_name="llama-3.3-70b-versatile",  # Model ID from the table above
    provider="groq",  # Provider ID from the table above
    temperature=0.7,
)
```
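The same pattern applies to vision models; a sketch using the Vision Provider ID and Model ID from the table above, with the `describe` keywords shown in the earlier examples:

```python
from free_llm_toolbox import ImageAnalyzerAgent

analyzer = ImageAnalyzerAgent()
description = analyzer.describe(
    image_path="path/to/image.jpg",
    prompt="Describe this image in detail",
    vllm_provider="gemini",        # Vision Provider ID from the table above
    vllm_name="gemini-2.0-flash",  # Model ID from the table above
)
print(description)
```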
Requirements
- Python 3.8 or higher
- Required dependencies will be automatically installed
Key Features ⭐
- Simple and intuitive session-based interface
- Support for both vision and text models
- Simple configuration with .env file
- Automatic context management
- Tool support for compatible models
- JSON output formatting with Pydantic validation
- Response streaming support
- Smart caching system
- CPU and GPU support
Contributing 🤝
Contributions are welcome! Feel free to:
- Fork the project
- Create your feature branch
- Commit your changes
- Push to the branch
- Open a Pull Request
License 📄
This project is licensed under the MIT License. See the LICENSE file for details.