# ModelForge

A Python library for managing LLM providers, authentication, and model selection with seamless LangChain integration.
## Installation

### Recommended: Virtual Environment
```bash
# Create and activate a virtual environment
python -m venv model-forge-env
source model-forge-env/bin/activate  # On Windows: model-forge-env\Scripts\activate

# Install the package
pip install model-forge-llm

# Verify the installation
modelforge --help
```
### Quick Install (System-wide)

```bash
pip install model-forge-llm
```
## Quick Start

### Option 1: GitHub Copilot via Device Authentication Flow
```bash
# Discover GitHub Copilot models
modelforge models list --provider github_copilot

# Set up GitHub Copilot with device authentication
modelforge auth login --provider github_copilot

# Select Claude 3.7 Sonnet via GitHub Copilot
modelforge config use --provider github_copilot --model claude-3.7-sonnet

# Test your setup
modelforge test --prompt "Write a Python function to reverse a string"
```
### Option 2: OpenAI (API Key Required)
```bash
# Add OpenAI with your API key
modelforge auth login --provider openai --api-key YOUR_API_KEY

# Select GPT-4o-mini
modelforge config use --provider openai --model gpt-4o-mini

# Test your setup
modelforge test --prompt "Hello, world!"
```
### Option 3: Local Ollama (No API Key Needed)
```bash
# Make sure Ollama is running locally,
# then add a local model
modelforge config add --provider ollama --model qwen3:1.7b

# Select the local model
modelforge config use --provider ollama --model qwen3:1.7b

# Test your setup
modelforge test --prompt "What is machine learning?"
```
## Common Commands - Complete Lifecycle
# Installation & Setup
modelforge --help # Verify installation
modelforge config show # View current config
# Model Discovery & Selection
modelforge models list # List all available models
modelforge models search "claude" # Search models by name
modelforge models info --provider openai --model gpt-4o # Get model details
# Authentication Management
modelforge auth login --provider openai --api-key KEY # API key auth
modelforge auth login --provider github_copilot # Device flow auth
modelforge auth status # Check auth status
modelforge auth logout --provider openai # Remove credentials
# Configuration Management
modelforge config add --provider openai --model gpt-4o-mini --api-key KEY
modelforge config add --provider ollama --model qwen3:1.7b --local
modelforge config use --provider openai --model gpt-4o-mini
modelforge config remove --provider openai --model gpt-4o-mini
# Testing & Usage
modelforge test --prompt "Hello, how are you?" # Test current model
modelforge test --prompt "Explain quantum computing" --verbose # Debug mode
# Cache & Maintenance
modelforge models list --refresh # Force refresh from models.dev
## Python API

### Basic Usage
```python
from modelforge.registry import ModelForgeRegistry
from langchain_core.prompts import ChatPromptTemplate

# Initialize the registry
registry = ModelForgeRegistry()

# Get the currently configured model
llm = registry.get_llm()

# Use it directly with LangChain
prompt = ChatPromptTemplate.from_messages([("human", "{input}")])
chain = prompt | llm
response = chain.invoke({"input": "Tell me a joke"})
print(response)
```
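Since `get_llm()` returns a standard LangChain chat model, you can also invoke it directly without building a chain. A minimal sketch (the exact message type returned depends on the underlying provider):

```python
# Direct invocation without a chain; the result is a LangChain message,
# so the generated text lives on .content
response = llm.invoke("Tell me a joke")
print(response.content)
```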
### Advanced Usage
```python
from modelforge.registry import ModelForgeRegistry
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser

# Initialize with debug logging
registry = ModelForgeRegistry(verbose=True)

# Get a specific model by provider and name
llm = registry.get_llm(provider_name="openai", model_alias="gpt-4o-mini")

# Create more complex chains with full LangChain features
prompt = ChatPromptTemplate.from_template("Explain {topic} in simple terms")
chain = prompt | llm | StrOutputParser()

# Streaming
for chunk in chain.stream({"topic": "quantum computing"}):
    print(chunk, end="", flush=True)

# Batch processing: each item fills the {topic} placeholder
topics = [
    "machine learning",
    "neural networks",
    "backpropagation",
]
responses = chain.batch([{"topic": t} for t in topics])
```
### Configuration Management
```python
from modelforge import config

# Get the current model selection
current = config.get_current_model()

# Check whether a model is configured before reading from it
if current:
    print(f"Current: {current.get('provider')}/{current.get('model')}")
else:
    print("No model selected. Configure with:")
    print("modelforge config add --provider openai --model gpt-4o-mini")
```
### Error Handling
```python
from modelforge.registry import ModelForgeRegistry
from modelforge.exceptions import ConfigurationError, ProviderError

try:
    registry = ModelForgeRegistry()
    llm = registry.get_llm()
    response = llm.invoke("Hello world")
except ConfigurationError as e:
    print(f"Configuration issue: {e}")
    print("Run: modelforge config add --provider PROVIDER --model MODEL")
except ProviderError as e:
    print(f"Provider error: {e}")
    print("Check: modelforge auth status")
```
## Supported Providers
- OpenAI: GPT-4, GPT-4o, GPT-3.5-turbo
- Google: Gemini Pro, Gemini Flash
- Ollama: Local models (Llama, Qwen, Mistral)
- GitHub Copilot: Claude, GPT models via GitHub
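Whichever provider you pick, access from Python goes through the same registry call. A small sketch (the model aliases below are examples; substitute whatever you have configured):

```python
from modelforge.registry import ModelForgeRegistry

registry = ModelForgeRegistry()

# The same call shape works across providers; aliases here are illustrative
cloud_llm = registry.get_llm(provider_name="openai", model_alias="gpt-4o-mini")
local_llm = registry.get_llm(provider_name="ollama", model_alias="qwen3:1.7b")
```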
## Authentication
ModelForge supports multiple authentication methods:
- API Keys: Store securely in configuration
- Device Flow: Browser-based OAuth for GitHub Copilot
- No Auth: For local models like Ollama
```bash
# API key authentication
modelforge auth login --provider openai --api-key YOUR_KEY

# Device flow (GitHub Copilot)
modelforge auth login --provider github_copilot

# Check auth status
modelforge auth status
```
## Configuration
ModelForge uses a two-tier configuration system:

- Global: `~/.config/model-forge/config.json` (user-wide)
- Local: `./.model-forge/config.json` (project-specific)

Local config takes precedence over global when both exist.
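To illustrate the precedence rule, here is a minimal sketch of the lookup order. This is illustrative only, not ModelForge's actual resolution code:

```python
from pathlib import Path

def resolve_config_path() -> Path | None:
    """Pick the active config file: local beats global when both exist.

    Illustrative sketch; ModelForge's internal logic may differ.
    """
    local = Path(".model-forge") / "config.json"
    global_ = Path.home() / ".config" / "model-forge" / "config.json"
    for candidate in (local, global_):
        if candidate.exists():
            return candidate
    return None
```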
## Model Discovery
```bash
# List all available models
modelforge models list

# Search models by name or capability
modelforge models search "gpt"

# Get detailed model info
modelforge models info --provider openai --model gpt-4o
```
## Development Setup
For contributors and developers:
```bash
git clone https://github.com/smiao-icims/model-forge.git
cd model-forge
poetry install
poetry run pytest
```
## Documentation
- [Models.dev](https://models.dev) - Comprehensive model reference
- [GitHub Issues](https://github.com/smiao-icims/model-forge/issues) - Support and bug reports
## License
MIT License - see LICENSE file for details.