A helper library providing a way to work with the lists of generation models, utility models, and any other related files required for the AI-Horde ecosystem.

Horde Model Reference


Horde Model Reference is the authoritative source for AI model metadata in the AI-Horde ecosystem. It provides information about image generation models (Stable Diffusion, FLUX, etc.), text generation models (LLMs), and utility models (CLIP, ControlNet, upscalers, etc.) used by AI-Horde tools, workers and clients.


What is Horde Model Reference?

Horde Model Reference serves three purposes in the AI-Horde ecosystem:

  1. 📄 JSON Reference Files: Canonical model metadata (names, checksums, download URLs, capabilities) used by workers and clients
  2. 🐍 Python Library: Programmatic access to model references with automatic updates, validation, and conversion
  3. 🌐 FastAPI Service: REST API for fetching, managing, and distributing model references across the horde

Role in AI-Horde

The AI-Horde (GitHub) is a free, open-source, crowdsourced distributed compute network for AI generation. Workers need to know which models are approved, where to download them, and what capabilities they offer; this package provides that authoritative information.

For more context on AI-Horde concepts (workers, kudos, jobs, etc.), see the AI-Horde Glossary.

Key Features

  • ✅ Validated Model Metadata: SHA256 checksums, download URLs, baselines, NSFW flags, capabilities
  • 🗃️ Multiple Categories: Image generation, text generation, CLIP, ControlNet, ESRGAN, GFPGAN, and more
  • 🌐 REST API: FastAPI service with OpenAPI documentation
  • 📦 Legacy Compatibility: Automatic conversion from legacy GitHub format to new standardized format
  • 🔒 Type-Safe: Strict mypy type checking with Pydantic models
  • 🐳 Docker Ready: Pre-built Docker images and docker-compose configurations
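The checksum metadata above is what lets workers verify a download before serving a model. A minimal sketch of that check; the exact attribute holding the checksum on a model record is an assumption here and may differ per category:

```python
import hashlib
from pathlib import Path


def sha256_of_file(path: Path, chunk_size: int = 1 << 20) -> str:
    """Hash the file in chunks so multi-GB checkpoints never sit fully in RAM."""
    digest = hashlib.sha256()
    with path.open("rb") as file:
        for chunk in iter(lambda: file.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()


def matches_reference(path: Path, expected_sha256: str) -> bool:
    """Compare a local file against the SHA256 listed in the model reference."""
    return sha256_of_file(path) == expected_sha256.lower()
```

A worker would run this after downloading and re-fetch the file on a mismatch.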

Quick Start

Choose your use case:

Use Case 1: Python Library (Most Common)

For AI-Horde workers, client applications, or scripts that need model metadata:

pip install horde-model-reference

from horde_model_reference import ModelReferenceManager, MODEL_REFERENCE_CATEGORY

# Automatically fetches from PRIMARY server or GitHub
manager = ModelReferenceManager()

# Get all image generation models (using enum - strings like "image_generation" also work)
image_models = manager.get_model_reference(MODEL_REFERENCE_CATEGORY.image_generation)

for model_name, model_data in image_models.items():
    print(f"{model_name}: {model_data.description}")

# Check if a specific model exists
if "stable_diffusion_xl" in image_models:
    model = image_models["stable_diffusion_xl"]
    print(f"Baseline: {model.baseline}")
    print(f"NSFW: {model.nsfw}")

Use Case 2: Direct JSON Access

For non-Python applications or manual inspection:

The JSON files are available directly from the PRIMARY server:

# Get all image generation models
curl https://aihorde.net/api/model_references/v2/image_generation

# Get specific model
curl https://aihorde.net/api/model_references/v2/image_generation/stable_diffusion_xl

# List all categories
curl https://aihorde.net/api/model_references/v2/model_categories

Or clone the legacy GitHub repositories.

Use Case 3: FastAPI Service

For deploying your own PRIMARY server (advanced):

See the comprehensive DEPLOYMENT.md guide for:

  • Docker single-worker deployment
  • Docker multi-worker deployment with Redis
  • Optional GitHub sync service (auto-sync to legacy repos)
  • Non-Docker deployment
  • Production recommendations

Quick start:

# Start PRIMARY server
docker-compose up -d

# Optional: Enable GitHub sync service
docker-compose --profile sync up -d

# Verify
curl http://localhost:19800/api/heartbeat

For GitHub sync setup, see the sync README.md.
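Scripts that depend on the service (CI jobs, worker startup) should wait for the heartbeat rather than assume the container is up the instant docker-compose returns. A sketch polling the `/api/heartbeat` route from the verify step above; the injectable `probe` parameter exists only to make the function testable:

```python
import time
import urllib.error
import urllib.request


def wait_for_heartbeat(
    url: str = "http://localhost:19800/api/heartbeat",
    attempts: int = 10,
    delay: float = 1.0,
    probe=None,
) -> bool:
    """Poll the heartbeat endpoint until it answers, or give up after `attempts`."""
    if probe is None:
        def probe(target: str) -> bool:
            with urllib.request.urlopen(target, timeout=5) as resp:
                return resp.status == 200
    for _ in range(attempts):
        try:
            if probe(url):
                return True
        except (urllib.error.URLError, OSError):
            pass  # not up yet; retry after the delay
        time.sleep(delay)
    return False
```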

Installation

From PyPI (Recommended)

pip install horde-model-reference

With Optional Dependencies

# For running the FastAPI service
pip install horde-model-reference[service]

Using uv (Faster)

uv add horde-model-reference

From Source

git clone https://github.com/Haidra-Org/horde-model-reference.git
cd horde-model-reference
uv sync --all-groups  # or: pip install -e .

Usage Examples

Fetching Model References

from horde_model_reference import ModelReferenceManager, MODEL_REFERENCE_CATEGORY

manager = ModelReferenceManager()

# Get all available categories
print([category.value for category in MODEL_REFERENCE_CATEGORY])
# ['image_generation', 'text_generation', 'clip', 'controlnet', ...]

# Get all models in a category (using enum - strings like "text_generation" also work)
text_models = manager.get_model_reference(MODEL_REFERENCE_CATEGORY.text_generation)
print(f"Found {len(text_models)} text models")

# Get a specific model directly
model = manager.get_model(MODEL_REFERENCE_CATEGORY.text_generation, "llama-3-70b")
print(f"Parameters: {model.parameters_count}")
print(f"Description: {model.description}")

# Get just the model names in a category
model_names = manager.get_model_names(MODEL_REFERENCE_CATEGORY.image_generation)
print(f"Available image models: {', '.join(model_names[:3])}...")

# Get all references across all categories
all_refs = manager.get_all_model_references()
for category, models in all_refs.items():
    print(f"{category}: {len(models)} models")

Checking Model Availability

from horde_model_reference import ModelReferenceManager, MODEL_REFERENCE_CATEGORY

manager = ModelReferenceManager()

def is_model_available(category: MODEL_REFERENCE_CATEGORY, model_name: str) -> bool:
    """Check if a model is in the model reference."""
    try:
        models = manager.get_model_reference(category)
        return model_name in models
    except Exception as e:
        print(f"Error checking model: {e}")
        return False

# Usage (enum recommended, but strings like "image_generation" also work)
if is_model_available(MODEL_REFERENCE_CATEGORY.image_generation, "stable_diffusion_xl"):
    print("SDXL is available!")

Using with AI-Horde Worker

from horde_model_reference import ModelReferenceManager, MODEL_REFERENCE_CATEGORY

# Worker initialization
manager = ModelReferenceManager()

# Get approved models for your worker (using enum - strings also work)
available_models = manager.get_model_reference(MODEL_REFERENCE_CATEGORY.image_generation)

# Filter by what your GPU can handle
worker_models = {
    name: model
    for name, model in available_models.items()
    if model.baseline in ["stable_diffusion_1", "stable_diffusion_xl"]
}

print(f"Worker can serve {len(worker_models)} models")
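The filter above generalizes nicely. A sketch that also respects a worker's NSFW policy, relying only on the `baseline` and `nsfw` fields already shown in this README:

```python
def select_worker_models(
    models: dict,
    supported_baselines: set[str],
    allow_nsfw: bool = True,
) -> dict:
    """Keep only the models this worker can (and is willing to) serve."""
    return {
        name: model
        for name, model in models.items()
        if model.baseline in supported_baselines
        and (allow_nsfw or not model.nsfw)
    }
```

Pass it the dict returned by `get_model_reference` and the baselines your GPU supports.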

Accessing via REST API

If you're running the FastAPI service:

import requests

BASE_URL = "http://localhost:19800/api/model_references/v2"

# Get all image models
response = requests.get(f"{BASE_URL}/image_generation")
models = response.json()

# Get specific model
response = requests.get(f"{BASE_URL}/image_generation/stable_diffusion_xl")
model = response.json()
print(f"Model: {model['name']}")
print(f"Description: {model['description']}")
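A REST service like this typically answers 404 for unknown models, so client code should distinguish "missing" from a real failure. A standard-library sketch of that pattern (the `opener` parameter exists only so fakes can be injected in tests):

```python
import json
import urllib.error
import urllib.request

BASE_URL = "http://localhost:19800/api/model_references/v2"


def get_model_or_none(category: str, model_name: str, opener=urllib.request.urlopen):
    """Return the model record as a dict, or None when the service reports 404."""
    url = f"{BASE_URL}/{category}/{model_name}"
    try:
        with opener(url, timeout=30) as resp:
            return json.load(resp)
    except urllib.error.HTTPError as exc:
        if exc.code == 404:
            return None
        raise  # anything else (500, auth, ...) is a real error
```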

Documentation

  • 📖 Full Documentation: MkDocs Site
  • 🚀 Deployment Guide: DEPLOYMENT.md
  • 🔄 GitHub Sync (Docker): DOCKER_SYNC.md - Optional automated sync to legacy repos
  • 📝 Legacy CSV Conversion: docs/legacy_csv_conversion.md - Text generation CSV format details
  • 🔧 API Reference: Run service and visit http://localhost:19800/docs for interactive Swagger UI
  • 🤝 Contributing: .CONTRIBUTING.md
  • 🗂️ Project Structure:
    • src/horde_model_reference/ - Core library
    • src/horde_model_reference/service/ - FastAPI service
    • src/horde_model_reference/backends/ - Backend implementations
    • src/horde_model_reference/legacy/ - Legacy conversion tools
    • tests/ - Test suite

Contributing

We welcome contributions of all sizes! Before contributing:

  1. Read .CONTRIBUTING.md for setup and guidelines
  2. Check open issues or start a discussion
  3. Follow the Haidra Python Style Guide

Support & Community

License

This project is licensed under the GNU Affero General Public License v3.0 (AGPL-3.0).

See LICENSE for the full text.

What This Means

  • Free to use for any purpose (personal, commercial, research)
  • Free to modify and distribute modifications
  • ⚠️ Must disclose source if you run a modified version as a network service
  • ⚠️ Must use same license for derivative works

For network service deployments, you must make your source code available to users. See GNU AGPL FAQ for details.

Acknowledgments

  • Haidra-Org: Core development team
  • db0: AI-Horde creator and lead maintainer
  • AI-Horde Community: Workers, contributors, and supporters

Related Projects
