
Track and check deprecation status of LLM provider models (OpenAI, Anthropic, etc.)

Project description

llm-model-deprecation

A small Python library to track and check deprecation status of LLM provider models (OpenAI, Anthropic, etc.). Use it to warn when your app uses deprecated or retired models and to get replacement suggestions.

Install

cd /path/to/llm-model-deprecation
pip install -e .

Optional: install the fetch extra for loading data from URLs via requests (the stdlib urllib fallback works without it):

pip install -e ".[fetch]"

Quick usage

The library loads data from DEFAULT_DATA_URL (techdevsynergy/llm-deprecation-data); if that fails (e.g. offline), it falls back to the built-in registry in data.py. No config needed.

from llm_deprecation import DeprecationChecker, DeprecationStatus

checker = DeprecationChecker()  # DEFAULT_DATA_URL, then data.py fallback

# Check by model id (searches all providers)
checker.is_deprecated("gpt-3.5-turbo-0301")   # True
checker.is_retired("gpt-3.5-turbo-0301")     # True
checker.status("gpt-4")                       # DeprecationStatus.ACTIVE

# With provider for exact match
checker.get("claude-2.0", provider="anthropic")
# -> ModelInfo(provider='anthropic', model_id='claude-2.0', status=LEGACY, replacement='claude-3-sonnet or claude-3-opus', ...)

# List deprecated models
for m in checker.list_deprecated(provider="openai"):
    print(m.model_id, m.status.value, m.replacement)
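
Continuing the snippet above, you could also flag upcoming sunsets. This is only a sketch; it assumes list_deprecated() can be called without a provider argument and that ModelInfo exposes provider, sunset_date (a datetime.date or None) and replacement attributes, as the examples here and the JSON schema below suggest:

from datetime import date, timedelta

# Flag deprecated models whose sunset date falls within the next 90 days.
soon = date.today() + timedelta(days=90)
for m in checker.list_deprecated():
    if m.sunset_date and m.sunset_date <= soon:
        print(f"{m.provider}/{m.model_id} sunsets on {m.sunset_date}; "
              f"migrate to {m.replacement or 'a newer model'}")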

Status values

  • active — Currently supported, no deprecation.
  • legacy — Still supported; prefer newer models.
  • deprecated — Will be retired; migrate before sunset date.
  • retired — No longer available.
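
A typical pattern is to gate model usage on these statuses at startup: raise for retired models and warn for the rest. The sketch below assumes the enum also exposes a RETIRED member (ACTIVE, LEGACY and DEPRECATED appear elsewhere in this README) and that get() accepts a bare model id the way is_deprecated() and status() do:

import warnings

from llm_deprecation import DeprecationChecker, DeprecationStatus

checker = DeprecationChecker()

def ensure_model_usable(model_id: str) -> None:
    """Raise for retired models; warn for legacy or deprecated ones."""
    status = checker.status(model_id)
    if status == DeprecationStatus.RETIRED:
        raise ValueError(f"Model {model_id!r} has been retired")
    if status in (DeprecationStatus.LEGACY, DeprecationStatus.DEPRECATED):
        info = checker.get(model_id)  # assumed to work without provider=
        hint = f"; consider {info.replacement}" if info and info.replacement else ""
        warnings.warn(f"Model {model_id!r} is {status.value}{hint}", DeprecationWarning)

ensure_model_usable("gpt-4")  # active: no warning, no exception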

Data source

Data is loaded in two steps only:

  1. DEFAULT_DATA_URL: the techdevsynergy/llm-deprecation-data repository (llm_deprecation_data.json on main). Tried first.
  2. Built-in data.py in the library. Used when the URL is unreachable (e.g. offline).

To export the built-in registry to JSON (e.g. for reference):

from llm_deprecation.loader import export_builtin_to_json
export_builtin_to_json("config/llm-models.json")

JSON schema (each entry in the root array or under "models" / "deprecations"):

Field            Required  Description
provider         yes       e.g. openai, anthropic, gemini
model_id         yes       API model identifier
status           yes       active, legacy, deprecated, retired
deprecated_date  no        ISO date when deprecated
sunset_date      no        ISO date when retired/unavailable
replacement      no        Suggested replacement model
notes            no        Free text

See config/llm-deprecation-models.json.example for a minimal example.
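
As an illustration, the snippet below writes a registry file of that shape with a single entry under "models". The entry itself is hypothetical; it reuses the gpt-4-old example from the next section:

import json

registry = {
    "models": [
        {
            "provider": "openai",
            "model_id": "gpt-4-old",
            "status": "deprecated",
            "deprecated_date": "2025-06-01",
            "sunset_date": "2026-01-01",
            "replacement": "gpt-4o",
            "notes": "Hypothetical entry for illustration only.",
        }
    ]
}

with open("llm-models.json", "w") as f:
    json.dump(registry, f, indent=2)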

Extending the registry in code

You can still add or override entries programmatically:

from datetime import date
from llm_deprecation import DeprecationChecker
from llm_deprecation.models import ModelInfo, DeprecationStatus

checker = DeprecationChecker()
checker.register(ModelInfo(
    provider="openai",
    model_id="gpt-4-old",
    status=DeprecationStatus.DEPRECATED,
    sunset_date=date(2026, 1, 1),
    replacement="gpt-4o",
))
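
Lookups should then pick up the new entry, e.g. checker.is_deprecated("gpt-4-old") should return True and checker.status("gpt-4-old") should be DeprecationStatus.DEPRECATED.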

Testing

Run the example (loads from DEFAULT_DATA_URL, then checks a few models):

cd /path/to/llm-model-deprecation
pip install -e .
python example_usage.py

Run the test suite:

pip install -e ".[dev]"
pytest tests/ -v

Tests cover: loading the registry (URL or built-in fallback), is_deprecated / status, list_deprecated, and register() overrides.
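
A minimal sketch of such a test, assuming register() adds or overrides an entry as described above:

from datetime import date

from llm_deprecation import DeprecationChecker
from llm_deprecation.models import DeprecationStatus, ModelInfo

def test_register_marks_model_deprecated():
    checker = DeprecationChecker()
    checker.register(ModelInfo(
        provider="openai",
        model_id="gpt-4-old",
        status=DeprecationStatus.DEPRECATED,
        sunset_date=date(2026, 1, 1),
        replacement="gpt-4o",
    ))
    assert checker.is_deprecated("gpt-4-old")
    assert checker.status("gpt-4-old") is DeprecationStatus.DEPRECATED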

Provider deprecation docs

For official, up-to-date lists, consult each provider's own deprecations documentation (for example, the OpenAI and Anthropic model deprecation pages).

Publishing to PyPI

The package does not appear on PyPI until you build and upload it from your own machine (or CI) where PyPI is reachable. The steps below are meant to be run there.

1. Create a PyPI account and API token

  • Register at pypi.org (and optionally test.pypi.org for testing).
  • Under your account, go to Account settings → API tokens and create a token (e.g. scope: entire account or just this project).

2. Install build tools

pip install build twine

3. Build the package

cd /path/to/llm-model-deprecation
python -m build

This creates dist/llm-model-deprecation-0.1.0.tar.gz and dist/llm_model_deprecation-0.1.0-py3-none-any.whl.

4. Upload to PyPI

twine upload dist/*

When prompted, use __token__ as the username and your API token as the password. Alternatively, set environment variables and use the helper script (recommended; it uses a dedicated venv, so system packages such as urllib3 are left untouched):

export TWINE_USERNAME=__token__
export TWINE_PASSWORD=pypi-YourApiTokenHere
bash scripts/publish.sh

The script creates .venv-deploy/, installs build and twine there, and then runs the build and upload, so your global Python packages are left unchanged. Alternatively, run the steps manually: python3 -m build followed by twine upload dist/*.

5. Test first (optional)

To try the release on Test PyPI before production:

twine upload --repository testpypi dist/*
pip install --index-url https://test.pypi.org/simple/ llm-model-deprecation

6. For future releases

Bump the version in pyproject.toml, then run python -m build and twine upload dist/* again. PyPI does not accept re-uploads of an existing version.

License

MIT.

Project details


Download files

Download the file for your platform.

Source Distribution

llm_model_deprecation-1.0.0.tar.gz (12.4 kB)


Built Distribution


llm_model_deprecation-1.0.0-py3-none-any.whl (10.5 kB)


File details

Details for the file llm_model_deprecation-1.0.0.tar.gz.

File metadata

  • Download URL: llm_model_deprecation-1.0.0.tar.gz
  • Size: 12.4 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.10.12

File hashes

Hashes for llm_model_deprecation-1.0.0.tar.gz
Algorithm Hash digest
SHA256 dad843b6e95bdaa79449f7e3c61be0dabd87d30a2607311a17d88fd904f4888d
MD5 d9da8646d258e44e0536e1ab547a854b
BLAKE2b-256 199c40d20da5a666c79f932abaca43861c28ec1504b84e8452bba431d394750f


File details

Details for the file llm_model_deprecation-1.0.0-py3-none-any.whl.

File hashes

Hashes for llm_model_deprecation-1.0.0-py3-none-any.whl
Algorithm Hash digest
SHA256 cb58b9831fff3957e44e0d9c7d898a6bc1309a8ba72b10832b1fccf1c49f232c
MD5 111e092da18d2613c182800caeaaaf4b
BLAKE2b-256 91f12f7ef9ee4c104651ffee7aacb694aaa1ec393a8ddf3fb4d8b06f26a5c776

