django-llm-optimizer

Machine-readable Django query profiling for LLMs, tests, and CI.
django-llm-optimizer is a Django-first profiling package that captures request and test query behavior, analyzes it for common performance issues, and emits structured diagnostics designed for LLMs, CI pipelines, and automated tooling.
It is intentionally not a browser UI or standalone server. The package is meant to drop into an existing Django app, collect query traces using Django-native hooks, and export machine-readable results that agents or build systems can consume.
Why this exists
Django teams increasingly want automated feedback about ORM behavior during requests and tests, not just a page for manual inspection. This package focuses on:
- structured query and request traces
- duplicate query detection
- likely N+1 detection
- heuristic optimization suggestions
- test and CI usage
- Celery and Temporal background task profiling
- JSON-first exports
By default, traces are written as timestamped JSON reports under .django_llm_optimizer/, which makes them easy for scripts, agents, and CI artifacts to consume directly.
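As a sketch of how a CI script might consume these reports: the snippet below writes a stand-in report and then loads the newest JSON file from the default reports directory. The `issues` key and its shape are invented for illustration, not the documented schema.

```python
import json
from pathlib import Path

reports_dir = Path(".django_llm_optimizer")
reports_dir.mkdir(exist_ok=True)

# Write a stand-in report so the example is self-contained;
# a real run would find reports emitted by the profiler.
sample = {"issues": [{"kind": "duplicate_query", "count": 4}]}
(reports_dir / "20260315T184512123456Z-request-demo.json").write_text(json.dumps(sample))

# Timestamped filenames sort lexicographically, so max() by name is newest.
latest = max(reports_dir.glob("*.json"), key=lambda p: p.name)
report = json.loads(latest.read_text())
duplicates = [i for i in report.get("issues", []) if i["kind"] == "duplicate_query"]
print(f"{latest.name}: {len(duplicates)} duplicate-query issue(s)")
```

A CI job could fail the build when `duplicates` is non-empty, turning the reports into an automated gate.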
How it differs from Django Silk
Django Silk does a strong job of capturing requests and queries for human inspection through a UI.
django-llm-optimizer focuses on a different workflow:
- no UI
- structured machine-readable diagnostics
- LLM and agent-friendly output
- CI and test profiling helpers
- lightweight library-style integration inside an existing Django app
This is not a critique of Silk. The tools are aimed at different primary consumers: Silk is human-inspection first, while django-llm-optimizer is automation-first.
Installation
python -m pip install django-llm-optimizer
For local development:
python -m pip install -e ".[dev]"
In INSTALLED_APPS:
INSTALLED_APPS = [
    # ...
    "django_llm_optimizer",
]
Add middleware if you want request profiling:
MIDDLEWARE = [
    # ...
    "django_llm_optimizer.middleware.QueryProfilingMiddleware",
]
Configuration
Configure the package with DJANGO_LLM_PROFILER:
DJANGO_LLM_PROFILER = {
    "ENABLED": True,
    "CAPTURE_REQUESTS": True,
    "CAPTURE_TESTS": True,
    "CAPTURE_CELERY": True,
    "CAPTURE_TEMPORAL": True,
    "INCLUDE_STACKTRACE": True,
    "MAX_STACK_FRAMES": 12,
    "REDACT_SQL_PARAMS": True,
    "SLOW_QUERY_MS": 100.0,
    "DUPLICATE_QUERY_MIN_REPETITIONS": 3,
    "NPLUSONE_MIN_REPETITIONS": 5,
    "STORAGE_BACKEND": "django_llm_optimizer.storage.file.FileStorage",
    "REPORTS_PATH": ".django_llm_optimizer",
    "IGNORE_PATH_PREFIXES": [],
    "INCLUDE_PATH_PREFIXES": [],
}
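As a toy illustration of how thresholds like SLOW_QUERY_MS and DUPLICATE_QUERY_MIN_REPETITIONS might be applied to captured queries (the query and issue shapes here are made up for the example, not the package's internals):

```python
from collections import Counter

# Illustrative settings mirroring the defaults above.
SETTINGS = {"SLOW_QUERY_MS": 100.0, "DUPLICATE_QUERY_MIN_REPETITIONS": 3}

queries = [
    {"sql": "SELECT * FROM widgets WHERE id = %s", "duration_ms": 2.1},
    {"sql": "SELECT * FROM widgets WHERE id = %s", "duration_ms": 1.9},
    {"sql": "SELECT * FROM widgets WHERE id = %s", "duration_ms": 2.4},
    {"sql": "SELECT * FROM orders", "duration_ms": 340.0},
]

issues = []

# Flag individual queries over the slow-query threshold.
for q in queries:
    if q["duration_ms"] >= SETTINGS["SLOW_QUERY_MS"]:
        issues.append({"kind": "slow_query", "sql": q["sql"]})

# Flag identical statements repeated at or above the duplicate threshold.
counts = Counter(q["sql"] for q in queries)
for sql, n in counts.items():
    if n >= SETTINGS["DUPLICATE_QUERY_MIN_REPETITIONS"]:
        issues.append({"kind": "duplicate_query", "sql": sql, "count": n})

print(issues)
```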
REPORTS_PATH controls where timestamped JSON reports are written. EXPORT_JSON_PATH is still accepted as a legacy alias.
Minimal setup:
INSTALLED_APPS = [
    # ...
    "django_llm_optimizer",
]

MIDDLEWARE = [
    # ...
    "django_llm_optimizer.middleware.QueryProfilingMiddleware",
]

DJANGO_LLM_PROFILER = {
    "ENABLED": True,
}
With that setup, request traces are captured automatically and written to .django_llm_optimizer/ by default.
Celery task capture is automatic when Celery is installed and CAPTURE_CELERY=True. Temporal capture is explicit via decorators because Temporal execution is typically defined in application code rather than Django middleware.
Usage
Profile a block
from django_llm_optimizer import get_last_trace, profile_block
with profile_block(name="homepage"):
    response = client.get("/")

trace = get_last_trace()
print(trace.summary.to_dict())
Export a trace
from django_llm_optimizer import export_trace
export_trace(trace, ".django_llm_optimizer/homepage.json")
If you omit the path, the package writes a timestamped report into REPORTS_PATH automatically:
export_trace(trace)
Analyze queries directly
from django_llm_optimizer import analyze_queries
summary = analyze_queries(trace.queries)
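Since the docs note that SQL normalization is heuristic rather than a full parser, a regex-based sketch gives the flavor of the technique. This is an assumption about the approach, not the package's actual implementation:

```python
import hashlib
import re

def normalize(sql: str) -> str:
    """Heuristically strip literals so equivalent queries compare equal."""
    sql = re.sub(r"'[^']*'", "?", sql)   # string literals -> placeholder
    sql = re.sub(r"\b\d+\b", "?", sql)   # numeric literals -> placeholder
    return re.sub(r"\s+", " ", sql).strip().lower()

def fingerprint(sql: str) -> str:
    """Stable short hash of the normalized statement."""
    return hashlib.sha256(normalize(sql).encode()).hexdigest()[:12]

# Two queries that differ only in a bound value share a fingerprint,
# which is what makes duplicate and N+1 grouping possible.
a = fingerprint("SELECT * FROM widgets WHERE id = 1")
b = fingerprint("SELECT * FROM widgets WHERE id = 42")
print(a == b)
```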
Test usage
Context manager usage:
from django_llm_optimizer.testing.context import profile_block
with profile_block(name="list-view-test"):
    response = self.client.get("/widgets/")
Decorator usage:
from django_llm_optimizer.testing.decorators import assert_max_queries, profile_test
@profile_test
def test_widget_list(client):
    client.get("/widgets/")

@assert_max_queries(5)
def test_widget_detail(client):
    client.get("/widgets/1/")
@profile_test stores the resulting trace on test_widget_list.last_trace, and @assert_max_queries(...) raises a helpful assertion if the budget is exceeded.
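The budget-assertion idea can be sketched in plain Python. Here a module-level list stands in for the profiler's captured queries, and the decorator name is illustrative rather than the package's real one:

```python
import functools

captured_queries = []  # stand-in for queries recorded during the test

def assert_max_queries_sketch(budget):
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            captured_queries.clear()
            result = func(*args, **kwargs)
            if len(captured_queries) > budget:
                raise AssertionError(
                    f"{func.__name__} ran {len(captured_queries)} queries, "
                    f"budget was {budget}"
                )
            return result
        return wrapper
    return decorator

@assert_max_queries_sketch(2)
def noisy_view():
    captured_queries.extend(["SELECT 1", "SELECT 2", "SELECT 3"])

try:
    noisy_view()
except AssertionError as exc:
    print(exc)
```

The real decorator counts ORM queries captured while the test body runs, but the shape of the check is the same.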
Celery and Temporal
Celery tasks:
from celery import shared_task
@shared_task
def sync_users():
    ...
If Celery is installed and CAPTURE_CELERY=True, django-llm-optimizer hooks Celery task execution automatically via signals.
You can also decorate tasks explicitly:
from django_llm_optimizer.integrations import profile_celery_task
@profile_celery_task
def sync_users():
    ...
Temporal activities and workflows:
from django_llm_optimizer.integrations import (
profile_temporal_activity,
profile_temporal_workflow,
)
@profile_temporal_activity
async def fetch_customer(customer_id: str):
    ...

@profile_temporal_workflow
async def customer_sync_workflow(customer_id: str):
    ...
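A hypothetical sketch of what such a decorator could do: time the wrapped coroutine and record a trace entry. The real integration also captures the queries issued while the activity runs; everything below is invented for illustration:

```python
import asyncio
import functools
import time

traces = []  # stand-in for the profiler's trace storage

def profile_activity_sketch(func):
    @functools.wraps(func)
    async def wrapper(*args, **kwargs):
        start = time.perf_counter()
        try:
            return await func(*args, **kwargs)
        finally:
            # Record the entry even if the activity raises.
            traces.append({
                "name": func.__name__,
                "duration_ms": (time.perf_counter() - start) * 1000,
            })
    return wrapper

@profile_activity_sketch
async def fetch_customer(customer_id: str):
    await asyncio.sleep(0.01)  # simulate I/O work
    return {"id": customer_id}

asyncio.run(fetch_customer("c-1"))
print(traces[0]["name"])
```

Decorators fit Temporal well because activities and workflows are declared explicitly in application code, which is why the package does not hook them implicitly.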
Management commands
Summarize traces:
python manage.py llm_profile_request_summary
python manage.py llm_profile_request_summary --format=json
Clear traces:
python manage.py llm_profile_flush
CI
This repository includes GitHub Actions workflows for:
- CI on every push and pull request
- PyPI publishing on release
The CI workflow runs Ruff and pytest. The publish workflow builds distributions and uploads them to PyPI.
Publishing
PyPI publishing is handled by python-publish.yml using PyPI Trusted Publishing with GitHub OIDC.
- No PyPI API token is used.
- Only the publish job gets id-token: write; the build job keeps minimal read-only permissions.
- Publishing is triggered when a GitHub Release is published.
- Manual dispatch is also available from GitHub Actions.
- PyPI must be configured with a Trusted Publisher that exactly matches:
  - GitHub owner: Akamad007
  - repository name: django-llm-optimizer
  - workflow file path: .github/workflows/python-publish.yml
  - environment name: pypi
Maintainer note:
- Create a GitHub Release to trigger publish.
- The workflow file path configured on PyPI must match exactly.
- Reusable GitHub workflows cannot currently be used as the trusted workflow for PyPI Trusted Publishing.
- Environment mismatch can cause publish failures.
- Stale package names or old PyPI URLs can also cause confusion; this project publishes as django-llm-optimizer at https://pypi.org/project/django-llm-optimizer/.
See PUBLISHING.md for the maintainer checklist.
Output shape
Each trace includes:
- request or block metadata
- captured queries
- normalized SQL
- fingerprints
- timing
- stack/callsite metadata
- grouped duplicates
- heuristic issues and suggestions
The JSON output is designed to be stable enough for automated post-processing rather than optimized for visual browsing.
Default report filenames look like:
.django_llm_optimizer/20260315T184512123456Z-request-abc123def456.json
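A sketch of generating a filename in that documented shape, UTC timestamp, then kind, then trace id; the helper name and signature are illustrative:

```python
from datetime import datetime, timezone

def report_filename(kind: str, trace_id: str, now=None) -> str:
    """Build a report path like the documented example above."""
    now = now or datetime.now(timezone.utc)
    # %f gives microseconds, yielding the 20-digit sortable timestamp.
    stamp = now.strftime("%Y%m%dT%H%M%S%f") + "Z"
    return f".django_llm_optimizer/{stamp}-{kind}-{trace_id}.json"

fixed = datetime(2026, 3, 15, 18, 45, 12, 123456, tzinfo=timezone.utc)
print(report_filename("request", "abc123def456", now=fixed))
# .django_llm_optimizer/20260315T184512123456Z-request-abc123def456.json
```

Because the timestamp leads the filename, lexicographic sorting of the directory listing is also chronological.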
For machine-readable aggregate diagnostics across many traces, you can use:
from django_llm_optimizer import get_performance_report
report = get_performance_report(limit=10)
The aggregate report includes:
- slowest_endpoints
- slowest_background_operations
- slowest_queries
- per-endpoint average and max request duration
- per-endpoint average and max DB time
- per-query cumulative and max duration
- paths and callsites associated with slow query fingerprints
- per-task/workflow average and max duration
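The per-endpoint averages and maxima above amount to a simple group-and-reduce over traces. A toy sketch, with a trace shape assumed for the example:

```python
from collections import defaultdict

# Invented minimal trace records; real traces carry much more metadata.
trace_records = [
    {"path": "/widgets/", "duration_ms": 120.0},
    {"path": "/widgets/", "duration_ms": 80.0},
    {"path": "/orders/", "duration_ms": 40.0},
]

# Group durations by endpoint path.
by_path = defaultdict(list)
for t in trace_records:
    by_path[t["path"]].append(t["duration_ms"])

# Reduce each group to average and max.
report = {
    path: {"avg_ms": sum(d) / len(d), "max_ms": max(d)}
    for path, d in by_path.items()
}
print(report["/widgets/"])  # {'avg_ms': 100.0, 'max_ms': 120.0}
```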
Current limitations
- SQL normalization is heuristic, not a full SQL parser
- N+1 detection is pattern-based and intentionally conservative
- request path include/ignore filtering is basic in this MVP
- file storage currently lists lightweight metadata when reloading traces
- raw SQL is still retained in captured events in this MVP, so use care before exposing reports outside trusted environments
Roadmap
- richer ORM-specific suggestion quality
- better callsite grouping and source filtering
- richer artifact exports for CI systems
- tighter Django test runner integration
- pluggable post-processing and custom analyzers
Development
Run tests:
pytest