A performance testing framework for Django that helps you understand and fix performance issues, not just detect them
Django Mercury Performance Testing
Simple, powerful performance monitoring for Django tests.
from django_mercury import monitor
with monitor(response_time_ms=100) as result:
    response = client.get('/api/users/')

# Automatic threshold checking - raises AssertionError on violations
The monitor either succeeds or fails:
============================================================
MERCURY PERFORMANCE REPORT
============================================================
🧪 Test: AuthEndpointPerformance.test_login_under_100ms
📍 Location: accounts/tests/mercury/test_auth_performance.py:20

📊 METRICS:
  Response time: 568.43ms (threshold: 100.00ms)
  Query count: 11 (threshold: 10)
  ✅ No N+1 patterns detected

❌ FAILURES:
  ⏱️ Response time 568.43ms exceeded threshold 100ms (+468.43ms over)
  🔢 Query count 11 exceeded threshold 10 (+1 extra queries)
============================================================
The thresholds shown above can be changed per call:
with monitor(response_time_ms=10, query_count=5) as result:
    response = self.client.get('/api/v1/auth/me/')

result.explain()  # print what the monitor found
If the Mercury test isn't failing but you still want to see the monitored stats, use .explain():
============================================================
MERCURY PERFORMANCE REPORT
============================================================
🧪 Test: AuthEndpointPerformance.test_auth_me_under_50ms
📍 Location: accounts/tests/mercury/test_auth_performance.py:32

📊 METRICS:
  Response time: 6.86ms (threshold: 10.00ms)
  Query count: 3 (threshold: 5)
  ✅ No N+1 patterns detected
============================================================
No failure, but the report still gives you useful information to understand your project and tune the performance thresholds.
Why Mercury?
Most performance tools just detect problems. Mercury explains them in your test output, with clear context and actionable fixes.
No configuration required. Works out of the box with sensible defaults. Customize when you need to.
Built for real Django projects. Detects N+1 queries, slow responses, and excessive database calls automatically.
Installation
pip install django-mercury-performance
Two Usage Modes
Minimal (context manager only):
# Just pip install - no setup needed
from django_mercury import monitor
with monitor() as result:
    response = self.client.get('/api/users/')
Full features (management command, future admin, etc.):
# Add to settings.py
INSTALLED_APPS = [
    ...
    'django_mercury',  # Enables management commands
    ...
]
Then use the smart test discovery command:
# Only runs tests that use monitor()
python manage.py mercury_test
# Run specific app
python manage.py mercury_test myapp
# See what would run (dry run)
python manage.py mercury_test --verbosity=2
Quick Start
Basic Usage
from django_mercury import monitor
from django.test import TestCase
class UserAPITest(TestCase):
    def test_user_list_performance(self):
        """Monitor performance with zero configuration."""
        with monitor() as result:
            response = self.client.get('/api/users/')

        # If thresholds are exceeded, an AssertionError with the full report is raised.
        # Otherwise, check metrics manually:
        print(f"Response time: {result.response_time_ms:.2f}ms")
        print(f"Queries: {result.query_count}")
Custom Thresholds
# Override defaults inline
with monitor(response_time_ms=50, query_count=5) as result:
    response = self.client.get('/api/users/')
# Or configure per-file
MERCURY_PERFORMANCE_THRESHOLDS = {
    'response_time_ms': 100,
    'query_count': 10,
    'n_plus_one_threshold': 8,
}
# Or in Django settings.py
MERCURY_PERFORMANCE_THRESHOLDS = {
    'response_time_ms': 200,
    'query_count': 20,
    'n_plus_one_threshold': 10,
}
Configuration hierarchy: Inline > File-level > Django settings > Defaults
Detailed Reports
with monitor() as result:
    response = self.client.get('/api/users/')

# Print full performance breakdown
result.explain()
Example output:
============================================================
MERCURY PERFORMANCE REPORT
============================================================
📊 METRICS:
  Response time: 156.32ms (threshold: 100ms)
  Query count: 45 (threshold: 10)

🔍 N+1 PATTERNS DETECTED:
  ❌ FAIL [23x] SELECT * FROM "auth_user" WHERE "id" = ?
    ├ SELECT * FROM "auth_user" WHERE "id" = 1
    ├ SELECT * FROM "auth_user" WHERE "id" = 2
    └ SELECT * FROM "auth_user" WHERE "id" = 3
  ⚠️ WARN [8x] SELECT * FROM "user_profile" WHERE "user_id" = ?

❌ FAILURES:
  ⏱️ Response time 156.32ms exceeded threshold 100ms (+56.32ms over)
  🔢 Query count 45 exceeded threshold 10 (+35 extra queries)
  🔁 N+1 pattern detected: 23 similar queries (threshold: 10)
     Pattern: SELECT * FROM "auth_user" WHERE "id" = ?
============================================================
What Gets Monitored
Response Time
Measures end-to-end execution time using high-precision perf_counter().
Default threshold: 200ms
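As an illustration, a threshold-checked timer around perf_counter() can be sketched like this (a simplified stand-in with invented names, not Mercury's actual implementation):

```python
import time
from contextlib import contextmanager

@contextmanager
def timed(threshold_ms=200.0):
    """Sketch of a threshold-checked timing block (names are illustrative)."""
    start = time.perf_counter()
    result = {}
    yield result
    # Runs when the with-block exits: record elapsed time, then enforce the threshold.
    result["response_time_ms"] = (time.perf_counter() - start) * 1000.0
    if result["response_time_ms"] > threshold_ms:
        raise AssertionError(
            f"Response time {result['response_time_ms']:.2f}ms "
            f"exceeded threshold {threshold_ms:.0f}ms"
        )

with timed(threshold_ms=1000.0) as result:
    sum(range(1000))  # stand-in for the monitored request
print(f"{result['response_time_ms']:.2f}ms")
```

perf_counter() is the right clock for this job because it is monotonic and high-resolution, unlike time.time().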
Query Count
Tracks all database queries executed during the monitored block using Django's CaptureQueriesContext.
Default threshold: 20 queries
N+1 Query Detection
Automatically normalizes SQL queries and detects repeated patterns:
-- These are detected as the same pattern:
SELECT * FROM users WHERE id = 1
SELECT * FROM users WHERE id = 2
SELECT * FROM users WHERE id = 999
-- Normalized to:
SELECT * FROM users WHERE id = ?
Detection levels:
- Failure: Count >= threshold (default: 10)
- Warning: Count >= 80% of threshold
- Notice: Count >= 50% of threshold (minimum 3)
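Under those rules, a severity classifier might look like the following sketch (the function name and exact boundary handling are assumptions):

```python
def classify_pattern(count: int, threshold: int = 10) -> str:
    """Map a repeated-query count to a severity level per the documented rules."""
    if count >= threshold:
        return "failure"            # at or above the configured threshold
    if count >= 0.8 * threshold:
        return "warning"            # within 80% of the threshold
    if count >= max(0.5 * threshold, 3):
        return "notice"             # within 50%, but at least 3 repeats
    return "ok"

print(classify_pattern(23))  # failure
print(classify_pattern(8))   # warning
print(classify_pattern(5))   # notice
print(classify_pattern(2))   # ok
```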
Smart SQL Normalization
Handles:
- String literals: 'hello' → ?
- Numbers: 123, 45.67 → ?
- UUIDs: '550e8400-e29b-41d4-a716-446655440000' → ?
- IN clauses: IN (1, 2, 3) → IN (?)
- Boolean values: TRUE, FALSE → ?
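A rough pure-Python approximation of this normalization (the regex order and function name are assumptions, not Mercury's actual code):

```python
import re
from collections import Counter

def normalize_sql(sql: str) -> str:
    """Collapse literals so structurally identical queries compare equal."""
    sql = re.sub(r"'[^']*'", "?", sql)                            # strings and UUIDs
    sql = re.sub(r"\bIN\s*\([^)]*\)", "IN (?)", sql, flags=re.I)  # IN clauses
    sql = re.sub(r"\b\d+(?:\.\d+)?\b", "?", sql)                  # numeric literals
    sql = re.sub(r"\b(?:TRUE|FALSE)\b", "?", sql, flags=re.I)     # booleans
    return sql

# Three queries that differ only in the id literal collapse to one pattern.
queries = [f"SELECT * FROM users WHERE id = {i}" for i in (1, 2, 999)]
patterns = Counter(normalize_sql(q) for q in queries)
print(patterns)  # Counter({'SELECT * FROM users WHERE id = ?': 3})
```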
Configuration Options
MERCURY_PERFORMANCE_THRESHOLDS = {
    # Response time in milliseconds
    'response_time_ms': 200,
    # Maximum number of queries
    'query_count': 20,
    # N+1 pattern failure threshold
    'n_plus_one_threshold': 10,
}
Priority order (highest to lowest):
1. Inline: monitor(response_time_ms=100)
2. File-level: MERCURY_PERFORMANCE_THRESHOLDS in the test module
3. Django settings: settings.MERCURY_PERFORMANCE_THRESHOLDS
4. Defaults: built-in sensible values
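The layered lookup could be sketched like this (the function and layer dicts are illustrative, not Mercury's API):

```python
# Built-in fallback values, matching the documented defaults.
DEFAULTS = {"response_time_ms": 200, "query_count": 20, "n_plus_one_threshold": 10}

def resolve_threshold(name, inline=None, file_level=None, settings_level=None):
    """Walk the layers highest-priority first; fall back to built-in defaults."""
    for layer in (inline, file_level, settings_level):
        if layer is not None and name in layer:
            return layer[name]
    return DEFAULTS[name]

print(resolve_threshold("query_count", inline={"query_count": 5}))           # 5
print(resolve_threshold("query_count", settings_level={"query_count": 15}))  # 15
print(resolve_threshold("query_count"))                                      # 20
```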
Disabling Colors
Mercury uses ANSI colors for professional terminal output. To disable colors (useful for CI/CD logs):
# Standard NO_COLOR environment variable (https://no-color.org/)
NO_COLOR=1 python -m unittest tests/
# Or Mercury-specific
MERCURY_NO_COLOR=1 python manage.py test
# In GitHub Actions
env:
  NO_COLOR: 1
When colors are disabled, you get clean plain text output perfect for log parsing.
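A typical way such flags get honored looks like the following sketch of conventional NO_COLOR handling (not necessarily Mercury's exact logic):

```python
import os
import sys

def colors_enabled() -> bool:
    """Disable color when NO_COLOR/MERCURY_NO_COLOR is set or output is piped."""
    if os.environ.get("NO_COLOR") or os.environ.get("MERCURY_NO_COLOR"):
        return False
    # Only color interactive terminals; piped output stays plain.
    return sys.stdout.isatty()

os.environ["NO_COLOR"] = "1"
print(colors_enabled())  # False
```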
Smart Test Discovery (Management Command)
Add 'django_mercury' to INSTALLED_APPS to unlock the management command:
# Auto-discovers and runs only tests using monitor()
python manage.py mercury_test
How it works:
1. Scans the project for test_*.py files
2. AST-parses each file to find from django_mercury import monitor
3. Finds test methods that use the with monitor() context manager
4. Runs only those tests (skips non-performance tests)
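The AST step can be illustrated in pure Python (a sketch with invented names; Mercury's real scanner is more thorough):

```python
import ast

def find_mercury_tests(source: str) -> list[str]:
    """Return test method names that contain a `with monitor()` block."""
    tests = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.FunctionDef) and node.name.startswith("test_"):
            for sub in ast.walk(node):
                # Match `with monitor(...)` among the with-statement's items.
                if isinstance(sub, ast.With) and any(
                    isinstance(item.context_expr, ast.Call)
                    and getattr(item.context_expr.func, "id", None) == "monitor"
                    for item in sub.items
                ):
                    tests.append(node.name)
                    break
    return tests

SAMPLE = """
from django_mercury import monitor

class UserTests:
    def test_fast(self):
        with monitor() as result:
            pass

    def test_plain(self):
        pass
"""
print(find_mercury_tests(SAMPLE))  # ['test_fast']
```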
Example output:
Discovering Mercury performance tests...
Found 3 file(s) with 8 Mercury test(s):
  ✓ accounts/tests/test_auth_performance.py (2 tests)
  ✓ api/tests/test_user_endpoints.py (5 tests)
  ✓ dashboard/tests/test_views.py (1 test)
Running 8 Mercury performance tests...
[... individual test reports ...]
================================================================================
MERCURY SUMMARY
================================================================================
Total tests monitored: 8
Passed: 7 (88%) Failed: 1 (12%)
Slowest tests:
1. test_user_list_with_joins - 567.25ms (11 queries)
2. test_dashboard_load - 234.12ms (45 queries, N+1)
3. test_search_autocomplete - 189.45ms (8 queries)
Top issues:
  • 1 test with N+1 patterns
  • 1 test exceeded response time threshold
Average metrics:
Response time: 145.32ms (median: 89.11ms)
Query count: 8.5 (median: 7)
To disable this summary: export MERCURY_NO_SUMMARY=1
================================================================================
Options:
# Filter by app
python manage.py mercury_test myapp
# Filter by test file
python manage.py mercury_test myapp.tests.test_api
# Preserve test database
python manage.py mercury_test --keepdb
# Skip smart discovery (run all tests)
python manage.py mercury_test --no-discover
# Adjust verbosity (0-3)
python manage.py mercury_test --verbosity=2
Benefits:
- Fast: only runs performance tests, skips everything else
- Smart: AST-based detection finds actual monitor() usage
- Automatic: no manual test labels or file paths needed
- Summary: shows aggregated stats and slowest tests at the end
End-of-Run Summary
Mercury automatically tracks all monitored tests and prints a summary on exit:
# Summary enabled by default
python manage.py test
# Disable summary
MERCURY_NO_SUMMARY=1 python manage.py test
The summary shows:
- Pass/fail counts and percentages
- Top 5 slowest tests
- Common issues (N+1 patterns, threshold violations)
- Average and median metrics
Note: Summary only appears when 1+ tests use monitor().
HTML Report Export
Generate beautiful, shareable HTML reports when using the management command:
# Auto-generate filename (mercury_report_TIMESTAMP.html)
python manage.py mercury_test --html
# Specify custom filename
python manage.py mercury_test --html performance_report.html
# Combine with other options
python manage.py mercury_test myapp.tests --html report.html --keepdb
HTML Report Features:
- Dashboard with test statistics (total, pass/fail %, averages)
- Slowest Tests section highlighting performance bottlenecks
- N+1 Patterns Aggregated across all tests - find systemic issues
- All Test Results with expandable details (click to see full metrics)
- Color-coded pass/fail indicators for quick scanning
- Responsive Design - works on desktop and mobile
- Standalone File - no external dependencies, just open in browser
Example Report Sections:
Mercury Performance Test Summary
Generated: 2025-12-10 14:30:22

  Total Tests: 15
  Passed: 13 (87%)
  Failed: 2 (13%)
  Avg Response Time: 145.3ms
  Avg Query Count: 8.5

Slowest Tests (Top 10)
  1. test_user_list_with_joins - 567.25ms (11 queries, N+1)
  2. test_dashboard_load - 234.12ms (45 queries)
  3. test_search_autocomplete - 189.45ms (8 queries)
  ...

N+1 Query Patterns (Aggregated)
Found 3 unique pattern(s) across all tests
  45x occurrences across 3 test(s)
    SELECT * FROM "auth_user" WHERE "id" = ?
    Affected tests:
      • TestUserAPI.test_list_users
      • TestDashboard.test_load
      • TestPerformance.test_bulk_fetch

All Test Results (15 tests)
  ▶ PASS TestUserAPI.test_login - 45ms, 3 queries
      [Click to expand full details...]
  ▶ FAIL TestUserAPI.test_list - 567ms, 11 queries, N+1
      [Click to expand full details...]
Sharing Reports:
- Email to stakeholders or team members
- Attach to PRs/issues for performance reviews
- Archive for historical comparison and trends
- Open directly in any web browser (Chrome, Firefox, Safari)
Individual Test Export:
You can also export individual test results to HTML:
with monitor() as result:
    response = self.client.get('/api/users/')

# Export single result
result.to_html('single_test_report.html')
Advanced Usage
Inspect Results Programmatically
with monitor() as result:
    response = self.client.get('/api/users/')

# Access metrics
assert result.response_time_ms < 100
assert result.query_count <= 10
assert len(result.n_plus_one_patterns) == 0

# Export to JSON
metrics = result.to_dict()
Custom Assertions
from django_mercury import monitor
with monitor() as result:
    response = self.client.get('/api/users/')

# Custom business logic
if result.query_count > 15 and len(result.n_plus_one_patterns) > 0:
    result.explain()
    raise AssertionError("Too many queries with N+1 patterns detected")
Disable Auto-Failures (Manual Checking)
# Catch the exception to prevent test failure
try:
    with monitor() as result:
        response = self.client.get('/api/users/')
except AssertionError as e:
    # Full report is in the exception
    print(e)
    # Decide what to do...
Architecture
Mercury follows SOLID principles with clean separation of concerns:
Core Modules:
- monitor.py - context manager orchestration
- config.py - 4-layer threshold resolution
- n_plus_one.py - SQL normalization and pattern detection
Design Principles:
- Pure functions for easy testing
- Immutable dataclasses for results
- No side effects except Django query capture
- Type hints throughout
- Zero dependencies beyond Django
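For example, an immutable result object might be shaped like the following sketch (the field names follow the documented attributes; the class itself is illustrative, not Mercury's source):

```python
from dataclasses import dataclass

@dataclass(frozen=True)  # frozen=True makes instances immutable after creation
class MonitorResult:
    """Immutable snapshot of one monitored block."""
    response_time_ms: float
    query_count: int
    n_plus_one_patterns: tuple = ()

    def to_dict(self) -> dict:
        """Plain-dict view, suitable for JSON export."""
        return {
            "response_time_ms": self.response_time_ms,
            "query_count": self.query_count,
            "n_plus_one_patterns": list(self.n_plus_one_patterns),
        }

result = MonitorResult(response_time_ms=6.86, query_count=3)
print(result.to_dict())
```

A frozen dataclass guarantees the metrics a test asserted against cannot be mutated later in the test body.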
Real-World Example
from django_mercury import monitor
from django.test import TestCase
from myapp.models import User
class UserAPIPerformanceTest(TestCase):
    def setUp(self):
        # Create test data
        User.objects.bulk_create([
            User(username=f'user{i}') for i in range(100)
        ])

    def test_user_list_without_optimization(self):
        """This will fail - demonstrates N+1 problem."""
        with monitor(query_count=5) as result:
            # Bad: N+1 queries (1 + 100 profile lookups)
            users = User.objects.all()
            for user in users:
                _ = user.profile.bio  # Triggers query per user
        # AssertionError raised with N+1 pattern details

    def test_user_list_with_optimization(self):
        """This passes - select_related prevents N+1."""
        with monitor(query_count=5) as result:
            # Good: 1 query with JOIN
            users = User.objects.select_related('profile').all()
            for user in users:
                _ = user.profile.bio  # No additional queries
        # ✅ Passes threshold checks
Testing Mercury Itself
Mercury has comprehensive test coverage:
# Run all tests
python -m unittest discover tests
# Run specific test module
python -m unittest tests.test_monitor
# With coverage
coverage run -m unittest discover tests
coverage report
Current test suite:
- 46 tests covering all core functionality
- Unit tests for pure functions
- Integration tests for Django components
- Edge case validation
Contributing
We welcome contributions! Mercury is designed for extensibility:
Project Structure
django_mercury/
├── __init__.py        # Public API exports
├── monitor.py         # Main context manager (400 lines)
├── config.py          # Threshold resolution (78 lines)
└── n_plus_one.py      # Pattern detection (96 lines)

tests/
├── test_monitor.py    # Monitor tests (27 tests)
├── test_config.py     # Config tests (5 tests)
└── test_n_plus_one.py # N+1 tests (9 tests)
Development Setup
# Clone repo
git clone https://github.com/80-20-Human-In-The-Loop/Django-Mercury-Performance-Testing.git
cd Django-Mercury-Performance-Testing
# Install dev dependencies
pip install -e ".[dev]"
# Run tests
python -m unittest discover tests
# Format code
black django_mercury tests --line-length 100
isort django_mercury tests --profile black
Code Standards
- Type hints required for all new code
- Pure functions preferred for testability
- Docstrings with examples for public APIs
- Tests for all new functionality
See CONTRIBUTING.md for detailed guidelines.
Philosophy
Mercury follows the 80-20 Human-in-the-Loop principle:
- 80% automation: Detect issues, measure metrics, normalize SQL
- 20% human control: Understand problems, make decisions, fix code
We believe:
- Tools should teach, not just detect
- Automation should preserve understanding
- Performance testing should be accessible to all skill levels
Part of the 80-20 Human-in-the-Loop ecosystem.
License
GNU General Public License v3.0 (GPL-3.0)
We chose GPL to ensure Mercury remains:
- Free - No cost barriers to learning
- Open - Transparent development and review
- Fair - Improvements benefit the entire community
See LICENSE for full text.
FAQ
Q: Do I need to configure anything? A: No. Mercury works with sensible defaults. Configure only when you need stricter/looser thresholds.
Q: Does it work with pytest? A: Yes. Mercury works with any test runner - it's just a context manager.
Q: What's the performance overhead?
A: Minimal. Django's CaptureQueriesContext is already optimized. SQL normalization adds ~1ms per 100 queries.
Q: Can I use this in production? A: Mercury is designed for tests, not production monitoring. Use Django Debug Toolbar or APM tools for production.
Q: Does it work with async views? A: Not yet. Async support is planned for v0.2.0.
Q: Can I customize the report format?
A: Yes. Use result.to_dict() and format however you want. Custom formatters can be contributed as plugins.
Roadmap
v0.1.0 (Current - MVP)
- Context manager monitoring
- N+1 query detection
- 4-layer configuration
- Comprehensive test suite
v0.2.0 (Next)
- Async view support
- Custom formatters API
- Performance trend tracking
- Memory profiling
v1.0.0 (Future)
- CLI with test discovery
- Educational mode with explanations
- Plugin system for extensibility
- MCP server for AI integration
Acknowledgments
- Django Community - For the incredible framework
- EduLite Project - Where Mercury was born
- 80-20 Human-in-the-Loop - For the guiding philosophy
- Contributors - Thank you for making Mercury better!
Django Mercury: Simple, powerful performance testing.
Because every Django developer deserves fast, understandable applications.
Get Started • Documentation • Contributing