
A performance testing framework for Django that helps you understand and fix performance issues, not just detect them


Django Mercury Performance Testing

PyPI version · Python 3.10+ · Django 3.2-5.1 · License: GPL v3

Simple, powerful performance monitoring for Django tests.

from django_mercury import monitor

with monitor(response_time_ms=100) as result:
    response = client.get('/api/users/')
# Automatic threshold checking - raises AssertionError on violations

When a threshold is violated, the monitor fails the test with a full report:

============================================================
MERCURY PERFORMANCE REPORT
============================================================

🧪 Test: AuthEndpointPerformance.test_login_under_100ms
📍 Location: accounts/tests/mercury/test_auth_performance.py:20

📊 METRICS:
   Response time: 568.43ms (threshold: 100.00ms)
   Query count:   11 (threshold: 10)

✅ No N+1 patterns detected

❌ FAILURES:
   ⏱️  Response time 568.43ms exceeded threshold 100ms (+468.43ms over)
   🔢 Query count 11 exceeded threshold 10 (+1 extra queries)

============================================================

The query-count threshold (10 in the report above) can be changed alongside the response time:

with monitor(response_time_ms=10, query_count=5) as result:
    response = self.client.get('/api/v1/auth/me/')
result.explain()  # print what the monitor found

If the Mercury test isn't failing but you still want to see the monitored stats, use .explain():

============================================================
MERCURY PERFORMANCE REPORT
============================================================

🧪 Test: AuthEndpointPerformance.test_auth_me_under_50ms
📍 Location: accounts/tests/mercury/test_auth_performance.py:32

📊 METRICS:
   Response time: 6.86ms (threshold: 10.00ms)
   Query count:   3 (threshold: 5)

✅ No N+1 patterns detected

============================================================

No failure, but still useful information to help you understand your project and tune the performance thresholds.

Why Mercury?

Most performance tools just detect problems. Mercury explains them in your test output, with clear context and actionable fixes.

No configuration required. Works out of the box with sensible defaults. Customize when you need to.

Built for real Django projects. Detects N+1 queries, slow responses, and excessive database calls automatically.

Installation

pip install django-mercury-performance

Quick Start

Basic Usage

from django_mercury import monitor
from django.test import TestCase

class UserAPITest(TestCase):
    def test_user_list_performance(self):
        """Monitor performance with zero configuration."""
        with monitor() as result:
            response = self.client.get('/api/users/')

        # If thresholds exceeded, AssertionError with full report is raised
        # Otherwise, check metrics manually:
        print(f"Response time: {result.response_time_ms:.2f}ms")
        print(f"Queries: {result.query_count}")

Custom Thresholds

# Override defaults inline
with monitor(response_time_ms=50, query_count=5) as result:
    response = self.client.get('/api/users/')

# Or configure per-file
MERCURY_PERFORMANCE_THRESHOLDS = {
    'response_time_ms': 100,
    'query_count': 10,
    'n_plus_one_threshold': 8,
}

# Or in Django settings.py
MERCURY_PERFORMANCE_THRESHOLDS = {
    'response_time_ms': 200,
    'query_count': 20,
    'n_plus_one_threshold': 10,
}

Configuration hierarchy: Inline > File-level > Django settings > Defaults

Detailed Reports

with monitor() as result:
    response = self.client.get('/api/users/')

# Print full performance breakdown
result.explain()

Example output:

============================================================
MERCURY PERFORMANCE REPORT
============================================================

📊 METRICS:
   Response time: 156.32ms (threshold: 100ms)
   Query count:   45 (threshold: 10)

🔄 N+1 PATTERNS DETECTED:
   ❌ FAIL [23x] SELECT * FROM "auth_user" WHERE "id" = ?
        → SELECT * FROM "auth_user" WHERE "id" = 1
        → SELECT * FROM "auth_user" WHERE "id" = 2
        → SELECT * FROM "auth_user" WHERE "id" = 3

   ⚠️  WARN [8x] SELECT * FROM "user_profile" WHERE "user_id" = ?

❌ FAILURES:
   ⏱️  Response time 156.32ms exceeded threshold 100ms (+56.32ms over)
   🔢 Query count 45 exceeded threshold 10 (+35 extra queries)
   🔄 N+1 pattern detected: 23 similar queries (threshold: 10)
      Pattern: SELECT * FROM "auth_user" WHERE "id" = ?

============================================================

What Gets Monitored

Response Time

Measures end-to-end execution time using high-precision perf_counter().

Default threshold: 200ms
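The timing approach can be sketched as a bare context manager (a simplified illustration, not Mercury's implementation; `timed` is a hypothetical name):

```python
import time
from contextlib import contextmanager

@contextmanager
def timed(threshold_ms: float = 200.0):
    """Measure a block with perf_counter() and enforce a threshold on exit."""
    result = {}
    start = time.perf_counter()
    try:
        yield result
    finally:
        # Recorded even if the body raises; the original exception still wins.
        result["response_time_ms"] = (time.perf_counter() - start) * 1000
    if result["response_time_ms"] > threshold_ms:
        raise AssertionError(
            f"Response time {result['response_time_ms']:.2f}ms "
            f"exceeded threshold {threshold_ms}ms"
        )
```

Like Mercury's monitor, the check runs when the block exits, so the measured time covers everything inside the `with` body.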

Query Count

Tracks all database queries executed during the monitored block using Django's CaptureQueriesContext.

Default threshold: 20 queries

N+1 Query Detection

Automatically normalizes SQL queries and detects repeated patterns:

-- These are detected as the same pattern:
SELECT * FROM users WHERE id = 1
SELECT * FROM users WHERE id = 2
SELECT * FROM users WHERE id = 999

-- Normalized to:
SELECT * FROM users WHERE id = ?

Detection levels:

  • Failure: Count >= threshold (default: 10)
  • Warning: Count >= 80% of threshold
  • Notice: Count >= 50% of threshold (minimum 3)

Smart SQL Normalization

Handles:

  • String literals: 'hello' โ†’ ?
  • Numbers: 123, 45.67 โ†’ ?
  • UUIDs: '550e8400-e29b-41d4-a716-446655440000' โ†’ ?
  • IN clauses: IN (1, 2, 3) โ†’ IN (?)
  • Boolean values: TRUE, FALSE โ†’ ?

Configuration Options

MERCURY_PERFORMANCE_THRESHOLDS = {
    # Response time in milliseconds
    'response_time_ms': 200,

    # Maximum number of queries
    'query_count': 20,

    # N+1 pattern failure threshold
    'n_plus_one_threshold': 10,
}

Priority order (highest to lowest):

  1. Inline: monitor(response_time_ms=100)
  2. File-level: MERCURY_PERFORMANCE_THRESHOLDS in test module
  3. Django settings: settings.MERCURY_PERFORMANCE_THRESHOLDS
  4. Defaults: Built-in sensible values
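The resolution order can be sketched as a simple layered merge (illustrative only; `resolve_thresholds` and its parameter names are hypothetical, not Mercury's API):

```python
DEFAULTS = {"response_time_ms": 200, "query_count": 20, "n_plus_one_threshold": 10}

def resolve_thresholds(inline=None, file_level=None, settings_level=None):
    """Merge threshold layers; later (higher-priority) layers overwrite earlier ones."""
    resolved = dict(DEFAULTS)
    for layer in (settings_level, file_level, inline):  # lowest to highest priority
        if layer:
            resolved.update(layer)
    return resolved
```

Any key a layer doesn't set falls through to the next layer down, ending at the built-in defaults.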

Disabling Colors

Mercury uses ANSI colors for professional terminal output. To disable colors (useful for CI/CD logs):

# Standard NO_COLOR environment variable (https://no-color.org/)
NO_COLOR=1 python -m unittest discover tests

# Or Mercury-specific
MERCURY_NO_COLOR=1 python manage.py test

# In GitHub Actions
env:
  NO_COLOR: 1

When colors are disabled, you get clean plain text output perfect for log parsing.
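The usual NO_COLOR convention can be honored with a check like this (a sketch; `colors_enabled` is a hypothetical name):

```python
import os
import sys

def colors_enabled() -> bool:
    """Disable ANSI colors when NO_COLOR or MERCURY_NO_COLOR is set to any non-empty value."""
    if os.environ.get("NO_COLOR") or os.environ.get("MERCURY_NO_COLOR"):
        return False
    # Otherwise, only color output going to an interactive terminal.
    return sys.stdout.isatty()
```

The `isatty()` fallback means piped or CI-captured output stays plain even without the environment variable.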

Advanced Usage

Inspect Results Programmatically

with monitor() as result:
    response = self.client.get('/api/users/')

# Access metrics
assert result.response_time_ms < 100
assert result.query_count <= 10
assert len(result.n_plus_one_patterns) == 0

# Export to JSON
metrics = result.to_dict()

Custom Assertions

from django_mercury import monitor

with monitor() as result:
    response = self.client.get('/api/users/')

# Custom business logic
if result.query_count > 15 and len(result.n_plus_one_patterns) > 0:
    result.explain()
    raise AssertionError("Too many queries with N+1 patterns detected")

Disable Auto-Failures (Manual Checking)

# Catch the exception to prevent test failure
try:
    with monitor() as result:
        response = self.client.get('/api/users/')
except AssertionError as e:
    # Full report is in the exception
    print(e)
    # Decide what to do...

Architecture

Mercury follows SOLID principles with clean separation of concerns:

Core Modules:

  • monitor.py - Context manager orchestration
  • config.py - 4-layer threshold resolution
  • n_plus_one.py - SQL normalization and pattern detection

Design Principles:

  • Pure functions for easy testing
  • Immutable dataclasses for results
  • No side effects except Django query capture
  • Type hints throughout
  • Zero dependencies beyond Django

Real-World Example

from django_mercury import monitor
from django.test import TestCase
from myapp.models import User

class UserAPIPerformanceTest(TestCase):
    def setUp(self):
        # Create test data
        User.objects.bulk_create([
            User(username=f'user{i}') for i in range(100)
        ])

    def test_user_list_without_optimization(self):
        """This will fail - demonstrates N+1 problem."""
        with monitor(query_count=5) as result:
            # Bad: N+1 queries (1 + 100 profile lookups)
            users = User.objects.all()
            for user in users:
                _ = user.profile.bio  # Triggers query per user

        # AssertionError raised with N+1 pattern details

    def test_user_list_with_optimization(self):
        """This passes - select_related prevents N+1."""
        with monitor(query_count=5) as result:
            # Good: 1 query with JOIN
            users = User.objects.select_related('profile').all()
            for user in users:
                _ = user.profile.bio  # No additional queries

        # ✅ Passes threshold checks

Testing Mercury Itself

Mercury has comprehensive test coverage:

# Run all tests
python -m unittest discover tests

# Run specific test module
python -m unittest tests.test_monitor

# With coverage
coverage run -m unittest discover tests
coverage report

Current test suite:

  • 46 tests covering all core functionality
  • Unit tests for pure functions
  • Integration tests for Django components
  • Edge case validation

Contributing

We welcome contributions! Mercury is designed for extensibility:

Project Structure

django_mercury/
├── __init__.py          # Public API exports
├── monitor.py           # Main context manager (400 lines)
├── config.py            # Threshold resolution (78 lines)
└── n_plus_one.py        # Pattern detection (96 lines)

tests/
├── test_monitor.py      # Monitor tests (27 tests)
├── test_config.py       # Config tests (5 tests)
└── test_n_plus_one.py   # N+1 tests (9 tests)

Development Setup

# Clone repo
git clone https://github.com/80-20-Human-In-The-Loop/Django-Mercury-Performance-Testing.git
cd Django-Mercury-Performance-Testing

# Install dev dependencies
pip install -e ".[dev]"

# Run tests
python -m unittest discover tests

# Format code
black django_mercury tests --line-length 100
isort django_mercury tests --profile black

Code Standards

  • Type hints required for all new code
  • Pure functions preferred for testability
  • Docstrings with examples for public APIs
  • Tests for all new functionality

See CONTRIBUTING.md for detailed guidelines.

Philosophy

Mercury follows the 80-20 Human-in-the-Loop principle:

  • 80% automation: Detect issues, measure metrics, normalize SQL
  • 20% human control: Understand problems, make decisions, fix code

We believe:

  • Tools should teach, not just detect
  • Automation should preserve understanding
  • Performance testing should be accessible to all skill levels

Part of the 80-20 Human-in-the-Loop ecosystem.

License

GNU General Public License v3.0 (GPL-3.0)

We chose GPL to ensure Mercury remains:

  • Free - No cost barriers to learning
  • Open - Transparent development and review
  • Fair - Improvements benefit the entire community

See LICENSE for full text.

FAQ

Q: Do I need to configure anything? A: No. Mercury works with sensible defaults. Configure only when you need stricter/looser thresholds.

Q: Does it work with pytest? A: Yes. Mercury works with any test runner - it's just a context manager.

Q: What's the performance overhead? A: Minimal. Django's CaptureQueriesContext is already optimized. SQL normalization adds ~1ms per 100 queries.

Q: Can I use this in production? A: Mercury is designed for tests, not production monitoring. Use Django Debug Toolbar or APM tools for production.

Q: Does it work with async views? A: Not yet. Async support is planned for v0.2.0.

Q: Can I customize the report format? A: Yes. Use result.to_dict() and format however you want. Custom formatters can be contributed as plugins.

Roadmap

v0.1.0 (Current - MVP)

  • โœ… Context manager monitoring
  • โœ… N+1 query detection
  • โœ… 4-layer configuration
  • โœ… Comprehensive test suite

v0.2.0 (Next)

  • ๐Ÿ”จ Async view support
  • ๐Ÿ”จ Custom formatters API
  • ๐Ÿ”จ Performance trend tracking
  • ๐Ÿ”จ Memory profiling

v1.0.0 (Future)

  • ๐Ÿค– CLI with test discovery
  • ๐Ÿค– Educational mode with explanations
  • ๐Ÿค– Plugin system for extensibility
  • ๐Ÿค– MCP server for AI integration

Acknowledgments

  • Django Community - For the incredible framework
  • EduLite Project - Where Mercury was born
  • 80-20 Human-in-the-Loop - For the guiding philosophy
  • Contributors - Thank you for making Mercury better!

Django Mercury: Simple, powerful performance testing.

Because every Django developer deserves fast, understandable applications.

Get Started • Documentation • Contributing
