
A pytest plugin to enforce test timing constraints and size distributions.


Pytest Test Categories Plugin


Enforce Google's hermetic testing practices in Python.
Block network, filesystem, and subprocess access in unit tests. Validate your test pyramid. Eliminate flaky tests.



Quickstart

# Install
pip install pytest-test-categories

# Mark your tests
# @pytest.mark.small  - Fast, hermetic (no I/O)
# @pytest.mark.medium - Can use localhost, filesystem
# @pytest.mark.large  - Full network access

# Enable enforcement in pyproject.toml
# test_categories_enforcement = "strict"

# Run pytest as usual
pytest


Why pytest-test-categories?

The Problem

Flaky tests are a symptom. Hidden external dependencies are the disease.

Most test suites suffer from:

  • Flaky tests - Tests that pass locally but fail in CI due to network timeouts, race conditions, or shared state
  • Slow CI pipelines - Without time budgets, test runtimes grow unbounded
  • Inverted test pyramid - Too many slow integration tests, too few fast unit tests
  • No enforced boundaries - "Unit tests" that secretly hit the database, network, or filesystem

The root cause? Tests with hidden external dependencies that make them non-deterministic.

The Solution

pytest-test-categories brings Google's battle-tested testing philosophy (from "Software Engineering at Google") to Python:

| What | How |
|------|-----|
| Categorize tests by size | @pytest.mark.small, medium, large, xlarge |
| Enforce hermeticity | Block network, filesystem, database, subprocess in small tests |
| Enforce time limits | 1s for small, 5min for medium, 15min for large |
| Validate distribution | Maintain a healthy 80/15/5 test pyramid |

When a small test tries to access the network, it fails immediately with actionable guidance:

======================================================================
[TC001] Network Violation
======================================================================
Category: SMALL

What happened:
  SMALL test attempted network connection to api.example.com:443

To fix this (choose one):
  • Mock the network call using responses, httpretty, or respx
  • Use dependency injection to provide a fake HTTP client
  • Change test category to @pytest.mark.medium
======================================================================



Test Size Categories

| Resource | Small | Medium | Large | XLarge |
|----------|-------|--------|-------|--------|
| Time Limit | 1s | 5min | 15min | 15min |
| Network | ❌ Blocked | Localhost only | ✓ Allowed | ✓ Allowed |
| Filesystem | ❌ Blocked | ✓ Allowed | ✓ Allowed | ✓ Allowed |
| Database | ❌ Blocked | ✓ Allowed | ✓ Allowed | ✓ Allowed |
| Subprocess | ❌ Blocked | ✓ Allowed | ✓ Allowed | ✓ Allowed |
| Sleep | ❌ Blocked | ✓ Allowed | ✓ Allowed | ✓ Allowed |

Small tests must be hermetic - completely isolated from external resources. This eliminates flakiness at the source.
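Hermeticity usually falls out of dependency injection: code that takes its collaborators as parameters can be exercised with fakes instead of real clocks, sockets, or files. The sketch below is our own illustration (the `FakeClock` and `retry_after_backoff` names are hypothetical, not part of the plugin's API) of how a time-dependent function stays testable in a small test without a real sleep:

```python
# Hypothetical example: instead of calling time.sleep() directly, the code
# under test accepts a clock abstraction, so a small test can substitute a
# fake that advances time instantly - no real waiting, no Sleep violation.

class FakeClock:
    """Test double that records elapsed time instead of sleeping."""

    def __init__(self) -> None:
        self.now = 0.0

    def sleep(self, seconds: float) -> None:
        self.now += seconds  # advance simulated time only


def retry_after_backoff(clock, attempts: int, delay: float) -> float:
    """Wait `delay` seconds between `attempts` tries; return elapsed time."""
    for _ in range(attempts - 1):
        clock.sleep(delay)
    return clock.now


clock = FakeClock()
elapsed = retry_after_backoff(clock, attempts=3, delay=0.5)
print(elapsed)  # 1.0 - two back-offs of 0.5s, completed instantly
```

A small test of `retry_after_backoff` then needs no mocking at all: it passes a `FakeClock` and asserts on `elapsed`.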



Installation

pip

pip install pytest-test-categories

uv

uv add pytest-test-categories

Poetry

poetry add pytest-test-categories



Quick Start

Basic Usage

Mark your tests with size markers:

import pytest

@pytest.mark.small
def test_unit():
    """Fast, hermetic unit test - no network, no filesystem, no database."""
    assert 1 + 1 == 2

@pytest.mark.small
def test_with_mocking(mocker):
    """Mocked tests are still hermetic - mocks intercept before the network layer."""
    mocker.patch("requests.get").return_value.json.return_value = {"status": "ok"}
    # Your code that uses requests.get() works without hitting the network

@pytest.mark.medium
def test_integration(tmp_path):
    """Integration test - can access localhost and filesystem."""
    db_path = tmp_path / "test.db"
    # ... test with local database

Or inherit from base test classes:

from pytest_test_categories import SmallTest, MediumTest

class DescribeUserService(SmallTest):
    """All tests in this class are automatically marked as small."""

    def test_validates_email_format(self):
        assert is_valid_email("user@example.com")

    def test_rejects_invalid_email(self):
        assert not is_valid_email("not-an-email")

class DescribeUserRepository(MediumTest):
    """Tests requiring database access."""

    def test_saves_user(self, db_connection):
        # ... test with real database
        ...

Enable Enforcement

By default, enforcement is off. Enable it in pyproject.toml:

[tool.pytest.ini_options]
# Resource isolation: "off" (default), "warn", or "strict"
test_categories_enforcement = "strict"

# Distribution validation: "off" (default), "warn", or "strict"
test_categories_distribution_enforcement = "warn"

Run pytest as usual:

pytest



Configuration

pyproject.toml

[tool.pytest.ini_options]
# Enforcement modes: "strict" (fail), "warn" (warning), "off" (disabled)
test_categories_enforcement = "strict"
test_categories_distribution_enforcement = "warn"

Command-Line Options

CLI options override pyproject.toml settings:

# Enforcement modes
pytest --test-categories-enforcement=strict
pytest --test-categories-distribution-enforcement=warn

# Reporting
pytest --test-size-report=basic      # Summary report
pytest --test-size-report=detailed   # Per-test details
pytest --test-size-report=json       # JSON output
pytest --test-size-report=json --test-size-report-file=report.json

Note: Time limits are fixed per Google's testing standards and cannot be customized. This ensures consistent test categorization across all projects.

Performance Baselines

For performance-critical code paths, you can set custom timeout limits stricter than the category default:

import pytest

@pytest.mark.small(timeout=0.1)  # Must complete in 100ms instead of 1s
def test_critical_path():
    """This test must be fast - we're testing a hot path."""
    result = critical_computation()
    assert result == expected

@pytest.mark.medium(timeout=5.0)  # Must complete in 5s instead of 5min
def test_database_query():
    """Query performance is critical for user experience."""
    results = db.execute(query)
    assert len(results) > 0

When a test exceeds its custom baseline, you get a distinct error:

======================================================================
[TC008] Performance Baseline Violation
======================================================================
Test: tests/test_api.py::test_critical_path
Category: SMALL

What happened:
  Test exceeded custom performance baseline: 0.15s > 0.1s (baseline)
  (Category limit: 1.0s)

To fix this (choose one):
  - Optimize the code to meet the performance baseline
  - Increase the baseline if the current limit is too strict
  - Remove the baseline to use the default category limit
======================================================================

Note: The baseline must be less than or equal to the category's time limit.
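That constraint can be stated in a few lines. The check below is our own illustration of the documented rule (the function and dictionary names are ours, not plugin internals), using the fixed category limits from the table above:

```python
# Illustration of the documented rule: a custom performance baseline must not
# exceed its category's fixed time limit (1s / 5min / 15min / 15min).

CATEGORY_LIMITS = {"small": 1.0, "medium": 300.0, "large": 900.0, "xlarge": 900.0}


def validate_baseline(category: str, baseline: float) -> None:
    """Raise ValueError if the baseline is looser than the category limit."""
    limit = CATEGORY_LIMITS[category]
    if baseline > limit:
        raise ValueError(
            f"baseline {baseline}s exceeds the fixed {category} limit of {limit}s"
        )


validate_baseline("small", 0.1)      # OK: stricter than the 1s limit
try:
    validate_baseline("small", 2.0)  # looser than the 1s limit -> rejected
except ValueError as e:
    print(e)
```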



Enforcement Modes

| Mode | Behavior | Use Case |
|------|----------|----------|
| off | No enforcement (default) | Initial exploration |
| warn | Emit warnings, tests continue | Migration period |
| strict | Fail tests on violations | Production enforcement |

Recommended Migration Path

# Week 1: Discovery - see what would fail
test_categories_enforcement = "off"

# Week 2-4: Migration - fix violations incrementally
test_categories_enforcement = "warn"

# Week 5+: Enforced - violations fail the build
test_categories_enforcement = "strict"



Philosophy: No Override Markers

pytest-test-categories intentionally does not provide per-test override markers like @pytest.mark.allow_network.

If a test needs network access, filesystem access, or other resources, it should be marked with the appropriate size:

# Wrong: Trying to bypass restrictions
@pytest.mark.small
@pytest.mark.allow_network  # This marker does not exist!
def test_api_call():
    ...

# Correct: Use the appropriate test size
@pytest.mark.medium  # Medium tests can access localhost
def test_api_call():
    ...

# Or: Mock the dependency for small tests
@pytest.mark.small
def test_api_call(httpx_mock):
    httpx_mock.add_response(url="https://api.example.com/", json={"status": "ok"})
    ...

Why no escape hatches?

  1. Flaky tests are expensive - escape hatches become the norm, defeating the purpose
  2. Categories have meaning - if a "small" test can access the network, it's not really a small test
  3. Encourages better design - mocking and dependency injection lead to more testable code

See the Design Philosophy documentation for the full rationale.



Test Distribution Targets

| Size | Target Percentage | Tolerance |
|------|-------------------|-----------|
| Small | 80% | ± 5% |
| Medium | 15% | ± 5% |
| Large/XLarge | 5% | ± 3% |

When distribution enforcement is enabled, pytest will warn or fail if your test distribution falls outside these ranges.
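The comparison is straightforward percentage arithmetic. Here is a minimal sketch of an equivalent check (our own code, not the plugin's implementation; the `check_distribution` name and bucket grouping are assumptions based on the table above):

```python
# Illustrative re-implementation of the 80/15/5 distribution check: each
# bucket's share of the suite must fall within target ± tolerance.

TARGETS = {  # bucket -> (target %, tolerance %)
    "small": (80.0, 5.0),
    "medium": (15.0, 5.0),
    "large_xlarge": (5.0, 3.0),  # large and xlarge share one budget
}


def check_distribution(counts: dict) -> list:
    """Return a list of out-of-range buckets (empty list = healthy pyramid)."""
    total = sum(counts.values())
    grouped = {
        "small": counts.get("small", 0),
        "medium": counts.get("medium", 0),
        "large_xlarge": counts.get("large", 0) + counts.get("xlarge", 0),
    }
    problems = []
    for bucket, (target, tol) in TARGETS.items():
        pct = 100.0 * grouped[bucket] / total
        if abs(pct - target) > tol:
            problems.append(f"{bucket}: {pct:.1f}% (target {target}% ± {tol}%)")
    return problems


# 150 tests split 120/22/8 -> 80.0% / 14.7% / 5.3%: all within tolerance
print(check_distribution({"small": 120, "medium": 22, "large": 6, "xlarge": 2}))  # []

# An inverted pyramid trips every bucket
print(check_distribution({"small": 30, "medium": 60, "large": 10}))
```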



Reporting

Terminal Reports

# Basic summary report
pytest --test-size-report=basic

# Detailed report with per-test information
pytest --test-size-report=detailed

JSON Report Export

For CI/CD integration and custom tooling:

# Output JSON report to terminal
pytest --test-size-report=json

# Write JSON report to a file
pytest --test-size-report=json --test-size-report-file=report.json

JSON Report Structure

{
  "timestamp": "2025-11-29T12:00:00.000000Z",
  "summary": {
    "total_tests": 150,
    "distribution": {
      "small": {"count": 120, "percentage": 80.0, "target": 80.0},
      "medium": {"count": 22, "percentage": 14.67, "target": 15.0},
      "large": {"count": 6, "percentage": 4.0, "target": 4.0},
      "xlarge": {"count": 2, "percentage": 1.33, "target": 1.0}
    },
    "violations": {
      "timing": 0,
      "baseline": 0,
      "hermeticity": {
        "network": 0,
        "filesystem": 0,
        "process": 0,
        "database": 0,
        "sleep": 0,
        "total": 0
      }
    }
  },
  "tests": [
    {
      "name": "tests/test_example.py::test_fast_function",
      "size": "small",
      "duration": 0.002,
      "status": "passed",
      "violations": []
    }
  ]
}
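A CI job can consume this report directly. The gate below is a hypothetical example of ours (the script is not shipped with the plugin); the field names follow the documented structure above, and in real use you would read the `report.json` written by `--test-size-report-file` instead of the inline sample:

```python
import json

# Hypothetical CI gate: fail the build if the JSON report recorded any
# hermeticity violations. Field names follow the documented report structure.
report = json.loads("""
{
  "summary": {
    "total_tests": 150,
    "violations": {
      "timing": 0,
      "baseline": 0,
      "hermeticity": {"network": 2, "filesystem": 0, "process": 0,
                      "database": 0, "sleep": 0, "total": 2}
    }
  }
}
""")

hermetic = report["summary"]["violations"]["hermeticity"]
if hermetic["total"] > 0:
    # Report only the resource kinds that actually had violations.
    offenders = {k: v for k, v in hermetic.items() if k != "total" and v > 0}
    print(f"hermeticity violations: {offenders}")  # hermeticity violations: {'network': 2}
```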



Documentation

For comprehensive documentation, visit pytest-test-categories.readthedocs.io.




Contributing

We welcome contributions! Please read our CONTRIBUTING.md for detailed guidelines.

Quick Start for Contributors

  1. Fork and clone the repository
  2. Create an issue describing what you plan to work on
  3. Create a feature branch from main
  4. Make your changes following our coding standards
  5. Run pre-commit hooks to ensure quality: uv run pre-commit run --all-files
  6. Open a pull request linking to your issue



License

This project is licensed under the MIT License.


Happy testing!
Built with the belief that flaky tests are a solved problem.
