Announcing pytest-test-categories v1.1.0: Bring Google Testing Philosophy to Python
The Problem With Most Test Suites
Be honest – how many of these apply to your codebase?
- ⏱️ Slow CI pipelines because tests have no time budgets
- 🎲 Flaky tests from network timeouts, race conditions, or shared state
- 🔺 Inverted test pyramid with too many slow integration tests
- 🚫 No enforced boundaries between unit, integration, and system tests
If you nodded at any of these, you’re not alone. These are among the most common testing anti‑patterns in Python projects.
Introducing pytest-test-categories v1.1.0
A pytest plugin that brings Google’s battle‑tested testing philosophy (from *Software Engineering at Google*) to Python.
```bash
pip install pytest-test-categories
```
1. Categorize tests by size with clear resource constraints
```python
import pytest
import requests

@pytest.mark.small
def test_pure_function():
    """Must complete in 1 second – no network, filesystem, or database."""
    assert sorted([3, 1, 2]) == [1, 2, 3]  # pure in-memory logic only

@pytest.mark.large
def test_external_api():
    """Full network access, up to 15 minutes."""
    response = requests.get("https://api.example.com")
    assert response.ok

@pytest.mark.xlarge
def test_extended_e2e():
    """Full access, up to 15 minutes, for extensive E2E tests."""
    pass  # long-running end-to-end workflow
```
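Medium tests sit between these extremes: localhost‑only network access and a 5‑minute budget (see the resource matrix below). A sketch of what one might look like, using pytest’s built‑in `tmp_path` fixture and the standard library’s `sqlite3` – the test itself is illustrative, not from the plugin’s docs:

```python
import sqlite3
import pytest

@pytest.mark.medium
def test_sqlite_roundtrip(tmp_path):
    """Filesystem and database access are allowed for medium tests."""
    db = sqlite3.connect(tmp_path / "app.db")  # real file, but local and temporary
    db.execute("CREATE TABLE users (name TEXT)")
    db.execute("INSERT INTO users VALUES ('ada')")
    assert db.execute("SELECT name FROM users").fetchone() == ("ada",)
```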
2. Enforce hermeticity
When a small test touches a forbidden resource, the plugin fails it with an actionable error:
```text
======================================================================
[TC001] Network Violation
======================================================================
Category: SMALL
What happened:
  SMALL test attempted network connection to api.example.com:443
To fix this (choose one):
  • Mock the network call using responses, httpretty, or respx
  • Use dependency injection to provide a fake HTTP client
  • Change test category to @pytest.mark.medium
======================================================================
```
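For illustration, this is the kind of small test that would trigger the error above (the test is hypothetical; the `requests` call is real):

```python
import pytest
import requests

@pytest.mark.small
def test_fetch_users():
    # A real network call from a SMALL test – the plugin blocks the
    # connection and fails the test with [TC001] Network Violation.
    response = requests.get("https://api.example.com/users")
    assert response.ok
```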
3. Validate your test pyramid
After each run, the plugin reports your suite’s size distribution against the target percentages:
```text
======================== Test Size Distribution ========================
Small:  120 tests (80.0%) - Target: 80% ✓
Medium:  22 tests (14.7%) - Target: 15% ✓
Large:    8 tests ( 5.3%) - Target:  5% ✓
========================================================================
```
No “allow network” escape hatch
The plugin deliberately lacks a @pytest.mark.allow_network marker. Adding such a marker would defeat the purpose:
```python
# This defeats the entire purpose
@pytest.mark.small
@pytest.mark.allow_network  # ❌ This marker doesn't exist!
def test_api():
    requests.get("https://api.example.com")  # Still flaky!
```
Instead, use the appropriate category:
```python
@pytest.mark.medium  # ✓ Honest about what the test does
def test_api():
    requests.get("https://api.example.com")
```
Resource matrix by test size
| Resource | Small | Medium | Large | XLarge |
|---|---|---|---|---|
| Time budget | 1 s | 5 min | 15 min | 15 min |
| Network | ❌ Blocked | ✓ Localhost only | ✓ Allowed | ✓ Allowed |
| Filesystem | ❌ Blocked | ✓ Allowed | ✓ Allowed | ✓ Allowed |
| Database | ❌ Blocked | ✓ Allowed | ✓ Allowed | ✓ Allowed |
| Subprocess | ❌ Blocked | ✓ Allowed | ✓ Allowed | ✓ Allowed |
| Sleep | ❌ Blocked | ✓ Allowed | ✓ Allowed | ✓ Allowed |
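The Sleep row deserves a note: small tests can’t call `time.sleep`, so time‑dependent logic should take a clock as an injected dependency, as the violation message above also suggests. A minimal sketch – the `FakeClock` class and `is_expired` function are illustrative, not part of the plugin:

```python
import pytest

class FakeClock:
    """Deterministic stand-in for time.monotonic – no real waiting."""
    def __init__(self) -> None:
        self.now = 0.0
    def monotonic(self) -> float:
        return self.now
    def advance(self, seconds: float) -> None:
        self.now += seconds

def is_expired(started_at: float, ttl: float, clock: FakeClock) -> bool:
    return clock.monotonic() - started_at >= ttl

@pytest.mark.small
def test_ttl_expiry_without_sleeping():
    clock = FakeClock()
    started = clock.monotonic()
    clock.advance(61.0)  # simulate a minute passing instantly
    assert is_expired(started, ttl=60.0, clock=clock)
```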
Mocking in small tests
Small tests can use mocking libraries (e.g., responses, respx, pytest-mock, pyfakefs, VCR.py) without triggering violations because mocks intercept at the library layer before reaching the actual resource:
```python
import pytest
import responses
import requests

@pytest.mark.small
@responses.activate
def test_api_with_mock():
    """This is hermetic – no real network call is made."""
    responses.add(
        responses.GET,
        "https://api.example.com/users",
        json={"users": []},
        status=200,
    )
    response = requests.get("https://api.example.com/users")
    assert response.json() == {"users": []}
```
For filesystem operations in small tests, use pyfakefs or in‑memory objects such as io.StringIO / io.BytesIO.
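For example, a minimal sketch using pyfakefs’s `fs` fixture (installed with `pip install pyfakefs`; the test itself is illustrative):

```python
import pytest

@pytest.mark.small
def test_write_config(fs):  # `fs` is pyfakefs's fake-filesystem fixture
    # open/os/pathlib calls are redirected to an in-memory filesystem,
    # so the test stays hermetic – no real disk I/O happens.
    fs.create_file("/etc/app.conf", contents="debug = true")
    with open("/etc/app.conf") as fh:
        assert "debug" in fh.read()
```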
Gradual enforcement rollout
You don’t have to go strict on day one:
```toml
# pyproject.toml – change the value as the rollout progresses
[tool.pytest.ini_options]
test_categories_enforcement = "off"  # Week 1: discovery – see what would fail
# Weeks 2–4: migration – set to "warn" to report violations without failing
# Week 5+:  enforced – set to "strict" so violations fail the build
```
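Because `test_categories_enforcement` is an ordinary pytest ini option, you can also override it per environment with pytest’s built‑in `-o`/`--override-ini` flag, e.g. keeping local runs permissive while CI stays strict:

```bash
# CI job: enforce strictly regardless of the value in pyproject.toml
pytest -o test_categories_enforcement=strict
```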
Parallel execution support
Run your categorized tests in parallel with full support for pytest-xdist (install it with `pip install pytest-xdist`):

```bash
pytest -n auto
```
Distribution stats are aggregated correctly across workers, and timer isolation prevents race conditions.
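The two features compose, so a parallel CI job can still emit a single aggregated report:

```bash
pytest -n auto --test-size-report=json --test-size-report-file=report.json
```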
Export machine‑readable reports
Generate JSON reports for dashboards and CI integration:
```bash
pytest --test-size-report=json --test-size-report-file=report.json
```
```json
{
  "summary": {
    "total_tests": 150,
    "distribution": {
      "small": { "count": 120, "percentage": 80.0 },
      "medium": { "count": 22, "percentage": 14.67 },
      "large": { "count": 8, "percentage": 5.33 }
    },
    "violations": {
      "timing": 0,
      "hermeticity": {
        "network": 0,
        "filesystem": 0,
        "subprocess": 0,
        "database": 0,
        "sleep": 0,
        "total": 0
      }
    }
  }
}
```
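Because the report is plain JSON, gating CI on the pyramid takes only a few lines. A sketch – the 75% threshold is an arbitrary illustration, not a plugin default:

```python
import json
import sys

with open("report.json") as fh:
    report = json.load(fh)

small_pct = report["summary"]["distribution"]["small"]["percentage"]

# Fail the build if the pyramid's base has eroded too far.
if small_pct < 75.0:
    sys.exit(f"Small tests are only {small_pct:.1f}% of the suite (want >= 75%)")
```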
Installation & quick start
```bash
pip install pytest-test-categories
```
Enable enforcement in your pyproject.toml:
```toml
[tool.pytest.ini_options]
test_categories_enforcement = "warn"
```
Resources
- 📦 PyPI
- 📖 Documentation