How We Automate Accessibility Testing with Playwright and Axe
Our Toolkit: Axe + Playwright
We chose Axe, an open‑source library from Deque Systems, as our accessibility testing engine. It provides a JavaScript API to run tests directly in the browser, and the @axe-core/playwright package makes integration seamless.
Since we already rely on Playwright for visual regression testing and our end-to-end suite, adding accessibility checks was the obvious next step: no new tools to learn, just Axe's engine running inside the Playwright workflows we know well.
Configuration
First, we created a helper to get a pre‑configured Axe instance. Our configuration focuses on WCAG 2.1 Level A and AA criteria.
What is WCAG? The Web Content Accessibility Guidelines (WCAG) are developed by the W3C to make web content more accessible.
- Level A: Minimum level of conformance.
- Level AA: Mid‑range level we target, addressing more advanced barriers.
- Level AAA: Highest, most stringent level.
We also explicitly exclude certain elements that are outside our direct control (e.g., third‑party advertisements) to avoid false positives.
// /test/utils/axe.ts
import { Page } from '@playwright/test';
import AxeBuilder from '@axe-core/playwright';
export const getAxeInstance = (page: Page) => {
return new AxeBuilder({ page })
// Target WCAG 2.1 A and AA success criteria
// (the wcag2* tags cover WCAG 2.0 rules; wcag21* adds the rules introduced in 2.1)
.withTags(['wcag2a', 'wcag2aa', 'wcag21a', 'wcag21aa'])
// Exclude elements we don't control, like ads
.exclude('[id^="google_ads_iframe_"]')
.exclude('#skinadvtop2')
.exclude('#subito_skin_id');
};
Implementation: Generating and Saving Reports
Next, we added a helper generateAxeReport that runs the analysis and writes the results to a JSON file.
// /test/utils/axe.ts (same file as getAxeInstance above; additional imports)
import { Result } from 'axe-core';
import * as fs from 'fs';
import * as path from 'path';
export const generateAxeReport = async (
name: string,
page: Page,
isMobile: boolean,
includeSelector?: string
) => {
let axe = getAxeInstance(page);
// Optionally scope the analysis to a specific selector
if (includeSelector) {
axe = axe.include(includeSelector);
}
const results = await axe.analyze();
const violations = results.violations;
// Save the results to a JSON file
await saveAccessibilityResults(name, violations, isMobile);
return violations;
};
async function saveAccessibilityResults(
fileName: string,
violations: Result[],
isMobile: boolean
) {
const outputDir = 'test/a11y/output';
if (!fs.existsSync(outputDir)) {
fs.mkdirSync(outputDir, { recursive: true });
}
const filePath = path.join(
outputDir,
`${fileName}-${isMobile ? 'mobile' : 'desktop'}.json`
);
// Map violations to a plain object so the report stays small and serializable
const reportViolations = violations.map((violation) => ({
id: violation.id,
impact: violation.impact,
description: violation.description,
help: violation.help,
helpUrl: violation.helpUrl,
nodes: violation.nodes,
}));
fs.writeFileSync(filePath, JSON.stringify(reportViolations, null, 2));
console.log(`Accessibility results saved to ${filePath}`);
}
The A11y Test
With these helpers in place, adding an accessibility check to any Playwright test is straightforward.
// /test/a11y/example.spec.ts
import { test } from '@playwright/test';
import { generateAxeReport } from '../utils/axe';
test('check Login page', async ({ page }) => {
await page.goto('/login_form');
await page.waitForLoadState('domcontentloaded');
// Run the helper
await generateAxeReport('login-page', page, false);
});
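The third argument is the isMobile flag from our helper, which only affects the report's file name. To actually exercise a mobile layout, it can be paired with Playwright's device emulation. A minimal sketch, where the spec file name and the Pixel 5 device choice are illustrative:

// /test/a11y/example.mobile.spec.ts (hypothetical file)
import { test, devices } from '@playwright/test';
import { generateAxeReport } from '../utils/axe';

// Emulate a mobile device for every test in this file
test.use({ ...devices['Pixel 5'] });

test('check Login page (mobile)', async ({ page }) => {
  await page.goto('/login_form');
  await page.waitForLoadState('domcontentloaded');
  // isMobile = true writes login-page-mobile.json
  await generateAxeReport('login-page', page, true);
});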
Running these tests generates one JSON report per page and device (e.g., login-page-desktop.json and login-page-mobile.json) containing all accessibility findings.
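To give a feel for the output, here is an illustrative excerpt of such a report. The finding shown is a hypothetical axe color-contrast violation, and the nodes array (trimmed here) would list every offending element:

[
  {
    "id": "color-contrast",
    "impact": "serious",
    "description": "Ensure the contrast between foreground and background colors meets WCAG 2 AA minimum contrast ratio thresholds",
    "help": "Elements must meet minimum color contrast ratio thresholds",
    "helpUrl": "https://dequeuniversity.com/rules/axe/4.10/color-contrast",
    "nodes": []
  }
]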

Integration with Continuous Integration (CI)
Our CI workflow triggers on every staging deployment. It:
- Runs the accessibility tests against a predefined list of critical pages.
- Generates the JSON reports.
- Updates or creates a dedicated GitHub Issue with the results whenever violations are detected (sketched below).
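The issue-updating step is ordinary scripting rather than anything Playwright-specific. A rough sketch of the idea using @octokit/rest, where the owner, repo, and a11y label are hypothetical placeholders:

// scripts/report-a11y.ts (illustrative sketch, not our production workflow)
import { Octokit } from '@octokit/rest';
import * as fs from 'fs';
import * as path from 'path';

const octokit = new Octokit({ auth: process.env.GITHUB_TOKEN });
const owner = 'our-org'; // hypothetical
const repo = 'our-repo'; // hypothetical

async function upsertAccessibilityIssue() {
  const outputDir = 'test/a11y/output';

  // Build one markdown section per report that contains violations
  const sections = fs
    .readdirSync(outputDir)
    .filter((file) => file.endsWith('.json'))
    .flatMap((file) => {
      const violations = JSON.parse(
        fs.readFileSync(path.join(outputDir, file), 'utf-8')
      ) as Array<{ id: string; help: string }>;
      if (violations.length === 0) return [];
      const rows = violations.map((v) => `- ${v.id}: ${v.help}`);
      return [`### ${file}\n${rows.join('\n')}`];
    });

  // No violations anywhere: leave the issue alone
  if (sections.length === 0) return;

  const body = `## Accessibility violations\n\n${sections.join('\n\n')}`;

  // Reuse the open tracking issue if one exists, otherwise create it
  const { data: open } = await octokit.rest.issues.listForRepo({
    owner,
    repo,
    labels: 'a11y', // hypothetical label
    state: 'open',
  });

  if (open.length > 0) {
    await octokit.rest.issues.update({
      owner,
      repo,
      issue_number: open[0].number,
      body,
    });
  } else {
    await octokit.rest.issues.create({
      owner,
      repo,
      title: 'Accessibility violations (staging)',
      labels: ['a11y'],
      body,
    });
  }
}

upsertAccessibilityIssue().catch((error) => {
  console.error(error);
  process.exit(1);
});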
The automated issue looks like this:
(image omitted for brevity)
And the detailed list of violations:
(image omitted for brevity)
Why a GitHub Issue? (And Not a Failing Build)
Unlike our visual regression tests, which open a PR and send a Slack notification, we chose to log accessibility findings in a GitHub Issue.
We are still building up our accessibility coverage, so failing the pipeline for every violation would be unsustainable. By using an issue:
- We keep a persistent record of the accessibility debt.
- The repository owner is responsible for triaging, prioritising, and scheduling fixes.
Below is an example pull request that addresses a violation previously logged in the GitHub Issue.
(image omitted for brevity)