How We Made Our E2E Tests 12x Faster
Source: Dev.to
Our Playwright end‑to‑end test suite has 15 tests across 5 spec files. They run sequentially because some tests have ordering dependencies — an upload creates a video that later tests verify. The suite was taking around 90 seconds per run. Most of that time was spent doing the same thing: logging in through the UI.
We got it down to 7 seconds. Here’s how.
The bottleneck
Eight of the fifteen tests need authentication. Each one called loginViaUI() before doing anything:
```ts
export async function loginViaUI(page: Page): Promise<void> {
  await page.goto("/login");
  await page.getByLabel("Email").fill(TEST_USER.email);
  await page.getByLabel("Password").fill(TEST_USER.password);
  await page.getByRole("button", { name: "Sign in" }).click();
  await page.waitForURL("/");
}
```
Navigate to the login page → wait for it to load → fill the email → fill the password → click the button → wait for the redirect. That’s 2‑3 seconds per test, times eight — roughly 20 seconds of typing into login forms.
The remaining time came from Docker health checks with conservative timings and a CI wait loop that polled every 3 seconds.
What we tried first: storageState
Playwright has a built‑in solution for this: storageState. The idea is to log in once in a global setup, save the browser’s cookies and localStorage to a JSON file, and load that file for every test. Tests start authenticated without touching the login page.
We implemented it exactly as the docs describe:
```ts
// global-setup.ts
import { chromium, type FullConfig } from "@playwright/test";
import { join } from "path";

async function globalSetup(config: FullConfig) {
  const { baseURL } = config.projects[0].use;
  const browser = await chromium.launch();
  const page = await browser.newPage();
  await page.goto(`${baseURL}/login`);
  await page.getByLabel("Email").fill(TEST_USER.email);
  await page.getByLabel("Password").fill(TEST_USER.password);
  await page.getByRole("button", { name: "Sign in" }).click();
  await page.waitForURL(`${baseURL}/`);
  await page.context().storageState({
    path: join(__dirname, ".auth", "user.json"),
  });
  await browser.close();
}

export default globalSetup;
```
```ts
// playwright.config.ts
export default defineConfig({
  globalSetup: require.resolve("./global-setup"),
  projects: [
    {
      name: "chromium",
      use: {
        storageState: "./e2e/.auth/user.json",
      },
    },
  ],
});
```
Result: It didn’t work. Tests still landed on the login page as if no authentication existed.
Why storageState failed
Our auth system stores the access token in a module‑level variable — not in localStorage or sessionStorage:
```ts
let accessToken: string | null = null;
```
The refresh token is an HTTP‑only cookie. On page load, a ProtectedRoute component checks for the access token in memory. If it’s missing, it calls tryRefreshToken(), which POSTs to /api/auth/refresh with the cookie. If that succeeds, the access token is set in memory and the user is authenticated.
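The flow described above can be sketched like this (the names and shapes here are illustrative, not SendRec's actual code):

```ts
// The access token lives in a module-level variable, invisible to storageState,
// which only captures cookies and localStorage.
let accessToken: string | null = null;

// Stand-in for the POST to /api/auth/refresh; in the real app the HTTP-only
// cookie rides along automatically.
type RefreshFn = () => Promise<{ ok: boolean; token?: string }>;

// ProtectedRoute-style check: use the in-memory token if present, otherwise
// try to refresh; only a successful refresh authenticates the user.
export async function ensureAuthenticated(refresh: RefreshFn): Promise<boolean> {
  if (accessToken !== null) return true;
  const res = await refresh();
  if (!res.ok || !res.token) return false;
  accessToken = res.token; // kept in memory only, so a fresh context starts from null
  return true;
}
```

Because the token never touches storage, every new browser context starts unauthenticated and must go through the refresh call.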
storageState captured the refresh‑token cookie correctly, but the server rotates the refresh token on each use. The first test consumes the token, receives a new one, and the old token becomes invalid. Subsequent tests load the same (now‑invalid) token from the file, the server rejects it, and the test is redirected to the login page.
Thus, storageState assumes tokens remain valid across contexts, while our refresh‑token rotation assumes each token is used exactly once. The two are fundamentally incompatible.
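A toy model makes the mismatch concrete (this illustrates rotate-on-use semantics; it is not the server's real code):

```ts
// The server keeps exactly one valid refresh token per session; presenting it
// consumes it and issues a replacement.
const validTokens = new Set<string>(["rt-1"]);
let counter = 1;

export function rotateRefreshToken(presented: string): string | null {
  if (!validTokens.has(presented)) return null; // stale or replayed token: rejected
  validTokens.delete(presented); // the old token is burned on use
  const next = `rt-${++counter}`;
  validTokens.add(next);
  return next;
}
```

storageState effectively replays `rt-1` for every test: the first test succeeds and burns it; every later test presents the same stale token and gets bounced to the login page.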
What actually worked: API login
Instead of avoiding the login entirely, we made it fast. Playwright’s page.request API lets you make HTTP calls that share cookies with the browser context. A single POST to the login endpoint sets the refresh‑token cookie — no page navigation, no DOM interaction:
```ts
export async function loginViaAPI(page: Page): Promise<void> {
  // Endpoint path, attempt count, and backoff are illustrative; the point is a
  // small retry loop in case the login endpoint rate-limits rapid requests.
  for (let attempt = 0; attempt < 3; attempt++) {
    const response = await page.request.post("/api/auth/login", {
      data: { email: TEST_USER.email, password: TEST_USER.password },
    });
    if (response.ok()) return; // refresh-token cookie is now set on the context
    await page.waitForTimeout(500);
  }
  throw new Error("loginViaAPI: login failed after retries");
}
```

Specs that need auth call it from a `beforeEach` hook:

```ts
test.describe("…", () => {
  test.beforeEach(async ({ page }) => {
    await loginViaAPI(page);
  });

  // …tests…
});
```
The auth spec still uses loginViaUI because it’s testing the actual login UI flow — form rendering, error messages, redirects. Those tests need to exercise the real login page, and that’s intentional.
Connection pooling (bonus)
Our test helpers previously opened a new database connection for every query:
```ts
export async function query(sql: string, params?: unknown[]): Promise<void> {
  const client = new pg.Client({ connectionString: DATABASE_URL });
  await client.connect();
  try {
    await client.query(sql, params);
  } finally {
    await client.end();
  }
}
```
Creating and tearing down a client for each call added unnecessary latency. Switching to a single pooled client (e.g., pg.Pool) reduced the overhead dramatically and contributed to the overall speed‑up.
TL;DR
- `storageState` can’t be used when the server rotates refresh tokens.
- A fast, reliable solution is to log in via the Playwright API (`page.request`) with a small retry loop.
- Replace UI logins with `loginViaAPI` in `beforeEach` hooks; keep UI‑login tests for the actual login flow.
- Use a connection pool for DB helpers to shave off extra milliseconds.
Result: 90 s → 7 s total runtime for the suite. 🚀
Every query opened a new TCP connection, performed the TLS handshake (if applicable), authenticated, executed the query, and closed the connection. For global setup and teardown — which each run a TRUNCATE — that’s two full connection lifecycles.
Replacing pg.Client with pg.Pool keeps connections alive across calls:
```ts
const pool = new pg.Pool({ connectionString: DATABASE_URL, max: 3 });

export async function query(sql: string, params?: unknown[]): Promise<void> {
  await pool.query(sql, params);
}

export async function closePool(): Promise<void> {
  await pool.end();
}
```
The pool is closed in globalTeardown after the final table truncation. This saves around 100‑200 ms per database call — small individually, but it eliminates unnecessary overhead.
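The teardown ordering matters: the pool must outlive the last query. A minimal sketch with the DB helpers injected (the interface and table name are illustrative, not the real helpers):

```ts
// Minimal stand-in for the pg-backed helpers, so the ordering is visible.
interface Db {
  query(sql: string): Promise<void>;
  end(): Promise<void>;
}

// globalTeardown-style cleanup: truncate first, close the pool last.
export async function teardown(db: Db): Promise<void> {
  await db.query("TRUNCATE TABLE sessions"); // final cleanup; table name assumed
  await db.end(); // closing the pool before the TRUNCATE would make it fail
}
```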
Faster health checks
The Docker Compose e2e stack had a conservative health‑check configuration:
```yaml
healthcheck:
  test: ["CMD", "wget", "--spider", "-q", "http://localhost:8080/api/health"]
  interval: 5s
  timeout: 5s
  start_period: 15s
```
The Go binary starts in under a second. A 15‑second start_period adds 14 seconds of waiting. We reduced it to 5 seconds and the check interval from 5 seconds to 2 seconds:
```yaml
healthcheck:
  interval: 2s      # was 5s
  start_period: 5s  # was 15s
```
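Merging the changed keys back into the original block, the final health check (as implied by the two snippets above) looks like:

```yaml
healthcheck:
  test: ["CMD", "wget", "--spider", "-q", "http://localhost:8080/api/health"]
  interval: 2s
  timeout: 5s
  start_period: 5s
```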
The CI workflow had a similar problem. After Docker Compose reported healthy, a shell loop polled the health endpoint as a safety net:
```sh
for i in $(seq 1 60); do
  if curl -sf http://localhost:8080/api/health > /dev/null 2>&1; then
    echo "App is healthy!"
    exit 0
  fi
  sleep 1  # was 3
done
echo "App failed to become healthy" >&2
exit 1
```
The loop also dumped garage‑init logs and tested S3 connectivity on every successful health check — debugging artifacts from the initial setup that we removed.
Results
| Change | Time saved |
|---|---|
| API login (8 tests) | ~20 s |
| Health check timing | ~15 s |
| CI wait loop | ~20 s |
| Connection pooling | ~1 s |
The suite now runs in about 7 seconds locally, down from 90+ seconds. In CI, the total e2e job time drops by roughly a minute, including stack startup.
Key insight: The obvious Playwright solution (storageState) didn’t work because of how our auth system manages tokens. The actual fix was simpler — skip the browser, call the API directly, and handle rate limits. Sometimes the right optimization isn’t eliminating the work; it’s doing it more efficiently.
Try it
SendRec is open source (AGPL‑3.0) and self‑hostable. Check the e2e test suite, pull the image from Docker Hub, or browse the source code.