The Senior QA’s Manifesto: How I Decide What NOT to Automate

Published: January 7, 2026 at 12:49 AM EST
3 min read
Source: Dev.to

After 6+ years in QA, I’ve realized that high coverage is often just a vanity metric. Some of the best engineering teams I’ve worked with have lower UI coverage because they prioritize pipeline speed over script count.

Automation isn’t “free” time—you pay for it in maintenance and frustration every time a developer has to wait for a build to finish, only for it to fail because of a flaky selector. A flaky test is worse than no test at all; it creates noise that the team eventually starts to ignore. Once developers stop trusting the red lights in CI, the QA process is effectively dead.
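A cheap way to make flakiness visible instead of ignorable is to let the runner tag it. A minimal config sketch (the reporter choice is just one option): with retries enabled in CI, a Playwright test that fails and then passes on retry is reported as “flaky” rather than silently green, which gives the team a concrete fix‑or‑delete list.

```typescript
// playwright.config.ts — minimal sketch
import { defineConfig } from '@playwright/test';

export default defineConfig({
  retries: process.env.CI ? 2 : 0, // retry only in CI, never locally
  reporter: [['list'], ['html']],  // the HTML report flags "flaky" tests
});
```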

When I skip automation

  • Volatile features – If the UI or requirements change every few days, you’re writing throw‑away code. I wait until the feature has survived at least two sprints without a major logic change before automating it.
  • Nightmare setups – If testing a single “Submit” button requires seeding 10 databases, bypassing two‑factor authentication, and mocking 5 external APIs, the ROI isn’t there. I’ll spend 30 seconds checking it manually instead of two days debugging a brittle script.

Test‑selection strategy

The “Must‑Automate” List

  • Complex calculations – Humans are prone to errors with tax logic, currency conversions, etc. Machines don’t get tired of math.
  • Smoke test – The “happy path” that proves the app starts and a user can log in. This is the heartbeat of your pipeline.
  • Data integrity – Verify that data entered in step 1 actually appears in the database in step 10.
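The “complex calculations” point is where automation pays off fastest. A minimal sketch in plain JavaScript (the function name, tax rates, and amounts are illustrative): working in integer cents side‑steps floating‑point drift, and a table of cases gets re‑checked on every single run.

```javascript
// Hypothetical tax calculation check — rates and amounts are illustrative.
// Working in integer cents avoids floating-point drift (0.1 + 0.2 !== 0.3).
function addTaxCents(subtotalCents, taxRate) {
  return Math.round(subtotalCents * (1 + taxRate));
}

// Table-driven assertions: a machine never gets bored of re-checking rows.
const cases = [
  { subtotal: 1999, rate: 0.0825, expected: 2164 }, // $19.99 + 8.25% tax
  { subtotal: 100,  rate: 0.07,   expected: 107  },
  { subtotal: 0,    rate: 0.0825, expected: 0    },
];

for (const { subtotal, rate, expected } of cases) {
  const got = addTaxCents(subtotal, rate);
  if (got !== expected) throw new Error(`expected ${expected}, got ${got}`);
}
console.log('all tax cases pass');
```

The same table can grow with every bug report; a human re‑checking it by hand after each release would burn out in a week.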

The “Do‑Not‑Touch” List

  • Visual “feel” – A script can tell you a button passes an is_visible() check, but it won’t detect overlapping text or unreadable fonts on a 13‑inch laptop.
  • One‑and‑done features – Seasonal promotions that last two weeks don’t merit three days of scripting.
  • Driving every test through the browser – This is slow, expensive, and fragile. For example, verifying that a user’s profile updated does not require clicking through the entire UI each time.

UI‑heavy vs. balanced approach

// UI‑heavy (fragile)
import { test, expect } from '@playwright/test';

test('user updates profile - the slow way', async ({ page }) => {
  await page.goto('/settings');
  await page.fill('#bio-input', 'New Bio');
  await page.click('.save-button-variant-2'); // This selector will break eventually
  await expect(page.locator('.success-toast')).toBeVisible();
});
// Balanced (faster and stable)

// 1. Check the logic via API (milliseconds)
test('profile update data integrity', async ({ request }) => {
  const response = await request.patch('/api/user/profile', {
    data: { bio: 'New Bio' }
  });
  expect(response.ok()).toBeTruthy();
});

// 2. Check the UI once (does the button work?)
test('save button triggers action', async ({ page }) => {
  await page.goto('/settings');
  await page.click('button:has-text("Save")');
  await expect(page.locator('.success-toast')).toBeVisible();
  // No DB check needed; the API test already covered that.
});

E‑commerce checkout example

A typical checkout flow includes:

  1. Adding an item to a cart.
  2. Entering a shipping address.
  3. Entering a credit card.
  4. Verifying the order confirmation.

Automating this purely through the UI introduces 40–50 locators that could break. A single network hiccup or minor CSS change can stop the entire build.

My approach:

  • Automate the “Add to Cart” and “Checkout” API calls to ensure the backend works.
  • Perform a quick manual “sanity check” on the UI across different browser resolutions.
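The API half of that approach can be sketched with Playwright’s request fixture, which the earlier profile example already uses. The /api/cart and /api/checkout endpoints and payloads below are hypothetical — adapt them to your backend:

```typescript
// Sketch of API-level checkout checks; endpoints and payloads are illustrative.
import { test, expect } from '@playwright/test';

test('checkout backend works end to end', async ({ request }) => {
  // 1. Add an item to the cart — no browser, no locators.
  const add = await request.post('/api/cart', {
    data: { sku: 'SKU-123', qty: 1 },
  });
  expect(add.ok()).toBeTruthy();

  // 2. Submit shipping and payment in one call.
  const order = await request.post('/api/checkout', {
    data: {
      address: { line1: '1 Test St', zip: '00000' },
      card: { token: 'tok_test' }, // tokenized test card, never raw numbers
    },
  });
  expect(order.ok()).toBeTruthy();

  // 3. Verify the order confirmation at the data level.
  const body = await order.json();
  expect(body.orderId).toBeTruthy();
});
```

Three requests replace 40–50 locators, and a CSS change can no longer take the build down.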

This keeps the pipeline green and developers happy.

Benefits of selective automation

  • Faster and more reliable CI pipelines
  • Fewer false failures and re‑runs
  • Higher developer trust in test results
  • Reduced maintenance cost
  • Faster release cycles with lower risk

The goal isn’t maximum coverage—it’s maximum confidence. If a test slows delivery without meaningfully reducing risk, it doesn’t belong in the pipeline.
