Why Traditional Linters Miss Critical Bugs (And What AI Can Do About It)

Published: December 15, 2025 at 09:49 AM EST
5 min read
Source: Dev.to

THE LAYERS OF DEFENSE

Modern software development has multiple layers of bug detection:

Layer 1: Linters (ESLint, Pylint, RuboCop)
What they catch: Syntax errors, style violations, simple patterns
What they miss: Logic errors, security vulnerabilities, performance issues

Layer 2: Type Checkers (TypeScript, Flow, mypy)
What they catch: Type mismatches, undefined variables
What they miss: Runtime errors, business‑logic bugs

Layer 3: Unit Tests
What they catch: Regressions, broken functionality
What they miss: Edge cases, integration issues

Layer 4: Code Review
What it catches: Architecture problems, design issues
What it misses: Subtle bugs (humans are fallible)

But there’s a gap: bugs that are syntactically valid, type‑safe, pass tests, and look correct to human reviewers.

THE BLIND SPOT

Example 1: Missing await

async function getUsers() {
  const response = await fetch('/api/users')
  const users = response.json() // BUG: Missing await
  return users
}
  • ESLint: ✅ No errors
  • TypeScript: ✅ No errors (if response is any)
  • Tests: ✅ Might pass (if tests don’t check data type)
  • Code Review: ❌ Easy to miss

Result: users is a Promise, not an array. Any code expecting users.length or users.map() will fail.

Example 2: SQL Injection

app.get('/user', (req, res) => {
  const query = `SELECT * FROM users WHERE email = '${req.query.email}'`
  db.execute(query)
})
  • ESLint: ✅ No errors
  • TypeScript: ✅ No errors
  • Tests: ✅ Pass (tests use safe inputs)
  • Code Review: ❌ Might miss if reviewer isn’t security‑focused

Result: Critical security vulnerability. An attacker can execute arbitrary SQL:

GET /user?email=' OR '1'='1
GET /user?email='; DROP TABLE users; --
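
The fix is to never build the query string from user input. Here is a minimal sketch of the parameterized version, assuming a mysql2-style db.execute that accepts ? placeholders (check your driver's documentation for the exact syntax):

app.get('/user', (req, res) => {
  // The driver sends the email as data, so it can never change the query's structure
  const query = 'SELECT * FROM users WHERE email = ?'
  db.execute(query, [req.query.email])
})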

Example 3: Memory Leak

function setupWebSocket() {
  const ws = new WebSocket('wss://api.example.com')
  ws.on('message', handleMessage)
  return ws
}

setInterval(() => {
  setupWebSocket()
}, 5000)
  • ESLint: ✅ No errors
  • TypeScript: ✅ No errors
  • Tests: ✅ Pass (short‑lived test environment)
  • Code Review: ❌ Might miss

Result: A new WebSocket is created every 5 seconds and the old ones are never closed. After an hour that’s 720 open connections, and the process eventually exhausts memory or file descriptors and crashes.
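
One possible fix (a sketch, assuming the same Node ws-style API used above): keep a single connection and close the previous socket before opening a replacement.

let ws = null

function setupWebSocket() {
  if (ws) ws.close()                      // release the old connection first
  ws = new WebSocket('wss://api.example.com')
  ws.on('message', handleMessage)
  return ws
}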

Example 4: Race Condition

async function processItems(items) {
  items.forEach(async item => {
    await saveToDatabase(item)
  })
  console.log('All items processed!')
}
  • ESLint: ✅ No errors
  • TypeScript: ✅ No errors
  • Tests: ❌ Might fail intermittently (race condition)
  • Code Review: ❌ Looks reasonable

Result: forEach doesn’t await async callbacks. The console.log runs immediately, before any items are actually saved, leading to possible data loss if the process exits early.
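
A minimal fix sketch: use a for...of loop so each save is actually awaited (or Promise.all if the saves can safely run concurrently).

async function processItems(items) {
  for (const item of items) {
    await saveToDatabase(item)   // awaited, unlike callbacks passed to forEach
  }
  console.log('All items processed!')
}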

WHY TRADITIONAL TOOLS MISS THESE

Pattern‑Matching Limitations

Linters use abstract syntax trees (ASTs) and simple pattern matching:

IF code matches pattern X
THEN flag error Y

This works for syntax errors but fails for semantic errors that require understanding what the code does.
Example: A linter can detect var x = x + 1 (a variable referenced in its own initializer) but can’t detect const users = response.json() (missing await), because both are syntactically valid.
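
To make the pattern-matching point concrete, here is a sketch of what such a rule looks like (a hypothetical ESLint-style rule, not a real plugin): it inspects only the shape of the AST, one level deep, and knows nothing about what the code means.

// Hypothetical rule: flag `var x = x` or `var x = x + 1`
module.exports = {
  meta: {
    type: 'problem',
    messages: { selfInit: "'{{name}}' is read in its own initializer." }
  },
  create(context) {
    return {
      VariableDeclarator(node) {
        if (node.id.type !== 'Identifier' || !node.init) return
        const name = node.id.name
        // Pure shape matching: look only at the initializer's direct operands
        const operands = node.init.type === 'BinaryExpression'
          ? [node.init.left, node.init.right]
          : [node.init]
        if (operands.some(n => n.type === 'Identifier' && n.name === name)) {
          context.report({ node, messageId: 'selfInit', data: { name } })
        }
      }
    }
  }
}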

No Context Understanding

Traditional tools analyze code in isolation. They don’t know:

  • What a function is supposed to accomplish
  • What values a variable might hold at runtime
  • How different parts of the codebase interact
  • Common security vulnerabilities
  • Performance implications

Example: A linter sees query = "SELECT * FROM users WHERE id = " + userId as a valid string concatenation. It doesn’t realize that concatenating user input into SQL creates injection risk.

Language‑Specific Tooling

Each linter targets a single language (ESLint for JavaScript, Pylint for Python, RuboCop for Ruby, etc.). This leads to:

  • Separate tools per language
  • Divergent rule sets and configurations
  • Inconsistent results across a polyglot codebase
  • Higher maintenance burden

THE AI APPROACH

Large language models (LLMs) such as GPT‑4 offer a different strategy:

Context‑Aware Analysis

Instead of pattern matching, LLMs reason about code semantically. They can infer:

  • The intended behavior of a function
  • Potential runtime failures
  • Security implications
  • Performance characteristics
  • Language‑specific best practices

Example: GPT‑4 sees const users = response.json() and understands that response.json() is asynchronous, so without await the variable will be a Promise.

Multi‑Language Support

A single LLM can analyze code in any language it was trained on, eliminating the need for separate linters per language.

Explanatory Output

LLMs don’t just flag an issue; they explain:

  • Why something is a bug
  • What problems it can cause
  • How to fix it correctly
  • When the pattern is appropriate
  • Where to learn more

This makes the feedback educational rather than merely corrective.
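
As a concrete illustration, here is a minimal sketch of wiring a code snippet into an LLM for review. It assumes the official OpenAI Node SDK and an OPENAI_API_KEY in the environment; any chat-completion API works the same way.

import OpenAI from 'openai'

const client = new OpenAI()   // reads OPENAI_API_KEY from the environment

async function reviewCode(snippet) {
  const completion = await client.chat.completions.create({
    model: 'gpt-4',
    messages: [
      {
        role: 'system',
        content: 'You are a code reviewer. Report bugs, security issues, and performance problems, and explain how to fix each one.'
      },
      { role: 'user', content: snippet }
    ]
  })
  // The model's review comes back as plain text
  return completion.choices[0].message.content
}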

REAL‑WORLD EXAMPLES

Example 1: Async/Await Bug

Input

async function fetchUserData(userId) {
  const response = await fetch(`/api/users/${userId}`)
  const user = response.json()
  return user.name
}

AI Analysis

  • Issue: Missing await before response.json().

  • What’s wrong: response.json() returns a Promise; user becomes a Promise instead of the parsed data.

  • Result: user.name reads .name off a Promise object, which is undefined, so the function silently resolves to undefined instead of the user’s name.

  • Fix:

    const user = await response.json()
  • Why: response.json() is asynchronous because the response body streams in over the network; it can’t be parsed until the full body has arrived, so the result must be awaited.

  • Best practice: Always await Promise‑returning methods such as .json(), .text(), .blob().

Example 2: Security Vulnerability (Python)

Input

def get_user_by_email(email):
    query = f"SELECT * FROM users WHERE email = '{email}'"
    return database.execute(query)

AI Analysis

  • Issue: SQL injection vulnerability.

  • What’s wrong: User‑supplied email is interpolated directly into the SQL string.

  • Attack scenario:

    email = "' OR '1'='1"
    # Query becomes: SELECT * FROM users WHERE email = '' OR '1'='1'

    Returns all users, bypassing authentication.

  • Worse attack:

    email = "'; DROP TABLE users; --"
    # Query becomes: SELECT * FROM users WHERE email = ''; DROP TABLE users; --

    Deletes the entire users table.

  • Why critical: SQL injection is an OWASP Top‑10 vulnerability, leading to data breaches, loss, or full system compromise.

  • Fix (parameterized query):

    def get_user_by_email(email):
        query = "SELECT * FROM users WHERE email = %s"
        return database.execute(query, (email,))
  • Why safe: The database treats the parameter as data, automatically escaping special characters.

  • Never use: f‑strings, % formatting, or string concatenation for building SQL queries. Prefer parameterized queries or an ORM (e.g., User.objects.filter(email=email)).

Example 3: Misleading Async Return (JavaScript)

Input

function getActiveUsers() {
  const users = fetch('/api/active')
    .then(res => res.json())
    .then(data => data.filter(u => u.active))
  return users
}

AI Analysis

  • Issue: The function returns a Promise but callers may treat the result as an array.

  • What’s wrong: Without await or proper handling, downstream code might attempt to iterate over users immediately, causing errors or unexpected behavior.

  • Fix: Either make the function async and await the fetch, or clearly document that it returns a Promise.

    async function getActiveUsers() {
      const res = await fetch('/api/active')
      const data = await res.json()
      return data.filter(u => u.active)
    }
  • Readability note: Making the function async and using await is clearer than chained .then() calls and makes error handling (try/catch) simpler.


By leveraging AI‑driven, context‑aware analysis, teams can catch the kinds of bugs that slip through traditional linters, type checkers, tests, and human reviews—ultimately delivering more reliable, secure, and performant software.
