The Invisible Jury

Published: December 3, 2025, 7:00 AM EST
5 min read
Source: Dev.to

Case Study: Derek Mobley

Derek Mobley thought he was losing his mind. A 40‑something African American IT professional with anxiety and depression, he applied to over 100 jobs in 2023, each time watching his carefully crafted applications disappear into digital black holes. No interviews. No callbacks. Just algorithmic silence. What Mobley didn’t know was that he wasn’t being rejected by human hiring managers—he was being systematically filtered out by Workday’s AI screening tools, invisible gatekeepers that had learned to perpetuate the very biases they were supposedly designed to eliminate.

Mobley’s story became a landmark case when he filed suit in February 2023 (later amended in 2024), taking the unprecedented step of suing Workday directly—not the companies using their software—arguing that the HR giant’s algorithms violated federal anti‑discrimination laws. In July 2024, U.S. District Judge Rita Lin delivered a ruling that sent shockwaves through Silicon Valley’s algorithmic economy: the case could proceed on the theory that Workday acts as an employment agent, making it directly liable for discrimination.

The implications were staggering. If algorithms are agents, then algorithm makers are employers. If algorithm makers are employers, then the entire AI industry suddenly faces the same anti‑discrimination laws that govern traditional hiring.

Algorithmic Adjudication

We are living through the greatest delegation of human judgment in history. An estimated 99% of Fortune 500 companies now use some form of automation in their hiring process. Banks deploy AI to approve or deny loans in milliseconds. Healthcare systems use machine learning to diagnose diseases and recommend treatments. Courts rely on algorithmic risk assessments to inform sentencing decisions. Platforms like Facebook, YouTube, and TikTok use AI to curate the information ecosystem that shapes public discourse.

This delegation isn’t happening by accident—it’s happening by design. AI systems can process vast amounts of data, identify subtle patterns, and make consistent decisions at superhuman speed. They don’t get tired, have bad days, or harbor conscious prejudices. In theory, they represent the ultimate democratization of decision‑making: cold, rational, and fair.

Scope of Algorithmic Decision‑Making

The reality is far more complex. These systems are trained on historical data that reflects centuries of human bias, coded by engineers who bring their own unconscious prejudices, and deployed in contexts their creators never anticipated. The result is what Cathy O’Neil describes in Weapons of Math Destruction: opaque, unaccountable models that automate discrimination at unprecedented scale.

  • Hiring: University of Washington research examined over 3 million combinations of résumés and job postings, finding that large language models favored white‑associated names 85% of the time and never favored Black‑male‑associated names over white‑male‑associated names.
  • Housing: SafeRent’s AI tenant‑screening system allegedly discriminated against applicants based on race and disability, penalizing those who used housing vouchers; the company agreed to a $2.3 million settlement in 2024.
  • Healthcare: AI diagnostic tools trained primarily on data from white patients can miss critical symptoms in people of color.
  • Criminal Justice: Risk assessment algorithms like COMPAS have been shown to falsely flag Black defendants as high‑risk at nearly twice the rate of white defendants.

When algorithms decide who gets a job, a home, medical treatment, or freedom, bias isn’t just a technical glitch—it’s a systematic denial of opportunity.
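
To make a finding like the COMPAS disparity concrete, the sketch below computes group‑wise false positive rates, the metric at the heart of that critique. Everything here is synthetic and illustrative; it is not the COMPAS model or its data, only a minimal example of how such an audit is framed.

```python
# Sketch of a group-wise false-positive-rate audit, the metric behind
# disparate-impact findings like those reported for COMPAS.
# All data here is synthetic and illustrative.
import numpy as np

rng = np.random.default_rng(1)
n = 10_000
group = rng.choice(["A", "B"], size=n)        # protected attribute
reoffended = rng.random(n) < 0.3              # ground-truth outcome
# Simulated risk tool that flags group B more aggressively
flag_rate = np.where(group == "B", 0.45, 0.25)
flagged_high_risk = rng.random(n) < flag_rate

for g in ["A", "B"]:
    # False positive: flagged high-risk but did not reoffend
    mask = (group == g) & ~reoffended
    fpr = flagged_high_risk[mask].mean()
    print(f"Group {g}: false positive rate = {fpr:.2%}")
```

A real audit would use actual outcomes and predictions and would have to choose among competing fairness metrics; this sketch only shows the basic comparison.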

Transparency and the Right to Explanation

The fundamental challenge with AI‑driven decisions isn’t just that they might be biased—it’s that we often have no way to know. Modern machine learning systems, particularly deep neural networks, are essentially black boxes. They take inputs, perform millions of calculations through hidden layers, and produce outputs. Even their creators can’t fully explain why they make specific decisions.

European Regulatory Framework

The European Union recognized this problem and embedded a “right to explanation” in both the General Data Protection Regulation (GDPR) and the AI Act, which entered into force in August 2024.

  • Article 22 of the GDPR gives individuals the right not to be subject to decisions “based solely on automated processing,” while the regulation’s transparency provisions (Articles 13–15) require that they be given “meaningful information about the logic involved.”
  • The AI Act requires “clear and meaningful explanations of the role of the AI system in the decision‑making procedure” for high‑risk AI systems that could adversely impact health, safety, or fundamental rights.

In 2024, a European Court of Justice ruling clarified that companies must provide “concise, transparent, intelligible, and easily accessible explanations” of their automated decision‑making processes. However, companies can still invoke trade‑secret protections to shield their algorithms, creating a fundamental tension between transparency and intellectual property.

Technical Challenges of Explainability

How do you explain a decision made by a system with 175 billion parameters? How do you make transparent a process that even its creators don’t fully understand?

Researchers have developed various approaches to explainable AI (XAI), from post‑hoc explanation methods like SHAP (SHapley Additive exPlanations) and LIME (Local Interpretable Model‑agnostic Explanations) to inherently interpretable models. Each approach involves trade‑offs:

  • Simpler, more explainable models may sacrifice 8‑12% in accuracy, according to recent research.
  • More sophisticated explanation methods can be computationally expensive and still provide only approximate insights into model behavior.
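
To make the post‑hoc approach concrete, here is a minimal sketch of what a SHAP explanation of a single automated decision can look like. The model, feature names, and data are illustrative assumptions rather than anything from a real screening system; the point is that Shapley‑value attributions give each input feature a signed contribution to one prediction.

```python
# Minimal sketch of a post-hoc SHAP explanation for a toy loan-scoring model.
# The dataset, feature names, and model here are illustrative assumptions.
import numpy as np
import shap
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
feature_names = ["income", "debt_ratio", "years_employed", "credit_history_len"]
X = rng.normal(size=(500, 4))
# Synthetic "approval score", driven mostly by income and debt ratio
y = X[:, 0] - 1.5 * X[:, 1] + 0.3 * X[:, 2] + rng.normal(scale=0.3, size=500)

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

# TreeExplainer computes Shapley-value attributions for tree ensembles
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:1])  # explain a single applicant

# Each attribution is the signed amount that feature pushed this applicant's
# score above or below the model's average prediction.
for name, contribution in zip(feature_names, shap_values[0]):
    print(f"{name}: {contribution:+.3f}")
```

TreeExplainer is used here because Shapley values can be computed efficiently and exactly for tree ensembles; for arbitrary black‑box models, model‑agnostic alternatives such as LIME or SHAP’s KernelExplainer trade that exactness for generality, which is one source of the “approximate insights” noted above.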

Even when explanations are available, they may not be meaningful to the people affected by algorithmic decisions. Telling a loan applicant that their application was denied because “the model detected a subtle pattern in the data” does little to provide accountability or enable meaningful recourse.

Moving Forward

Addressing algorithmic bias and opacity requires coordinated action across law, policy, engineering, and civil society:

  1. Regulatory enforcement of existing rights to explanation and non‑discrimination.
  2. Standardized auditing frameworks that can be applied across industries.
  3. Investment in inherently interpretable models for high‑stakes domains.
  4. Public awareness of the pervasiveness of algorithmic decision‑making and the avenues for recourse.

Only by treating AI systems as accountable actors—subject to the same legal and ethical standards as human decision‑makers—can we ensure that the invisible jury does not become an instrument of systemic oppression.
