AWS re:Invent 2025 - Fixing AI’s Confidently Wrong Problem in the Enterprise (AIM269)

Published: December 5, 2025 at 10:43 PM EST
2 min read
Source: Dev.to

Overview

The speaker discusses AI’s critical flaw of being “confidently wrong,” which erodes business users’ trust in AI systems. PromptQL’s approach distinguishes what the AI knows (blue links to wiki entries) from what it doesn’t (red links marking assumptions). When the AI encounters an unknown concept—e.g., “FY”—it makes its assumption explicit and invites experts to clarify, creating a learning loop that captures tribal knowledge. A collaborative wiki, updated by both the AI and humans, enables continuous improvement. This transparency lets users trust the AI even at 50% accuracy, because the system admits uncertainty.
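To make the blue/red distinction concrete, here is a minimal sketch of how such an annotated answer might be rendered. The markup scheme and function name are hypothetical; the talk describes the behavior, not PromptQL’s actual format.

```python
# Hypothetical sketch: rendering an answer that separates known concepts
# (blue links to wiki entries) from assumptions (red links). The markup
# and names below are invented for illustration, not PromptQL's API.

KNOWN = {"ARR": "wiki/arr"}  # concepts already defined in the shared wiki

def annotate(term: str) -> str:
    """Blue link if the wiki defines the term, red link if the AI is assuming."""
    if term in KNOWN:
        return f"[{term}]({KNOWN[term]})"            # blue: grounded in the wiki
    return f"[{term}](wiki/{term.lower()}?assumed)"  # red: flagged as a guess

# "FY" is unknown here, so the answer surfaces it as an explicit assumption.
print(f"{annotate('ARR')} grew 12% in {annotate('FY')} 2024.")
```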

Traditional text‑to‑SQL products often fail because they require technical verification. PromptQL’s system scales to 100,000 tables and 10,000 metrics by establishing the trust‑and‑learn loop first.

Main Part

The Core Problem: AI’s Confident Wrongness and the Trust Gap

The speaker highlights a widespread issue: AI often provides answers with high confidence even when it’s wrong. This “confidently wrong” behavior prevents adoption in high‑impact business scenarios. While many have experimented with connecting LLMs to databases, the lack of a mechanism for the model to indicate uncertainty makes the output unreliable for decision‑making.

Key points

  • AI’s over‑confidence turns promising pilots into mistrusted tools.
  • Users need the system to say “I don’t know” so they can intervene and teach the model.
  • Trust is essential for scaling AI across large data estates (e.g., 100,000 tables, 10,000 metrics).

The speaker references the MIT report finding that 95% of AI pilots fail and stresses that the problem isn’t just model quality—it’s the missing feedback loop that allows humans to correct AI.

“You can only teach something when it says it doesn’t know.”

PromptQL’s Solution: Teaching AI to Admit What It Doesn’t Know

PromptQL builds a “chat‑with‑your‑data” product that explicitly distinguishes known facts (blue wiki links) from assumptions (red links). When the AI encounters an unknown term, it:

  1. Marks the term as uncertain and presents it as a red link.
  2. Invites subject‑matter experts to provide the correct information.
  3. Updates a collaborative wiki that both the AI and humans can edit.

This loop creates a continuously improving knowledge base, allowing the AI to become more reliable over time while maintaining transparency for end users.
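A minimal sketch of that loop might look like the following. The class and method names are assumptions made for illustration, since the talk describes the behavior rather than a concrete API.

```python
# Hypothetical sketch of the wiki-backed learning loop (names invented for
# illustration; the talk describes the behavior, not an API).
from dataclasses import dataclass, field

@dataclass
class CollaborativeWiki:
    entries: dict[str, str] = field(default_factory=dict)
    pending: set[str] = field(default_factory=set)  # red-link terms awaiting an expert

    def resolve(self, term: str, assumed: str) -> str:
        """Return the known definition, or record an explicit assumption."""
        if term in self.entries:
            return self.entries[term]
        self.pending.add(term)          # step 1: mark the term as uncertain
        return f"(assumed: {assumed})"  # step 2: surface it for expert review

    def teach(self, term: str, definition: str) -> None:
        """Step 3: an expert (or the AI, once confirmed) updates the wiki."""
        self.entries[term] = definition
        self.pending.discard(term)

wiki = CollaborativeWiki()
print(wiki.resolve("FY", "fiscal year, Jan-Dec"))        # red link: the AI is guessing
wiki.teach("FY", "Fiscal year, Feb-Jan at this company")  # expert fills the gap
print(wiki.resolve("FY", ""))                             # now grounded in the wiki
```

The key design point the sketch tries to capture is that “I don’t know” is a first‑class state: unknown terms are recorded and surfaced rather than silently guessed, which is what makes the expert feedback loop possible.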

The content above reflects the original presentation material and may contain minor transcription errors.
