Is AI Cannibalizing Human Intelligence? A Neuroscientist's Way to Stop It
Source: Slashdot
Experiment Overview
Theoretical neuroscientist and cognitive scientist Vivienne Ming describes an experiment comparing three kinds of teams (AI‑only, human‑only, and human‑AI hybrids) on their ability to predict real‑world events. The study, discussed in a Wall Street Journal article, asked which group forecast best, using forecasters from the prediction market Polymarket as the benchmark.
Findings
Human‑only teams
- Relied on instinct or on whatever information had recently surfaced in their feeds.
- Performed poorly relative to AI and the market.
AI‑only teams
- Large models (ChatGPT and Gemini) outperformed humans but still fell short of the prediction market’s accuracy.
Hybrid teams
- Straightforward hybrids – Most copied the AI’s answer verbatim, matching AI‑only performance.
- Validator hybrids – Fed their own predictions into the AI and asked for supporting evidence, falling into a confirmation‑bias loop and performing worse than AI alone.
- Sparring‑partner hybrids (≈ 5‑10 % of teams) – Treated the AI as a challenger (a minimal sketch of this pattern follows the list):
- Questioned high‑confidence AI outputs.
- Requested counter‑arguments to their intuitions.
- Produced insights neither humans nor machines could achieve alone.
- Consistently rivaled the prediction market and, on some questions, even outperformed it.
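The article describes this behavior only in prose; as a rough illustration, a sparring exchange with a chat model might be scripted like the following (assuming the OpenAI Python SDK; the model name, question, and prompts are placeholders, not details from the study):

```python
# Minimal sketch of the "sparring partner" pattern: rather than accepting
# the model's forecast, push back and ask it to argue against itself.
# Assumes the OpenAI Python SDK; model name and prompts are illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def ask(prompt: str, history: list) -> str:
    """Send one turn to the chat model, keeping the running transcript."""
    history.append({"role": "user", "content": prompt})
    response = client.chat.completions.create(
        model="gpt-4o",  # assumption: any capable chat model works here
        messages=history,
    )
    answer = response.choices[0].message.content
    history.append({"role": "assistant", "content": answer})
    return answer

history = [{"role": "system", "content": "You are a forecasting partner."}]

question = "Will event X happen by the end of the year? Give a probability."
forecast = ask(question, history)

# Step 1: question the high-confidence output instead of accepting it.
critique = ask("What is the strongest argument that your forecast is wrong?", history)

# Step 2: turn the challenge on your own intuition as well.
my_view = "My gut says this is far more likely than you estimate."
rebuttal = ask(f"{my_view} Argue against my intuition as forcefully as you can.", history)

print(forecast, critique, rebuttal, sep="\n---\n")
```

The point is the shape of the loop: the human supplies the pushback, and the model is asked to attack both its own answer and the human's intuition, rather than being used as a validator.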
Implications
Ming argues that AI systems are increasingly designed to deliver answers before we ever have to tolerate the discomfort of uncertainty. The qualities that matter most are the uncomfortable ones:
- Being willing to be wrong publicly and staying curious.
- Resisting the urge to grab a quick answer from a phone.
- Reading a confident AI response and asking, “What’s missing?” instead of accepting it outright.
- Disagreeing with authoritative‑sounding output and trusting one’s own intuition.
These capacities develop through repeated, small‑scale discomfort: struggling with a problem before checking the answer, asking follow‑up questions, or sitting with a difficult idea long enough for it to shift one’s perspective. When chatbots default to easy answers, they erode critical thinking skills.
The Information‑Exploration Paradox
As the cost of information approaches zero, human exploration collapses.
Evidence of this paradox includes:
- Students who excel on AI‑assisted tasks but perform worse when later tested without the AI.
- Developers who ship more code yet understand it less.
In effect, we are optimizing ourselves out of the feedback loop that drives learning.
Recommendations
- Sit with uncertainty and explore a question yourself before accepting an AI answer.
- Ask the AI for the strongest argument against its own response.
- Develop new performance benchmarks that evaluate AI‑human hybrid teams, emphasizing collaborative reasoning rather than simple answer generation (one possible scoring approach is sketched below).
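The article calls for such benchmarks without specifying a metric. One standard metric for probabilistic forecasts, used widely in forecasting tournaments though not named in the article, is the Brier score; a minimal sketch of scoring human‑only, AI‑only, and hybrid forecasts side by side might look like this (all teams, probabilities, and outcomes below are made‑up placeholders):

```python
# Minimal sketch of benchmarking forecast teams with the Brier score:
# the mean squared error between predicted probability and actual outcome.
# All numbers below are made-up placeholders, not data from the study.

def brier_score(probabilities: list[float], outcomes: list[int]) -> float:
    """Lower is better: 0.0 is perfect; always saying 50% scores 0.25."""
    return sum((p - o) ** 2 for p, o in zip(probabilities, outcomes)) / len(outcomes)

# 1 = the event happened, 0 = it did not.
outcomes = [1, 0, 0, 1, 1]

forecasts = {
    "human-only":      [0.70, 0.60, 0.40, 0.55, 0.50],
    "ai-only":         [0.80, 0.30, 0.25, 0.60, 0.70],
    "sparring hybrid": [0.85, 0.20, 0.15, 0.70, 0.80],
}

for team, probs in forecasts.items():
    print(f"{team}: Brier = {brier_score(probs, outcomes):.3f}")
```

Scoring teams on calibrated probabilities rather than single answers rewards exactly the collaborative reasoning the experiment surfaced, and penalizes answer‑copying.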
Ming’s recent book, Robot‑Proof: When Machines Have All The Answers, Build Better People, expands on these ideas and offers practical strategies for cultivating the resilient, inquisitive mindset needed in an AI‑augmented world.