Can You Take Me Into Your Brain?

Published: February 23, 2026 at 05:18 PM EST
9 min read
Source: Dev.to

We built AI to understand intelligence. But is it actually teaching us about ourselves — or are we just projecting?

Someone once asked Gu Ailing — the freestyle skier who competes as both Gu Ailing and Eileen Gu, who holds two passports, two languages, two identities forged into one extraordinary athlete — “can you take me into your brain?”

She laughed. Then she gave one of the most precise answers I’ve heard from any athlete, or any person:

“I try to understand my brain.”

That’s metacognition: the ability to think about your own thinking, to observe yourself learning, adjusting, failing, recovering, and to use that observation to get better. It’s not a casual answer. It’s a practice.

I’ve been thinking about that question ever since. Not about her specifically, but about what it means to even ask it. We want to understand how exceptional minds work, how decisions get made, how someone becomes themselves.


The uncomfortable truth

We don’t know. Not really. Not for her, not for you, not for me.

We don’t fully understand biases, emotions, or why you bought that thing you absolutely did not need — but it was so cute. We can’t fully explain why some feedback lodges in us for decades while a compliment evaporates in hours. We don’t know why dogs make everything better. (That last one might just be me.)

And yet — right now, in labs, startups, and university basements around the world — people are trying to build something that does understand these things, or at least something that can simulate understanding well enough to fool us.

The question this article investigates:

In that attempt, are we accidentally learning something real about ourselves? Or are we just building a very expensive mirror that shows us what we want to see?


Why DeepMind matters

Demis Hassabis didn’t found DeepMind to build a better search engine. He was explicit in early interviews: the goal was to understand intelligence itself, not to replicate human behavior, but to understand the process by which minds arrive at answers.

That’s a strikingly different ambition. It’s not “build something smart.” It’s “figure out what smart actually is.”

To do that, you have to make choices: pick a theory of how learning works, encode it mathematically, and see if it produces something that behaves intelligently.

  • If it doesn’t, you revise the theory.
  • If it does — partially, imperfectly — you ask why.

This is, in structure, exactly what science does. And what makes it interesting for our purposes is this: the theories they chose to test were largely borrowed from what we already believed about human cognition.

So the question becomes: when those theories work — when the AI actually learns — does that tell us something true about how we learn too?


The dominant AI approach

The modern AI playbook is simple in principle:

  1. Expose a system to vast amounts of data.
  2. Reward it when it gets things right, penalize it when it doesn’t.
  3. Repeat.

Over time, the system adjusts, gets better, and generalizes.

This is reinforcement learning at its broadest. If you squint — or actually, if you look at it directly — it describes human childhood with uncomfortable accuracy.

  • We are exposed to data: the things we see, hear, experience before we have language for any of it.
  • We receive feedback: praise, criticism, silence, warmth.
  • Our systems adjust.
  • We generalize — often in ways we don’t realize until decades later, when we notice we flinch at a particular tone of voice, can’t accept a compliment, or consistently choose partners who feel familiar in ways that are not good for us.
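The three-step playbook above can be sketched as a toy reward-feedback loop. This is a minimal, illustrative multi-armed-bandit agent, not any particular lab's system: it tries actions, receives noisy rewards, and nudges its value estimates toward the feedback it gets.

```python
import random

def train_bandit(true_rewards, steps=5000, epsilon=0.1, seed=0):
    """Toy reinforcement-learning loop: estimate each action's value
    from reward feedback and gradually prefer the better actions."""
    rng = random.Random(seed)
    estimates = [0.0] * len(true_rewards)  # what the agent currently believes
    counts = [0] * len(true_rewards)
    for _ in range(steps):
        # 1. Expose the system: pick an action (mostly the best-known one,
        #    occasionally a random one, to keep exploring).
        if rng.random() < epsilon:
            action = rng.randrange(len(true_rewards))
        else:
            action = max(range(len(true_rewards)), key=lambda a: estimates[a])
        # 2. Reward or penalize: noisy feedback around the action's true value.
        reward = true_rewards[action] + rng.gauss(0, 0.1)
        # 3. Repeat: nudge the estimate toward the observed reward.
        counts[action] += 1
        estimates[action] += (reward - estimates[action]) / counts[action]
    return estimates

est = train_bandit([0.2, 0.8])  # action 1 is genuinely better
```

After a few thousand iterations of feedback, the agent's estimates settle near the true values and it reliably prefers the better action — no single dramatic lesson, just accumulated adjustment.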

A thought experiment

Ask an AI: “What am I?”

If the only data it has about you is someone telling you that your Lego obsession is a waste of time, that you think too differently, that you don’t fit — it will reflect that back as truth, undermining what you actually are.

Feed it ten years of your work, ideas, output — and it updates.

But feed it ten years of that dismissal alongside evidence of what you’re capable of, and the model gets confused. It has conflicting training signals and overfits to the louder, more repeated data.

Irony: the very kind of thinking that gets dismissed — lateral, pattern‑obsessed, building‑things‑to‑understand‑them thinking — is precisely what’s driving the AI revolution right now. Except the model doesn’t know that. It only knows what it was fed. And for a long time, we fed it a very narrow definition of what counts as intelligence.

In machine learning, we call that a corrupted training set.

In humans, we call it something else. But the mechanism — the mathematical mechanism — may be the same.
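The “louder, more repeated data wins” dynamic can be made almost embarrassingly concrete. This deliberately crude sketch (the labels are hypothetical, chosen to echo the thought experiment above) shows a trivial model that predicts whatever signal it heard most often:

```python
from collections import Counter

def majority_model(training_signals):
    """A trivially simple 'model': predict whichever label
    appeared most often in the training data."""
    return Counter(training_signals).most_common(1)[0][0]

# Ten repetitions of dismissal, three of evidence to the contrary.
signals = ["waste of time"] * 10 + ["capable"] * 3
prediction = majority_model(signals)
```

The three contrary signals never stood a chance; the repeated data dominates the output. Real models are vastly more sophisticated, but frequency in the training set still shapes what they treat as true.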


Gradient descent: the engine of modern AI

Gradient descent measures how wrong the model is (the loss) and nudges every parameter in the direction that reduces that wrongness. Not a leap. A nudge. Repeat millions of times.

What you get is a model that is incrementally, persistently, directionally improving — not because it had a breakthrough, but because it kept taking small steps toward less error.

There’s no dramatic moment, no single piece of feedback that rewires everything. Just the accumulation of small adjustments, over time, in a consistent direction.
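The nudge-and-repeat loop fits in a few lines. This is a minimal one-parameter sketch, not production training code: the loss is a toy quadratic, and each step moves the parameter a small amount against the gradient.

```python
def descend(loss_grad, w=0.0, lr=0.1, steps=200):
    """Gradient descent on a single parameter: repeatedly nudge w
    in the direction that reduces the loss, one small step at a time."""
    for _ in range(steps):
        w -= lr * loss_grad(w)  # not a leap -- a nudge
    return w

# Toy loss L(w) = (w - 3)^2, whose gradient is 2 * (w - 3).
# The minimum (zero error) is at w = 3.
w = descend(lambda w: 2 * (w - 3))
```

No single step matters much; after a couple hundred of them, `w` sits at the minimum. The same shape — small, directional, repeated — is what trains networks with billions of parameters.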

If you’ve ever been in therapy, tried to change a deeply held belief, or trained for something that required both physical and psychological conditioning — you may recognize this. The change doesn’t feel like change while it’s happening. It feels like nothing. Until one day it doesn’t.


The uncomfortable part

AI systems trained on human data don’t just learn human intelligence. They learn human bias. They absorb racism, sexism, cultural assumptions, historical injustices — because all of that is in the data.

When researchers discovered that certain hiring algorithms were discriminating against women, it wasn’t because someone programmed discrimination; it was because the training data encoded it.



AI, Bias, and Human Cognition

It was because the model learned from historical hiring decisions made by humans who discriminated.
The AI was a perfect student. It learned exactly what it was taught.

This is the part that should stop us. Because if AI is a mirror of human cognition — it is also a mirror of human failure. Of the beliefs we didn’t know we held. Of the patterns we normalized.

It is, in that sense, the most honest thing we’ve ever built. And honesty, it turns out, is uncomfortable.

Here’s what a skeptic would say — and they wouldn’t be wrong to say it.

The parallels above are suggestive. They’re not proof. The fact that reinforcement learning resembles human conditioning doesn’t mean they’re the same thing. Analogy is not mechanism. A river and a highway both get you from A to B; that doesn’t mean they work the same way.

Human cognition involves embodiment, emotion, consciousness — things we don’t remotely understand and haven’t come close to replicating in machines. When an AI “learns,” there’s no felt experience of learning. No confusion before clarity. No 3 am moment of realization. Just matrix multiplication, at scale, very fast.

There’s also a more cynical read: maybe we keep finding human metaphors for AI because we are human, and human metaphors are all we have. We called early computers “electronic brains.” We said they “remembered” and “forgot.” We personified them because that’s what we do — and now we’re doing it again, more elaborately, with more mathematical justification.

Maybe the lesson AI is teaching us about ourselves is just this: we really, really want to understand ourselves, and we’ll use whatever tools are available to tell that story.

That’s not nothing. But it’s different from saying AI is actually revealing the architecture of human thought.


Where I Land

This is where I land — not as a conclusion, but as an honest position: the parallels are too specific to dismiss entirely.

  • The corrupted training set as a model for trauma.
  • Gradient descent as a model for slow psychological change.
  • Bias as a mirror of collective human assumption.

These aren’t vague metaphors. They’re mathematical structures that someone chose to encode because they believed they reflected something true about learning — and then they worked.

That’s not proof. But it’s a data point.

What I think is actually happening is something more interesting than either “AI explains the human mind” or “we’re just projecting.” I think we are in the early stages of a dialogue.

  1. We build AI based on theories of human cognition.
  2. The AI behaves in ways that surprise us.
  3. We look at what surprised us and revise our understanding of cognition.
  4. We build better AI.

Repeat.

It’s a feedback loop. And feedback loops, as we’ve established, are how learning works.


A Provocative Question

Would you trust AI with your kid?

I ask that not as provocation but as a genuine diagnostic. Because we already did — informally, gradually, without deciding to. We handed algorithms our children’s attention, their searches, their feeds, their sense of what bodies look like and what success means and who gets to be the hero of the story.

We didn’t ask if that was wise. We just did it, the way we handed them Google before that, and television before that.

The difference now is that the systems are more capable, more personalized, and — if the parallels above hold — potentially more formative than anything that came before.

“I try to understand my brain.” – Gu Ailing

Metacognition. Thinking about thinking. It turns out that’s also what building AI forces us to do — whether we mean to or not.

I think that’s the right response to all of this: the laugh that acknowledges how strange and impossible the question is, and then the genuine attempt to answer anyway — not because we’ll succeed, but because the attempt is where the learning is.

We built AI to understand intelligence.

Whether it’s teaching us about ourselves — that part is still being written.


This article is not bullet‑proof. Shoot.

Written with Claude. Investigated by a human brain. Both works in progress.

Find me in public: LinkedIn
