The Master Algorithm

Published: April 7, 2026 at 01:03 PM EDT
7 min read
Source: Dev.to

The Master Algorithm – 2015 → 2025

In 2015 a book by AI researcher Pedro Domingos was published:

The Master Algorithm: How the Quest for the Ultimate Learning Machine Will Remake Our World

Domingos described the “five tribes” of artificial intelligence (AI) and speculated which might eventually become the Master Algorithm—the algorithm that could learn to do virtually anything humans and other animals can learn.

  • Inductive reasoning – Symbolic, rule‑based learning
  • Connectionism (aka neural networks) – Distributed, weight‑based learning
  • Evolutionary computation – Population‑based search & optimization
  • Bayesian networks – Probabilistic graphical models
  • Analogical modelling – Case‑based reasoning and analogy

At the time it was far from obvious which (if any) of these would turn out to be the universal learner.


The Master Algorithm is neural networks

It turns out the master algorithm is connectionism.
A neural network that is big enough, structured appropriately, and trained on enough data can do it all:

  • language understanding & generation
  • reasoning & problem solving
  • translation
  • programming & code generation
  • answering questions & following instructions
  • image & video perception and generation
  • complex mathematics, and much more

Critics often pick a single shortcoming and claim, “it doesn’t do this well, so it can’t be the master algorithm.”
But anyone who read Domingos’s book can see that everything we now take for granted was pure science‑fiction in 2015. Researchers were dreaming of machines that could perform half of what ChatGPT, Claude, or Gemma (running locally on a phone) can do today.

The other four tribes still have niche uses, but none have scaled to a general‑purpose learner. In fact, most of them break down when you try to simply make them larger.

This was not obvious.


My personal connectionist journey

I’ve considered myself a connectionist since middle school, when I built my first neural‑network science‑fair project.

  • Junior year: I entered the International Science & Engineering Fair with a novel algorithm for recursive neural networks, won the U.S. Army’s top prize, and earned a two‑week trip to Japan (I still practice Japanese daily).

Even with that background, I did not expect neural networks alone to become the master algorithm. Most people thought you’d need a hybrid of neural nets plus symbolic (inductive) reasoning.

Our brains are made of neurons, so in principle a neural network could emulate all cognition. Yet I assumed evolution gave us highly specialized circuits for language, logic, and reasoning—circuits that a generic network could not reproduce.

Reality check: a sufficiently large, well‑trained network does acquire those capabilities.


The turning point

In What Is Intelligence? (2025), Google AI researcher Blaise Agüera y Arcas describes the moment he saw the pivotal shift:

“We were training a neural network to predict the next word for a better autocomplete. The model was very large and trained on massive text corpora. Suddenly it started talking to us. We could ask it questions it had never seen in its training data—like translating a made‑up sentence—and it answered correctly.”

That moment revealed what many still deny: a deep language model is more than a parrot; it exhibits genuine understanding.

A raw language model without reinforcement learning is erratic, switching personas mid‑conversation and producing incoherent dialogue. Behavioral fine‑tuning (e.g., RLHF) stabilizes it, improves reasoning, and yields the AI coworkers we use daily.


Prediction, all the way down

How does a neural network act as the master algorithm?

Intelligence, at its core, is prediction:

  • Cortical columns predict their next inputs.
  • The visual system predicts the next visual scene.
  • The language system predicts the next word we will hear or say.

Large language models (LLMs) like Claude literally predict the next token given the preceding context. That single operation, when scaled, yields:

  • Perception (vision, audio)
  • Planning & reasoning (by chaining predictions)
  • Action (generating code, controlling robots)

Thus, prediction alone is sufficient for general intelligence.
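The "chaining predictions" idea can be sketched with a deliberately tiny toy. This is not a real LLM, just an illustrative bigram lookup table standing in for the network, but the outer loop (predict the next token, append it, repeat) is the same one an LLM runs:

```python
# Toy stand-in for a trained predictor: maps the current token to the
# most likely next token. A real LLM conditions on the full context
# with a neural network; here a dict suffices to show the loop.
bigram = {
    "the": "cat",
    "cat": "sat",
    "sat": "on",
    "on": "the",
}

def generate(start, steps):
    """Chain next-token predictions into a sequence."""
    tokens = [start]
    for _ in range(steps):
        nxt = bigram.get(tokens[-1])
        if nxt is None:  # no prediction available: stop generating
            break
        tokens.append(nxt)
    return tokens

print(generate("the", 4))  # ['the', 'cat', 'sat', 'on', 'the']
```

Planning and reasoning in LLMs emerge from exactly this loop: each emitted token becomes part of the context that conditions the next prediction.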


TL;DR

  • In 2015 Domingos identified five AI “tribes.”
  • By 2025 the connectionist tribe—neural networks—has proven to be the Master Algorithm.
  • Scaling up size, data, and training methods turns a simple predictor into a universal learner capable of language, reasoning, perception, and creation.

The master algorithm is here, and it’s a neural network.

A string of tokens working through a problem is not fundamentally different from the string of thoughts (mostly in the form of words) that you and I would generate while solving a similar problem.

Reinforcement learning biases the network so that these strings of predictions tend to go in useful directions—directions that were successful during training. In humans (unlike current LLMs), training is an ongoing process, so we constantly tweak our own neural networks to make these trains of predictions more successful. But fundamentally, moment‑to‑moment, it’s still just predicting the next thing at every level of the network.
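As a hypothetical sketch of "biasing predictions toward useful directions": continuations that earned reward during training get their probabilities nudged upward, then the distribution is renormalized. The tokens, rewards, and update rule below are all illustrative, not a real RLHF pipeline:

```python
def rl_bias(probs, rewards, lr=0.5):
    """Upweight continuations in proportion to observed reward,
    then renormalize so the result is still a distribution."""
    biased = {tok: p * (1 + lr * rewards.get(tok, 0.0))
              for tok, p in probs.items()}
    total = sum(biased.values())
    return {tok: p / total for tok, p in biased.items()}

# Illustrative names: a "good" continuation that was rewarded in
# training, and a "bad" one that was penalized.
probs = {"helpful answer": 0.3, "rambling tangent": 0.7}
rewards = {"helpful answer": 1.0, "rambling tangent": -1.0}
print(rl_bias(probs, rewards))  # helpful answer now more probable
```

The model is still just predicting the next thing; reward has only reshaped which predictions it favors.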


AI‑Complete Problems

We now understand that some problems are what Agüera y Arcas calls AI‑complete. Solving an AI‑complete problem requires actually understanding the world the problem represents; no shallow representation of surface features suffices. Examples include:

  • Next‑word prediction – impossible without understanding the meaning of the words.
  • Language translation – cannot be done by merely looking up dictionary entries.
  • Video next‑frame prediction – requires knowledge of object behavior, physics, etc.
  • Picture or video captioning – needs visual understanding and linguistic expression.

A sufficiently large neural network, given sufficient training data, will eventually solve almost any problem you give it. So if you give it an AI‑complete problem—one that can only be solved by understanding the world—it will (eventually) understand the world.

Is its understanding perfect?
Of course not. But then, that’s true of humans as well.


So That’s It for AI Research…

Haha, just kidding! LLMs are general intelligence and already smarter than humans in some ways.

  • Limitations – “Training” and “inference” are completely separate processes. The LLMs we use every day do not learn from experience, except insofar as that experience can be crammed into the context window.

  • Continual learning – Adding continual learning will allow models to improve at tasks with practice, just like we do. However, many open questions remain:

    1. How do we avoid forgetting too much old knowledge while acquiring new knowledge?
    2. How do we keep the AI’s personality stable over time?
    3. How can we learn new stuff quickly enough to matter?
  • Neural networks vs. classical AI – There really are things neural networks are terrible at, and—ironically—those are exactly the things most AI research has focused on for the last 80 years: game playing, deep search, complex logical reasoning, optimization, etc. Classical (GOFAI or “Good Old‑Fashioned AI”) algorithms often perform these tasks better and faster. Future progress will need to integrate both approaches to get the best of both worlds.
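The training/inference split in the first bullet can be made concrete with a minimal sketch (a toy one-parameter model, not any real framework): only the training step touches the weight, while inference reads it without updating, so anything new must live in the context:

```python
class ToyModel:
    """One-parameter model: predicts w * x. Illustrative only."""

    def __init__(self):
        self.w = 0.0  # the single learned parameter

    def train_step(self, x, target, lr=0.1):
        # Gradient step on squared error (pred - target)**2.
        pred = self.w * x
        self.w -= lr * 2 * (pred - target) * x

    def infer(self, context):
        # No weight update here: new information affects the output
        # only through `context`, never through the weights.
        return self.w * len(context)

m = ToyModel()
for _ in range(100):
    m.train_step(2.0, 4.0)      # training drives w toward 2
w_before = m.w
m.infer(["new", "fact", "in", "context"])   # inference...
assert m.w == w_before                      # ...never changes the weights
```

Continual learning, in these terms, means letting something like `train_step` keep running during deployment, which is exactly where the open questions above (forgetting, stability, speed) arise.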


Outlook

This is just a random sampling; there are many, many open research directions. We’ve cracked the code of intelligence—we now know what intelligence is and how to produce it in a machine. But that’s only the tip of the iceberg. It is the beginning of wisdom, not the end.

The next 5–10 years are going to be very exciting. Hold on tight!
