I'm Getting a Whiff of Iain Banks' Culture
Source: Hacker News
Fighting a Powerful AI – What It Feels Like
The US has been acting powerful recently and it reminded me of this question: What does it feel like to fight against a powerful AI?
Not for normal people, for whom there’s no difference between competing against a strong human or a strong AI (you lose hard either way), but for the world’s best humans.
We got a sense of the answer before LLMs were a thing, when the frontier research labs were working on game RL:
Fighting against a powerful AI feels like you’re weirdly under‑powered somehow.
Everything the AI does just works slightly better than it should.
If you’re not a strong human player, the closest feeling is when you play a game with lots of randomness against a really strong player. It will appear as if that strong player just keeps on getting lucky somehow.
Real‑world parallels
I’m getting a similar sense for the recent US foreign interventions and wars. They all seem to work slightly better than they should. It finally clicked for me when Dario Amodei said:
“This technology can radically accelerate what our military can do. I’ve talked to admirals, I’ve talked to generals, I’ve talked to combatant commanders who say this has revolutionized what we can do.”
— YouTube, 18:53
The things I’m referring to are:
- The raid that captured Maduro in Venezuela – Claude was used – Reuters, 2026‑02‑13
- The current war with Iran – Claude was used – The Guardian, 2026‑03‑01
- The killing of a drug boss in Mexico – unclear if AI was used but US intelligence helped Mexico – CNN, 2026‑02‑23
Lessons from AI vs. Human games
Go – AlphaGo vs. Lee Sedol
The commentators didn’t know what to make of most games. The AI wasn’t doing anything obviously brilliant; there were lots of little fights all over the board where the outcome wasn’t quite clear, but they all worked a little better for AlphaGo than expected.
Gradually Lee Sedol’s perception shifted:
- “This is tough, hard to tell how this is going but at least I’m feeling good about these areas.”
- “Hmm, I’m struggling, maybe I’m a bit behind but it’s not clear.”
- Suddenly, “Oh, I lost.”
StarCraft II
In some skirmishes the AI would take damage; in others the human would. Yet it always felt like the human was in more trouble.
- Even when the human clearly came out ahead, the AI would recover within a minute and gain a clear advantage.
- The AI could quickly recover and constantly put pressure on the human.
- Human successes seemed to work a little less well than expected, while the AI’s actions worked a little better.
A sci‑fi analogy
In Iain Banks’ Culture series, an ostensibly human civilization is actually run entirely by AIs. Alien civilizations keep picking fights, only to be surprised by how hard the seemingly harmless Culture can “whoop your ass” when provoked.
I used to think the Culture was closest to the European Union: seemingly harmless, yet in principle capable of rapidly standing up the strongest army in the world. Of course, the real EU has nothing like the Culture's AI‑level potential, but the analogy helps convey the feeling of facing an over‑powered, rapidly coordinated response.
The US as a “Culture‑like” power
- Kidnapping a foreign leader (Maduro) and getting away with it feels like a Culture‑level overpowered move.
- Bombing cities across Iran, knocking out the entire leadership within two days while Chinese‑supplied air‑defenses do nothing, also feels like a high‑level video‑game strategy that “shouldn’t work that well” in reality.
It would be foolish to attribute this entirely to AI. The US has long enjoyed a high‑tech advantage (e.g., the F‑35). Yet a few years ago the US regularly messed up when trying to operate at high precision (Iraq, Afghanistan). The recent shift to “everything works better than it should” points strongly toward AI assistance.
How does everything start working slightly better than it should?
We saw two different approaches in Go and StarCraft II:
| Game | AI's Advantage |
|---|---|
| Go | Numerous tiny fights across the board that compound into a few extra points at the end. Balanced defense and attack, keeping the overall picture in its head without feeling pressure to resolve things early. |
| StarCraft II | Perfect micro‑management when it counts (e.g., pulling wounded stalkers out of danger just in time). Humans can theoretically do the same, but in practice you can't quickly click perfectly like that. |
These patterns suggest that a powerful AI can:
- Maintain a global view of the entire problem space, allocating resources where they yield the highest marginal gain.
- Execute precise, low‑latency actions (micro‑optimizations) that humans cannot reliably reproduce under stress.
- Continuously adapt to small setbacks, turning minor advantages into decisive outcomes.
When such capabilities are embedded in real‑world decision‑making—whether in military planning, intelligence analysis, or covert operations—the result is a cascade of incremental improvements that collectively make the whole operation feel "slightly better than it should be."
Two Angles
- Having a better high‑level view
- Having better micro‑control
Over‑preparedness of the Culture
Another source of the Culture's success is that they're over‑prepared for fighting (not in their first big war, but in the later books). The same pattern shows up in the Iran war.
Normally there’s just too much going on in the world to keep track of everything. Famously, the US had prior intelligence on 9/11 but didn’t piece it together. (There’s a whole Wikipedia article with phrases like “Rice listened but was unconvinced, having other priorities on which to focus.”)
AI, however, has almost no limits on what it can monitor. You can always spin up another agent, so when something important arises, an AI can be watching it and raise an alert. You’ll never miss opportunities simply because you had other priorities.
Third angle: Being over‑prepared because you can follow up on many more things at once.
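The "spin up another agent" idea can be made concrete with a toy sketch. This is purely illustrative (all topic names and the `watch` function are hypothetical): the point is that adding one more thing to monitor just means adding one more worker, so no alert is dropped because the watcher "had other priorities."

```python
import concurrent.futures
from typing import Optional

# Hypothetical list of streams worth watching; a human team would have
# to triage these, but an agent pool can cover all of them at once.
TOPICS = ["shipping-lanes", "air-defense-radar", "leadership-comms"]

def watch(topic: str) -> Optional[str]:
    # Stand-in for an agent continuously monitoring one stream.
    # Here it simply flags one hard-coded topic to show the flow.
    if topic == "air-defense-radar":
        return f"ALERT: activity on {topic}"
    return None

# One watcher per topic; scaling coverage is just a bigger pool.
with concurrent.futures.ThreadPoolExecutor() as pool:
    alerts = [a for a in pool.map(watch, TOPICS) if a is not None]

print(alerts)
```

The design point is the shape, not the details: coverage scales with the number of workers rather than with a fixed budget of human attention, which is what makes the "never miss an opportunity" claim plausible.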
What This Means for the World
We are in a weird, temporary phase where one country controls a game‑changing technology while others are not far behind (sadly not the EU; I’m thinking of China, especially with H200s).
- You get to play at a higher level, but only for a short time and only in specific ways.
- In a year, others will have caught up, but by then you’ll have new capabilities you didn’t have a year ago.
If this were a game, you’d eventually saturate (you can’t play StarCraft that much better than the best humans). In real life, however, the “game” keeps changing: new pieces keep entering play while old pieces become irrelevant.
You can’t stay ahead forever; eventually humans become irrelevant to outcomes, and we’ll be fully in Culture territory. I personally wouldn’t mind living in the Culture, but it seems scary to rush toward it without a solid plan for surviving the transition.
Looking Ahead
I don’t have a good angle for working on that plan—maybe others do (ifanyonebuildsit.com). For now, my contribution is simply to point out that we seem to be in the early stages of overpowered AI and to make people notice what that feels like.