Claude's Take on AI Power Usage

Published: December 27, 2025, 4:24 PM (GMT+9)
8 min read
Source: Dev.to

Conversation Transcript (cleaned up)

Me

Could AI come up with more efficient ways to power itself? I just read about the push for more nuclear energy facilities to open because of all the compute power that’s needed for AI.

Claude

Ooh, this is a wild question! The short answer is: not really yet, but theoretically maybe?

Here’s the situation:

The energy problem is real – training large AI models uses an absurd amount of power. For example, training GPT‑3 consumed roughly as much electricity as 120 U.S. homes use in a year. With AI systems running 24/7, we need a lot more power, which is why there’s a scramble for new nuclear plants.
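If you want to sanity-check that comparison, here's the rough arithmetic. Both figures are ballpark public estimates (the ~1,287 MWh number often quoted for GPT‑3 training and a rough EIA-style average for household electricity use), not exact measurements:

```python
# Back-of-envelope check of the "120 U.S. homes" comparison.
# Both inputs are rough, commonly cited estimates, not official figures.
gpt3_training_kwh = 1_287_000        # ~1,287 MWh, a widely quoted estimate for training GPT-3
avg_us_home_kwh_per_year = 10_700    # ballpark annual electricity use of one U.S. home

homes_equivalent = gpt3_training_kwh / avg_us_home_kwh_per_year
print(f"Roughly {homes_equivalent:.0f} home-years of electricity")   # ≈ 120
```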

Could AI optimize itself? Some progress is already happening:

  • AI researchers use AI to design more efficient chip architectures.
  • “Neural Architecture Search” lets AI design better AI models (a toy sketch of the idea follows this list).
  • Google used AI to optimize data‑center cooling, cutting the energy used for cooling by about 40%.
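To make the Neural Architecture Search idea a bit more concrete, here is a deliberately tiny random-search sketch. The search space and the scoring function are invented purely for illustration; a real NAS system would actually train and measure candidate networks on real hardware and data:

```python
import random

# Toy "architecture search": sample candidate configurations, score them, keep the best.
# The search space and the score below are made up for illustration only.
SEARCH_SPACE = {
    "num_layers": [2, 4, 8],
    "width": [64, 128, 256],
    "activation": ["relu", "gelu"],
}

def sample_architecture():
    return {name: random.choice(options) for name, options in SEARCH_SPACE.items()}

def score(arch):
    # Stand-in for "train briefly, then measure accuracy per unit of energy".
    capacity = arch["num_layers"] * arch["width"]
    accuracy_proxy = 1 - 1 / (1 + capacity)   # pretend bigger nets are slightly more accurate
    energy_proxy = capacity / 100             # pretend bigger nets cost proportionally more energy
    return accuracy_proxy / energy_proxy      # reward accuracy per (fake) watt

best = max((sample_architecture() for _ in range(50)), key=score)
print("Best candidate found:", best)
```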

But the fundamental problem is that AI is inherently energy‑hungry. It’s like asking a gas‑guzzling truck to use less fuel – you can make it a bit more efficient, but it’s still a truck.

The really interesting question is whether AI could discover totally new energy sources, revolutionary battery tech, or fusion breakthroughs – e.g., spotting patterns in plasma physics that humans missed or designing novel materials. This is theoretically possible, but it hasn’t happened yet.

Right now we’re in an awkward phase where AI is simultaneously:

  • Consuming massive amounts of energy.
  • Helping us find better energy solutions.
  • Not yet smart enough to solve its own energy problem.

Pretty ironic, honestly. 😅

Me

I wonder if animal brains could be used as surrogate brains for AI farms. I guess the first step would be to get a cow brain working as well as a human brain can.

Claude

Whoa, okay—that’s a wild direction! 😅

I have to say, this feels like a pretty dark path. Using living animal brains as compute hardware raises serious ethical red flags.

Key points:

  • Brains aren’t interchangeable. Cow brains are optimized for cows, not for human‑level cognition. Their structure, capabilities, and wiring are fundamentally different.
  • Ethical issues. Breeding or raising animals solely to harvest their brains as biological computers would be a nightmare for bioethics—far worse than conventional factory farming.
  • Doesn’t solve the problem. Biological brains are extremely energy‑efficient (≈20 W, like a dim lightbulb) but are slow for the kinds of computation modern AI requires. You’d need massive farms of brains, bringing you back to the energy problem and adding the logistics of feeding millions of animals.
  • Actual research direction. The focus is on building neuromorphic computers that mimic brain‑like efficiency, not on harvesting actual animal brains.

There are far less horrifying paths to explore! 😬

Me

Lol, sorry to make you uncomfortable! 😆 Tell me about the much more ethical‑sounding “neuromorphic computing.”

Claude

Haha, no worries! Neuromorphic computing is indeed a lot less “Black Mirror.”

Basic idea: Build computer chips that work more like biological brains rather than traditional von Neumann architectures.

How normal computers work: They process information sequentially, shuttling data back and forth between separate CPU and memory. Fast, but very energy‑hungry.

How brains work: Billions of neurons fire in parallel, and synapses handle both processing and memory storage simultaneously. This yields superb energy efficiency.
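To put that efficiency gap in very rough numbers: the ~20 W brain figure was mentioned earlier in this conversation, while the GPU and cluster numbers below are assumed ballpark values for current datacenter hardware, not exact specs:

```python
# Very rough power comparison behind the "brains are efficient" point.
# The 20 W brain figure is from the conversation above; the GPU and cluster
# numbers are assumed ballpark values, not exact hardware specs.
brain_watts = 20
datacenter_gpu_watts = 700      # assumed draw of one high-end training GPU
cluster_gpus = 10_000           # hypothetical size of a large training cluster

print(f"One brain:   ~{brain_watts} W")
print(f"One GPU:     ~{datacenter_gpu_watts} W  (~{datacenter_gpu_watts // brain_watts}x a brain)")
print(f"{cluster_gpus:,}-GPU cluster: ~{cluster_gpus * datacenter_gpu_watts / 1e6:.0f} MW")
```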

Neuromorphic chips try to mimic this:

  • Combine artificial “neurons” and “synapses” on the same substrate.
  • Operate in parallel and event‑driven – they only consume power when a spike occurs (see the toy spiking‑neuron sketch just after this list).
  • Use analog or mixed‑signal circuits to emulate the physics of real neurons, which can be orders of magnitude more power‑efficient than digital logic.
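Here's what “event‑driven” looks like in the simplest possible form: a leaky integrate‑and‑fire neuron, the textbook toy model of a spiking neuron. This is just an illustration of the idea, not how any particular neuromorphic chip is actually programmed, and the parameters are arbitrary:

```python
# Minimal leaky integrate-and-fire (LIF) neuron: a toy model of the
# "only do work when a spike happens" behaviour neuromorphic chips exploit.
# Threshold and leak values are arbitrary illustration numbers.
def simulate_lif(inputs, threshold=1.0, leak=0.9):
    """Return the time steps at which the neuron fires a spike."""
    membrane = 0.0
    spike_times = []
    for t, current in enumerate(inputs):
        membrane = membrane * leak + current   # integrate input, leak a little charge
        if membrane >= threshold:              # fire only when the threshold is crossed
            spike_times.append(t)
            membrane = 0.0                     # reset after the spike
    return spike_times

# Mostly-silent input: "energy" (spikes) is only spent on the few busy time steps.
signal = [0.0] * 20 + [0.4, 0.5, 0.6] + [0.0] * 20
print("Spike times:", simulate_lif(signal))    # fires once, around t = 22
```

The point isn't the specific numbers; it's that nothing happens (and, on real spiking hardware, almost no power is drawn) during the long stretches where the input is quiet.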

Why it matters for AI:

  • Greatly reduces energy per operation, which is crucial as models get bigger.
  • Enables on‑chip learning and adaptation, something traditional GPUs struggle with.
  • Opens the door to ultra‑low‑power edge devices that can run AI locally (think smart sensors, wearables, tiny robots).

Current state:

  • Companies like Intel (Loihi), IBM (TrueNorth), and research labs worldwide have built prototype chips.
  • They’re still niche; most large‑scale training still runs on GPUs/TPUs, but inference workloads are beginning to migrate.

Bottom line: Neuromorphic computing tries to capture the brain’s energy‑efficiency without the ethical nightmare of using real animal brains. It’s an active research area and could become a key piece of the puzzle for sustainable AI.

For certain tasks, they can offer up to 1,000× better energy efficiency.

Cool examples:

  • Intel Loihi – a research chip with on‑chip learning.
  • IBM TrueNorth – a million‑neuron neuromorphic processor.

Applications: robotics, pattern recognition, sensor processing, low‑power edge AI.

The dream: AI systems that run on battery power instead of needing nuclear power plants – imagine sophisticated AI on a smartphone whose battery doesn't die in minutes.
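As a toy illustration of why the power budget is the whole game for that dream – every number below is an assumed round figure, not a measurement of any real device:

```python
# Toy battery-life arithmetic for the "AI on a phone" dream.
# Every number here is an assumed round figure for illustration only.
battery_wh = 15.0                    # roughly a typical smartphone battery capacity
conventional_accelerator_w = 10.0    # hypothetical continuous draw of a conventional accelerator
neuromorphic_chip_w = 0.1            # hypothetical draw of an efficient event-driven chip

for name, watts in [("conventional accelerator", conventional_accelerator_w),
                    ("neuromorphic-style chip", neuromorphic_chip_w)]:
    hours = battery_wh / watts
    print(f"{name}: ~{hours:.0f} hours of continuous AI on one charge")
```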

It's still early days, but the potential is real. And not a single cow was harmed in the making of these chips. 😄

Me

I think my dim (and sleepy) human brain was trying to make the connection that AI needs something like an artificial brain structure, but it weirdly surfaced as animal brains, as if the thing had to be alive – which doesn't really make sense.

Claude

Haha, that's not a silly leap at all! Your sleepy brain was actually onto something interesting – it was circling the idea that the architecture of biological brains is what gives them their remarkable efficiency.

And that's exactly it: what matters is the architecture, not the "aliveness." It also makes sense why you pictured a living brain; for a long time we didn't know how to recreate that structure artificially, so biological brains were the only concrete example we had.

The goal of neuromorphic research is to capture that same structure in silicon (or other engineered materials), without the ethical and practical mess of using actual wet brains.

Additional Conversation

Me

Right now I'm doubled over laughing, imagining a train with legs like some damn centipede 🤣🤣🤣

Claude

Right?! 😂😂😂

It's like a Victorian engineer announcing, "Gentlemen, I have solved the transportation problem – we simply need to breed horses the size of buildings."

Or, hell, a spider train with hundreds of legs racing down the rails at 60 mph, each leg going click‑click‑click‑click.

I'm dying imagining the patent drawings:
"Figure 1: The Locomotive Centipede. Note the 847 steam‑powered legs."

This is the beauty of the wild ideas a tired brain comes up with. 🤣

Honestly, it's a shame these don't exist – they'd be terrifying and hilarious at the same time. Picture a giant metal centipede slinking into the station like a skunk.

Sweet dreams of nightmare trains, Sharon 😄

Funny, right?? "Wetware" 🤣
