[Paper] Rethinking Intelligence: Brain-like Neuron Network

Published: January 27, 2026 at 06:52 AM EST
2 min read
Source: arXiv - 2601.19508v1

Overview

Since their inception, artificial neural networks have relied on manually designed architectures and inductive biases to better adapt to data and tasks. With the rise of deep learning and the expansion of parameter spaces, they have begun to exhibit brain‑like functional behaviors. Nevertheless, artificial neural networks remain fundamentally different from biological neural systems in structural organization, learning mechanisms, and evolutionary pathways. From the perspective of neuroscience, we rethink the formation and evolution of intelligence and propose a new neural network paradigm, Brain‑like Neural Network (BNN).

We further present the first instantiation of a BNN termed LuminaNet that operates without convolutions or self‑attention and is capable of autonomously modifying its architecture. Extensive experiments demonstrate that LuminaNet can achieve self‑evolution through dynamic architectural changes.

  • CIFAR‑10: LuminaNet achieves top‑1 accuracy improvements of 11.19 % over LeNet‑5 and 5.46 % over AlexNet, and outperforms MLP‑Mixer, ResMLP, and DeiT‑Tiny among MLP‑ and ViT‑style architectures.
  • TinyStories (text generation): LuminaNet attains a perplexity of 8.4, comparable to a single‑layer GPT‑2‑style Transformer, while reducing computational cost by ~25 % and peak memory usage by nearly 50 %.
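The paper summary does not describe how LuminaNet's autonomous architectural modification actually works. As a hedged illustration only, the general idea of a network that changes its own structure during training can be sketched as a NumPy MLP that widens its hidden layer when the loss plateaus; the plateau trigger, growth step, and all names below are assumptions for exposition, not the authors' method.

```python
import numpy as np

rng = np.random.default_rng(0)

class GrowingMLP:
    """Toy one-hidden-layer MLP that widens itself when training stalls.
    Illustrative only: not LuminaNet's actual growth mechanism."""

    def __init__(self, n_in, n_hidden, n_out):
        self.W1 = rng.normal(0, 0.5, (n_in, n_hidden))
        self.W2 = rng.normal(0, 0.5, (n_hidden, n_out))

    def forward(self, X):
        self.H = np.tanh(X @ self.W1)  # cache hidden activations for backprop
        return self.H @ self.W2

    def grow(self, k=2):
        """Append k hidden units with near-zero weights, so the network
        function is almost unchanged but gains capacity."""
        n_in = self.W1.shape[0]
        self.W1 = np.hstack([self.W1, rng.normal(0, 0.01, (n_in, k))])
        self.W2 = np.vstack([self.W2, rng.normal(0, 0.01, (k, self.W2.shape[1]))])

    def step(self, X, y, lr=0.05):
        """One full-batch gradient step on mean squared error."""
        pred = self.forward(X)
        err = pred - y
        dW2 = self.H.T @ err
        dH = err @ self.W2.T
        dW1 = X.T @ (dH * (1 - self.H ** 2))  # tanh derivative
        self.W2 -= lr * dW2 / len(X)
        self.W1 -= lr * dW1 / len(X)
        return float(np.mean(err ** 2))

# Toy regression target: y = sin(3x) on [-1, 1]
X = np.linspace(-1, 1, 64).reshape(-1, 1)
y = np.sin(3 * X)

net = GrowingMLP(1, 2, 1)
prev, widths = np.inf, [net.W1.shape[1]]
for epoch in range(2000):
    loss = net.step(X, y)
    if epoch % 200 == 199:
        if prev - loss < 1e-3:  # loss plateaued -> grow the architecture
            net.grow()
            widths.append(net.W1.shape[1])
        prev = loss
print("hidden widths over training:", widths)
```

The key design point in this sketch is that new units start with tiny weights, so growth preserves the current function while adding trainable capacity, which is one standard way to make structural change safe during training.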

Code and interactive structures are available at https://github.com/aaroncomo/LuminaNet.

Key Contributions

  • Introduction of the Brain‑like Neural Network (BNN) paradigm.
  • Development of LuminaNet, a convolution‑ and self‑attention‑free architecture capable of self‑modifying its structure.
  • Empirical validation on image classification (CIFAR‑10) and text generation (TinyStories) tasks, showing competitive performance with reduced resource consumption.
  • Open‑source release of code and interactive models.
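The TinyStories result above is reported as perplexity. For reference, perplexity is the exponential of the mean per-token negative log-likelihood; this is the standard definition, not something taken from the paper, and the numbers below are purely illustrative.

```python
import math

def perplexity(token_nlls):
    """Perplexity = exp(mean negative log-likelihood per token)."""
    return math.exp(sum(token_nlls) / len(token_nlls))

# A model that assigns every token probability 1/8.4 has per-token
# NLL of log(8.4), and therefore perplexity 8.4.
nll = math.log(8.4)
print(round(perplexity([nll] * 100), 1))  # 8.4
```

Intuitively, a perplexity of 8.4 means the model is, on average, about as uncertain as if it were choosing uniformly among 8.4 tokens at each step.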

Methodology

Please refer to the full paper for detailed methodology.

Practical Implications

This work falls under cs.NE (Neural and Evolutionary Computing). Beyond the paradigm itself, the reported ~25 % reduction in computational cost and near-50 % reduction in peak memory on TinyStories suggest practical relevance for resource-constrained deployment.

Authors

  • Weifeng Liu

Paper Information

  • arXiv ID: 2601.19508v1
  • Categories: cs.NE
  • Published: January 27, 2026