[Paper] Neuro-Vesicles: Neuromodulation Should Be a Dynamical System, Not a Tensor Decoration
Source: arXiv - 2512.06966v1
Overview
The paper Neuro‑Vesicles proposes a new way to think about neuromodulation in neural networks: instead of treating it as a static tensor that is multiplied into activations, the authors model it as a population of discrete, mobile vesicles that travel across the network graph, interact locally, and evolve over time. This dynamical perspective promises richer, more flexible forms of modulation that can capture both short‑lived, dense effects (like FiLM or attention) and rare, decisive interventions (like a small team of agents).
Key Contributions
- Vesicle abstraction: Introduces a self‑contained object v = (c, κ, ℓ, τ, s) that carries a payload, type, graph location, remaining lifetime, and optional internal state (a minimal code sketch of this object follows the list).
- Full dynamical pipeline: Defines emission, learned migration, probabilistic docking, content‑dependent release, and decay/absorption as a unified event‑based system.
- Mathematical formalism: Provides a rigorous specification of vesicle dynamics, including a continuous‑density relaxation that yields differentiable reaction‑diffusion equations on arbitrary graphs.
- Unified view of existing modulation tricks: Shows how FiLM, hypernetworks, attention, and other tensor‑based methods emerge as special cases of dense, short‑lived vesicles (a worked FiLM reduction appears after this list).
- Reinforcement‑learning control: Casts vesicle emission and migration policies as RL agents that can be optimized for downstream performance.
- Extension to spiking and neuromorphic hardware: Sketches how the same framework maps onto spiking neural networks and emerging chips such as Darwin3, enabling programmable neuromodulation on brain‑inspired hardware.
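To make the vesicle abstraction concrete, here is a minimal Python sketch of the tuple v = (c, κ, ℓ, τ, s). The class name, field types, and the step() helper are illustrative assumptions made for this summary, not the paper's reference implementation.

```python
from dataclasses import dataclass
from typing import Optional

import numpy as np


@dataclass
class Vesicle:
    """Illustrative container for v = (c, kappa, ell, tau, s)."""
    c: np.ndarray                   # payload vector delivered on release
    kappa: int                      # vesicle type (e.g., which migration/release behaviour applies)
    ell: int                        # current graph location (node id)
    tau: int                        # remaining lifetime, in event steps
    s: Optional[np.ndarray] = None  # optional internal state carried along the walk

    def step(self) -> bool:
        """Spend one unit of lifetime; return True while the vesicle is still alive."""
        self.tau -= 1
        return self.tau > 0
```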
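One way to unpack the claim that tensor‑style tricks are special cases: if every node receives a dense batch of vesicles that dock with certainty and expire after a single step, and the release operator applies a multiplicative and an additive payload conditioned on a context z, the net effect on an activation h is exactly the FiLM transform. The notation below is a hedged reconstruction of that correspondence, not the paper's derivation.

```latex
% Dense, one-step vesicles (tau = 1, p_dock = 1) carrying payloads gamma(z) and beta(z)
% collapse to FiLM-style feature-wise modulation of an activation h:
\tilde{h} \;=\; \gamma(z) \odot h \;+\; \beta(z),
\qquad \tau = 1, \quad p_{\mathrm{dock}} = 1 .
```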
Methodology
- Graph‑based network representation – The neural network is treated as a directed graph G = (V, E); each node holds the usual activations and parameters.
- Vesicle emission – When a node’s activity, loss gradient, or a meta‑signal exceeds a threshold, the node spawns a vesicle with a payload vector c (the simulation sketch after this list walks through these steps).
- Learned migration – Vesicles move according to a stochastic transition kernel κ that is itself parameterized and trained (e.g., via back‑prop or policy gradients).
- Docking & release – Upon landing on a node, a vesicle may dock with probability p_dock. Docked vesicles invoke a release operator that can:
  - Modulate activations (additive or multiplicative scaling)
  - Adjust local weights or learning rates
  - Trigger external memory reads/writes
- Decay/absorption – Each vesicle carries a lifetime τ. After τ steps it either decays (disappears) or is absorbed, possibly leaving a trace in a topological memory.
- Differentiable relaxation – For gradient‑based training, the discrete vesicle population is approximated by a continuous density field, leading to reaction‑diffusion equations that are fully differentiable (a generic form of such an equation is given after this list).
- RL formulation – Emission and migration policies are optimized with standard RL algorithms (e.g., PPO) to maximize task reward, allowing the system to learn when and where to modulate.
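Putting the steps above together, the pipeline can be pictured as a small event loop over the vesicle population. The sketch below reuses the Vesicle container from the earlier sketch and makes deliberately simple placeholder choices (threshold emission, a uniform random walk in place of the learned kernel κ, a purely multiplicative release, and a shared activation size across nodes); it illustrates the control flow rather than the paper's algorithm.

```python
import numpy as np


def simulate_step(graph, activations, vesicles, rng, emit_threshold=1.0, p_dock=0.5):
    """One illustrative event step: emission, migration, docking/release, decay.

    graph:       dict mapping node id -> list of successor node ids
    activations: dict mapping node id -> activation vector (one shared size here)
    vesicles:    list of Vesicle objects currently in flight
    """
    # Emission: nodes whose activity exceeds a threshold spawn a vesicle whose
    # payload is a bounded function of the local activation.
    for node, h in activations.items():
        if np.linalg.norm(h) > emit_threshold:
            vesicles.append(Vesicle(c=np.tanh(h), kappa=0, ell=node, tau=5))

    survivors = []
    for v in vesicles:
        # Migration: a uniform random walk stands in for the learned transition kernel.
        neighbours = graph[v.ell]
        if neighbours:
            v.ell = int(rng.choice(neighbours))

        # Docking & release: with probability p_dock, modulate the target node's
        # activation multiplicatively (one of several possible release operators).
        if rng.random() < p_dock:
            activations[v.ell] = activations[v.ell] * (1.0 + v.c)

        # Decay/absorption: keep the vesicle only while its lifetime lasts.
        if v.step():
            survivors.append(v)
    return survivors
```

A caller would run this once per event step, e.g. vesicles = simulate_step(graph, activations, vesicles, np.random.default_rng(0)), with the threshold emission and random walk replaced by the learned emission and migration policies described above.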
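For the differentiable relaxation, the summary only states that the discrete population is replaced by a density field governed by reaction‑diffusion dynamics on the graph. A generic equation of that kind, written with the graph Laplacian L, is shown below; the diffusion rate D_k and reaction term R_k are placeholders, not the paper's exact formulation.

```latex
% rho_k(v, t): density of type-k vesicles at node v. Diffusion along edges via the
% graph Laplacian L, plus a local reaction term coupling densities to activations a.
\frac{\partial \rho_k}{\partial t}
  \;=\; -\,D_k \, L \rho_k \;+\; R_k(\rho, a)
```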
Results & Findings
- Equivalence to tensor‑based modulation – Experiments on image classification (CIFAR‑10/100) demonstrate that a dense swarm of short‑lived vesicles reproduces the performance of FiLM and hypernetwork baselines while using fewer explicit parameters.
- Sparse, high‑impact modulation – In a set of reinforcement‑learning benchmarks (e.g., CartPole, Mini‑Grid), long‑lived vesicles learn to intervene only at critical decision points, yielding faster convergence and more robust policies compared with standard attention mechanisms.
- Neuromorphic feasibility – A prototype implementation on the Darwin3 chip shows that vesicle emission and migration can be realized with low‑overhead event‑driven logic, achieving a 30 % reduction in energy per inference relative to a static modulation baseline.
- Topological memory traces – Visualizing the vesicle density over time reveals emergent pathways that correspond to task‑relevant sub‑graphs, suggesting a built‑in form of structural credit assignment.
Practical Implications
- Dynamic model adaptation – Developers can embed vesicle layers to let models self‑modulate in response to runtime signals (e.g., concept drift, user feedback) without retraining the whole network.
- Efficient attention alternatives – For edge devices where memory is tight, sparse vesicle‑based modulation offers a lightweight way to achieve context‑aware processing without storing large attention maps.
- Programmable neuromorphic pipelines – The event‑driven nature of vesicles aligns naturally with spiking hardware, opening a path to deploy adaptive AI directly on chips like Loihi or Darwin3.
- Explainability hooks – Since vesicles leave traceable docking events, engineers can inspect when and where a model decided to modulate, aiding debugging and compliance audits.
Limitations & Future Work
- Scalability of discrete simulation – While the continuous relaxation mitigates gradient flow issues, simulating large numbers of discrete vesicles can become computationally expensive on conventional GPUs.
- Hyperparameter sensitivity – Emission thresholds, lifetime distributions, and migration kernel architectures require careful tuning; the authors note a need for automated meta‑learning strategies.
- Benchmark breadth – The current experiments focus on vision and small‑scale RL tasks; extending validation to large language models or real‑world streaming data remains an open challenge.
- Hardware integration – Although a proof‑of‑concept on Darwin3 is presented, full compiler and runtime support for vesicle primitives is still under development.
Neuro‑Vesicles thus opens a fresh research direction: treating neuromodulation as a living, moving population rather than a static tensor. For developers eager to build more adaptable, energy‑efficient, and interpretable AI systems, the framework offers a concrete set of tools and a compelling roadmap for future exploration.
Authors
- Zilin Li
- Weiwei Xu
- Vicki Kane
Paper Information
- arXiv ID: 2512.06966v1
- Categories: cs.NE
- Published: December 7, 2025