[Paper] ALIGN: Adversarial Learning for Generalizable Speech Neuroprosthesis

Published: March 18, 2026 at 05:38 PM EDT
2 min read
Source: arXiv - 2603.18299v1

Overview

Intracortical brain-computer interfaces (BCIs) can decode speech from neural activity with high accuracy when trained on data pooled across recording sessions. In realistic deployment, however, models must generalize to new sessions without labeled data, and performance often degrades due to cross-session nonstationarities (e.g., electrode shifts, neural turnover, and changes in user strategy). In this paper, we propose ALIGN, a session-invariant learning framework based on multi-domain adversarial neural networks for semi-supervised cross-session adaptation. ALIGN trains a feature encoder jointly with a phoneme classifier and a domain classifier operating on the latent representation. Through adversarial optimization, the encoder is encouraged to preserve task-relevant information while suppressing session-specific cues. We evaluate ALIGN on intracortical speech decoding and find that it generalizes consistently better to previously unseen sessions, improving both phoneme error rate and word error rate relative to baselines. These results indicate that adversarial domain alignment is an effective approach for mitigating session-level distribution shift and enabling robust longitudinal BCI decoding.

Key Contributions

  • ALIGN, a session-invariant learning framework that applies multi-domain adversarial training to semi-supervised cross-session adaptation of intracortical speech BCIs.
  • Joint training of a feature encoder with a phoneme classifier and a session (domain) classifier on the shared latent representation, so that task-relevant information is preserved while session-specific cues are suppressed.
  • Consistent improvements in both phoneme error rate and word error rate over baselines when decoding previously unseen recording sessions.

Methodology

ALIGN trains a feature encoder jointly with two heads operating on the shared latent representation: a phoneme classifier and a session (domain) classifier. The optimization is adversarial: the encoder minimizes the phoneme-classification loss while maximizing the domain classifier's loss, so the learned features retain task-relevant information but discard session-specific cues. Please refer to the full paper for architectural and training details.
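The adversarial objective can be sketched as a tiny NumPy toy. This is not the authors' implementation; the linear encoder, all shapes, the synthetic data, and the hyperparameters (`lam`, `lr`) are illustrative assumptions. The key line is the encoder update, which descends the task gradient while ascending the domain gradient (the gradient-reversal trick):

```python
import numpy as np

def softmax(logits):
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def xent(logits, labels):
    """Mean cross-entropy loss and its gradient w.r.t. the logits."""
    p = softmax(logits)
    n = len(labels)
    loss = -np.log(p[np.arange(n), labels] + 1e-12).mean()
    grad = p
    grad[np.arange(n), labels] -= 1.0
    return loss, grad / n

rng = np.random.default_rng(0)
n, d_in, d_z = 64, 32, 16
n_phonemes, n_sessions = 10, 4

X = rng.normal(size=(n, d_in))            # synthetic neural features
y_task = rng.integers(0, n_phonemes, n)   # phoneme labels
y_dom = rng.integers(0, n_sessions, n)    # session (domain) labels

W_enc = rng.normal(scale=0.1, size=(d_in, d_z))
W_task = rng.normal(scale=0.1, size=(d_z, n_phonemes))
W_dom = rng.normal(scale=0.1, size=(d_z, n_sessions))

lam, lr = 0.3, 0.05                       # reversal strength, step size
task_losses = []
for step in range(200):
    z = X @ W_enc                         # shared latent representation
    task_loss, g_task = xent(z @ W_task, y_task)
    dom_loss, g_dom = xent(z @ W_dom, y_dom)
    task_losses.append(task_loss)

    # Each head descends its own loss.
    gW_task = z.T @ g_task
    gW_dom = z.T @ g_dom

    # Encoder: descends the task loss but ASCENDS the domain loss
    # (gradient reversal), pushing session information out of z.
    g_z = g_task @ W_task.T - lam * (g_dom @ W_dom.T)
    gW_enc = X.T @ g_z

    W_task -= lr * gW_task
    W_dom -= lr * gW_dom
    W_enc -= lr * gW_enc
```

In a real system the encoder would be a deep network trained by autograd with a gradient-reversal layer between encoder and domain head; the sign flip on the domain gradient above plays the same role in this hand-rolled version.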

Practical Implications

By mitigating session-level distribution shift, this work moves speech neuroprostheses toward robust longitudinal use: a decoder trained on past sessions can generalize to new recording sessions without requiring labeled calibration data for each one.

Authors

  • Zhanqi Zhang
  • Shun Li
  • Bernardo L. Sabatini
  • Mikio Aoi
  • Gal Mishne

Paper Information

  • arXiv ID: 2603.18299v1
  • Categories: cs.LG, cs.NE, cs.SD
  • Published: March 18, 2026
