Apple trained an AI to recognize previously unseen hand gestures from wearable sensors
Source: 9to5Mac

Apple AI Hand‑Gesture Study
Apple recently trained an AI model to recognize hand gestures that weren’t part of its original training dataset. Key points of the study:
- Objective: Extend the model's ability to recognize new gestures it was never trained on.
- Approach: Align EMG‑signal representations with hand‑pose representations via cross‑modal representation learning, so the model generalizes beyond its training gestures.
- Results: The model consistently outperformed existing methods on unseen gestures while maintaining performance on the original set, and did so with a fraction of the training data.
- Implications: Demonstrates that Apple's AI can adapt to user‑defined inputs, paving the way for more flexible and personalized interactions across its devices.
Source: 9to5Mac – “Apple teaches AI new hand gestures” (July 2025).
The EMBridge Study
Apple published a new study on its Machine Learning Research blog, EMBridge: Enhancing Gesture Generalization from EMG Signals through Cross‑Modal Representation Learning. The work will be presented at the ICLR 2026 Conference in April.
Quick Primer: EMG
- EMG (Electromyography) measures the electrical activity generated by muscles during contraction.
- Applications range from medical diagnosis and physical therapy to prosthetic‑limb control, wearables, and AR/VR interaction.
Example: Meta’s Ray‑Ban Display glasses use a “Neural Band” (a wrist‑worn device) that “interprets your muscle signals to navigate Meta Ray‑Ban Display’s features.”
Datasets Used in the Study
| Dataset | Description | Key Stats |
|---|---|---|
| emg2pose | Large‑scale open‑source EMG dataset with synchronized hand‑pose data. | • 370 h of sEMG • 193 consenting users • 29 behavioral groups (discrete & continuous motions) • >80 M pose labels • 4 recording sessions per gesture category (different EMG‑band placements) • 2‑second non‑overlapping windows as input |
| NinaPro DB2 | Paired EMG‑pose data for pre‑training. | • 40 subjects • 49 hand gestures (finger flexions, functional grasps, combined movements) • 12 forearm electrodes, 2 kHz sampling • Hand kinematics captured by a data glove |
| NinaPro DB7 (used for downstream classification) | Same device & gesture set as DB2, but collected from 20 non‑amputated subjects. | • Evaluation set for gesture classification |
Processing details (emg2pose): EMG is instance‑normalized, band‑pass filtered (2–250 Hz), and notch‑filtered at 60 Hz.
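The preprocessing pipeline above can be sketched in a few lines. This is a minimal illustration, not the paper's code: the 2 kHz sampling rate (taken from the NinaPro setup), the 4th‑order Butterworth design, and the notch Q factor are assumptions; the paper only specifies the cutoff frequencies and the 2‑second windowing.

```python
import numpy as np
from scipy.signal import butter, iirnotch, sosfiltfilt, tf2sos

def preprocess_emg(emg, fs=2000.0, window_s=2.0):
    """Sketch of the emg2pose preprocessing described above.

    emg: (n_samples, n_channels) raw sEMG. The sampling rate `fs` and
    the filter designs are illustrative assumptions.
    """
    # Instance normalization: zero mean, unit variance per channel.
    emg = (emg - emg.mean(axis=0)) / (emg.std(axis=0) + 1e-8)
    # Band-pass 2-250 Hz (4th-order Butterworth, zero-phase).
    sos_bp = butter(4, [2.0, 250.0], btype="bandpass", fs=fs, output="sos")
    emg = sosfiltfilt(sos_bp, emg, axis=0)
    # Notch at 60 Hz to suppress mains interference.
    b, a = iirnotch(60.0, Q=30.0, fs=fs)
    emg = sosfiltfilt(tf2sos(b, a), emg, axis=0)
    # Slice into non-overlapping 2-second windows.
    win = int(window_s * fs)
    n_win = emg.shape[0] // win
    return emg[: n_win * win].reshape(n_win, win, emg.shape[1])
```

At 2 kHz, each 2‑second window is 4,000 samples per channel, which becomes one model input.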
Why EMBridge Matters
The study suggests that EMBridge could enable wrist‑worn devices (e.g., future Apple Watch models or smart glasses) to:
- Continuously infer hand gestures from EMG signals.
- Drive virtual avatars in VR/AR.
- Control prosthetic or robotic hands.
“A potential practical application of our framework is wearable Human‑Computer Interaction. In scenarios like VR/AR and prosthetic control applications, a wrist‑worn device must continuously infer hand gestures from EMG to drive a virtual avatar or robotic hand.” – Apple research team
In practice, this could open up new interaction methods, improve accessibility, and broaden the ecosystem of Apple’s wearables (Apple Vision Pro, Macs, iPhones, and rumored smart glasses).
The paper itself does not reference any specific upcoming Apple products, but the technology clearly paves the way for richer, EMG‑driven user experiences across Apple’s hardware lineup.
What is EMBridge?
EMBridge bridges the gap between real EMG muscle signals and structured hand‑pose data. The training pipeline consists of:
- Pre‑training on EMG and hand‑pose data separately.
- Alignment of the two representations so the EMG encoder can learn from the pose encoder.
- Masked pose reconstruction, where parts of the pose data are hidden and the model must reconstruct them using only EMG information.
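The masked‑pose‑reconstruction step can be sketched as below. This is a simplified illustration under stated assumptions: the 50% mask ratio, frame‑level masking, zero‑fill, and MSE loss are choices made here for clarity, not details confirmed by the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def mask_pose(pose, mask_ratio=0.5, rng=rng):
    """Hide a random subset of pose frames.

    pose: (T, J) sequence of T frames with J joint angles.
    Returns the masked sequence and a boolean mask (True = hidden).
    """
    mask = rng.random(pose.shape[0]) < mask_ratio
    masked = pose.copy()
    masked[mask] = 0.0  # hidden frames are zeroed out
    return masked, mask

def reconstruction_loss(pred, pose, mask):
    # The loss is computed only on the hidden frames, so the model
    # must infer them from the EMG-derived representation alone.
    return float(np.mean((pred[mask] - pose[mask]) ** 2))
```

In training, the model's prediction for the hidden frames would come from the EMG encoder; here, a perfect reconstruction trivially yields zero loss.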

Soft‑Target Training
To reduce errors caused by treating similar gestures as negatives, the authors introduced soft targets, which:
- Structure the representation space so similar gestures stay close together
- Improve generalization to unseen gestures
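The soft‑target idea can be sketched as a cross‑entropy against a similarity‑weighted label distribution instead of a one‑hot label. This is an illustrative sketch: how the paper actually builds its gesture‑similarity matrix (e.g., from pose‑space distances) and the temperature value are assumptions here.

```python
import numpy as np

def soft_target_loss(logits, target_idx, similarity, temperature=1.0):
    """Cross-entropy against a soft label distribution.

    Instead of a one-hot label, probability mass is spread over classes
    in proportion to their similarity to the true gesture, so nearly
    identical gestures are no longer pushed apart as hard negatives.
    similarity: (C, C) gesture-similarity matrix (an assumption here).
    """
    # Soft label: softmax over the true gesture's similarity row.
    sim = similarity[target_idx] / temperature
    target = np.exp(sim - sim.max())
    target /= target.sum()
    # Log-softmax of the model's logits (numerically stable).
    log_probs = logits - logits.max() - np.log(np.sum(np.exp(logits - logits.max())))
    return float(-(target * log_probs).sum())
```

With a strongly diagonal similarity matrix the soft labels reduce to (nearly) one‑hot labels, recovering standard cross‑entropy.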

Evaluation
EMBridge was evaluated on two benchmarks, emg2pose and NinaPro, and consistently outperformed existing methods—especially in zero‑shot (never‑before‑seen) gesture recognition—while using only 40% of the training data.

Limitations
- The model relies on datasets that contain both EMG signals and synchronized hand‑pose data, which can be difficult and costly to collect.

Despite this, the study is noteworthy at a time when EMG‑based device control is gaining traction.
For the full technical details—including the Q‑Former, MPRL, and CASCLe components—read the original paper.