Apple trained an AI to recognize hand gestures from sensor data
By Marcus Mendes
Published on March 11, 2026.
Apple has published a study on its Machine Learning Research blog, "EMBridge: Enhancing Gesture Generalization from EMG Signals through Cross-Modal Representation Learning." The researchers developed a cross-modal representation learning framework that bridges the gap between EMG muscle signals and structured hand pose data, training it on two datasets. EMBridge could pave the way for a future Apple Watch model to control devices such as Apple Vision Pro, Macs, iPhones, and other wearables. One limitation, however, is that training depends on recordings that pair EMG signals with synchronized hand poses, which constrains how broadly the model can be trained.
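Cross-modal representation learning of this sort is commonly done by training one encoder per modality to map synchronized pairs to nearby points in a shared embedding space. Below is a minimal NumPy sketch assuming a contrastive (InfoNCE-style) objective and linear stand-in encoders; the dimensions, encoder choices, and loss are illustrative assumptions, not Apple's actual EMBridge architecture, which this article does not detail.

```python
import numpy as np

rng = np.random.default_rng(0)

EMG_CHANNELS = 16   # assumed number of EMG electrodes per time window
POSE_DIM = 63       # assumed 21 hand joints x 3 coordinates
EMBED_DIM = 32      # shared embedding size
BATCH = 8           # synchronized EMG/pose pairs per batch

# Random linear "encoders" standing in for the real EMG and pose networks.
W_emg = rng.normal(size=(EMG_CHANNELS, EMBED_DIM)) / np.sqrt(EMG_CHANNELS)
W_pose = rng.normal(size=(POSE_DIM, EMBED_DIM)) / np.sqrt(POSE_DIM)

def embed(x, W):
    """Project inputs into the shared space and L2-normalize."""
    z = x @ W
    return z / np.linalg.norm(z, axis=1, keepdims=True)

def info_nce(z_a, z_b, tau=0.1):
    """InfoNCE loss: matched (diagonal) pairs should score highest."""
    logits = z_a @ z_b.T / tau
    m = logits.max(axis=1, keepdims=True)                # stable log-sum-exp
    log_z = m[:, 0] + np.log(np.exp(logits - m).sum(axis=1))
    return float(np.mean(log_z - np.diag(logits)))

# Fake synchronized batch: EMG windows and their matching hand poses.
emg = rng.normal(size=(BATCH, EMG_CHANNELS))
pose = rng.normal(size=(BATCH, POSE_DIM))

z_emg, z_pose = embed(emg, W_emg), embed(pose, W_pose)
loss = 0.5 * (info_nce(z_emg, z_pose) + info_nce(z_pose, z_emg))
print(f"symmetric InfoNCE loss: {loss:.3f}")
```

Minimizing a loss like this pulls each EMG window toward its paired hand pose in the shared space, which is one way such a framework could let pose-labeled data improve gesture recognition from EMG alone.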