For this year’s Apple Watches, the company surfaced a fingertip double-tap gesture derived from accessibility features introduced years before. It’s had me tapping my fingers a lot. Taking walks. On the train. Even before the feature was made available, I found myself doing little tap gestures, imagining what I could suddenly activate.
Apple’s double-tap gesture is here now, and I’ve been trying it for over a week. Sometimes it’s fantastic. Sometimes it feels annoyingly limited. But it’s made me want more. A lot more. I’ve been dreaming for years of future interfaces on wrists and on AR/VR headsets, and this little double tap feels like the tiniest entry point to something much bigger. It’s almost as if Apple is easing us into a whole new interface language, step by tiny step.
Next year, Apple is releasing a far more ambitious product: the Vision Pro, a combination AR/VR headset that folds all of iOS into a mixed-reality interface. It leans entirely on eye and hand tracking to control everything, and double-tapping is one of the key gestures it uses to “click” on things.
Is the Apple Watch double-tap, as it currently stands, a true doorway into a new gestural interface future? Not yet. It’s too laggy and too limited in its current iteration. But it’s a move I expect to expand, improve and carry over to wearables made by other companies, too.
A foot in the door on new ideas
Apple is quick to clarify that double-tapping on the Apple Watch isn’t the same as double-tapping on the Vision Pro. The two work via different technologies: the watch infers the gesture from optical heart rate and accelerometer/gyroscope measurements, while the Vision Pro uses external cameras to sense hand motion. The watch can’t even be used to control the Vision Pro, at least not yet. But it’s no accident that these gestures resemble each other.
Companies like Meta have already outlined a future where wrist trackers and headsets intertwine. In Meta’s vision,…