Apple Patents Technology for AR Touch Detection Using ML and Depth-Mapping Cameras
As the iPad and iPhone have amply demonstrated, much of Apple's hardware today relies on accurate detection of direct touch input: a finger resting against a screen or, in the Mac's case, on a trackpad. But as people around the globe come to depend on augmented reality for both entertainment and work, they need to interact with digital objects that carry no physical touch sensors at all.

Apple has recently patented a technique for detecting touch using machine learning (ML) and depth-mapping cameras. As patents go, the depth-based touch detection system is fairly straightforward: external cameras work together in real time to build a three-dimensional depth map, measure the distance of an object, a finger for instance, from a touchable surface, and determine when the object actually makes contact. Crucially, the distance measurement is designed to remain usable even when the cameras change position, and the system relies in part on a trained ML model to discern genuine touch inputs.

Illustrations accompanying the patent show three external cameras working together to determine the relative position of a finger, a concept that could prove important as Apple's AR hardware matures.
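The patent does not publish source code, but the core geometric step is easy to sketch. The Python snippet below is a minimal, hypothetical illustration of depth-based touch detection, not Apple's implementation: it back-projects depth pixels into 3D, fits a plane to the touchable surface, and flags a touch when the fingertip comes within a small distance of that plane. The function names, the pinhole-camera model, and the 5 mm threshold are all illustrative assumptions; in the patented system, a trained ML model rather than a fixed threshold would make the final touch/no-touch call.

```python
import numpy as np

def backproject(u, v, depth, fx, fy, cx, cy):
    """Convert a pixel (u, v) with depth in meters into a 3D camera-space point
    using a simple pinhole-camera model (an assumption, not from the patent)."""
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.array([x, y, z])

def fit_plane(points):
    """Least-squares plane fit: returns (unit normal n, offset d) with n.p + d = 0."""
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    normal = vt[-1]  # direction of least variance = plane normal
    return normal, -normal.dot(centroid)

def detect_touch(depth_map, fingertip_px, surface_px, intrinsics, threshold_m=0.005):
    """Return True when the fingertip lies within `threshold_m` of the surface.

    depth_map    : HxW array of per-pixel depths in meters
    fingertip_px : (u, v) pixel of the tracked fingertip (assumed found upstream)
    surface_px   : list of (u, v) pixels sampled on the touchable surface
    intrinsics   : (fx, fy, cx, cy) camera intrinsics
    """
    fx, fy, cx, cy = intrinsics
    surface_pts = np.array([
        backproject(u, v, depth_map[v, u], fx, fy, cx, cy) for u, v in surface_px
    ])
    normal, d = fit_plane(surface_pts)

    u, v = fingertip_px
    finger = backproject(u, v, depth_map[v, u], fx, fy, cx, cy)

    # Point-to-plane distance: because finger and surface are measured in the
    # same frame each capture, the distance stays meaningful even as the
    # cameras move, which is loosely why a depth-based approach tolerates
    # changing camera positions.
    distance = abs(normal.dot(finger) + d)
    return distance <= threshold_m
```

Replacing the fixed threshold with a classifier trained on hover-versus-touch examples is roughly where the patent's machine-learning component would slot in.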