Gesture-recognition interfaces for cell phones are closer to reality with technology from the University of Tokyo that lets you operate your phone or mobile device without laying a finger on it.
Researchers at the Ishikawa Komuro Laboratory have created a “vision-based input interface for mobile devices” through which users can type words by pointing in the air. There’s no dialing demo in the video below, though that would presumably be just as simple.
Unlike gestural interfaces such as MIT’s SixthSense, the system does not require special colored finger markings to track gestures.
A single high-speed camera embedded in the device, running at 154 frames per second, tracks finger positions in 3D, while a frequency filter isolates “clicking” gestures.
Users type by “clicking” keys on a virtual keyboard displayed on the screen. This would likely be slower than typing on a physical cell phone keyboard, but faster than old-fashioned multi-tap letter selection on a numeric keypad.
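The article doesn’t detail how the frequency filter works, but a rough sketch of the idea is possible: a click is a quick forward-and-back jab of the fingertip, which shows up as energy in a mid-frequency band of the depth signal, while slow pointing drift stays near DC. The band limits and threshold below are illustrative guesses, not the lab’s actual parameters.

```python
import numpy as np

FPS = 154  # camera frame rate reported for the system


def click_energy(z, lo_hz=5.0, hi_hz=15.0):
    """Energy of the fingertip depth signal inside a 'click' frequency band.

    z: 1-D array of fingertip depth samples taken at FPS Hz.
    The 5-15 Hz band is an assumed range for a quick jab gesture.
    """
    z = np.asarray(z, dtype=float) - np.mean(z)  # remove DC (resting depth)
    spectrum = np.fft.rfft(z)
    freqs = np.fft.rfftfreq(len(z), d=1.0 / FPS)
    band = (freqs >= lo_hz) & (freqs <= hi_hz)
    return float(np.sum(np.abs(spectrum[band]) ** 2) / len(z))


def is_click(z, threshold=1.0):
    """Classify a short window of depth samples as a click gesture."""
    return click_energy(z) > threshold
```

Slow pointing motion (well below 5 Hz) contributes almost nothing to the band, so only a rapid jab trips the threshold.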
Another application, seen in the video, is 3D fingertip painting: midair finger movements are translated into lines rendered in 3D space.
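Conceptually, the painting demo just has to connect successive tracked fingertip positions into a stroke and project them onto the screen. A minimal sketch, with made-up camera parameters (the focal length and screen center are assumptions, not values from the actual device):

```python
import numpy as np


def project_points(points_3d, focal=500.0, cx=160.0, cy=120.0):
    """Perspective-project 3D fingertip positions (x, y, z) onto the screen.

    focal, cx, cy are illustrative pinhole-camera parameters.
    """
    pts = np.asarray(points_3d, dtype=float)
    x, y, z = pts[:, 0], pts[:, 1], pts[:, 2]
    u = focal * x / z + cx  # horizontal screen coordinate
    v = focal * y / z + cy  # vertical screen coordinate
    return np.stack([u, v], axis=1)


def stroke_segments(points_3d):
    """Turn a sequence of tracked fingertip positions into 2D line
    segments that a renderer could draw as a painted stroke."""
    uv = project_points(points_3d)
    return list(zip(uv[:-1], uv[1:]))
```

Each new camera frame appends one point, so the stroke grows as the finger moves; keeping the raw z coordinate alongside would let the line be re-rendered from other viewpoints, as in the demo.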
The system can currently track only one finger, though tracking five is theoretically possible, according to a report in the Nikkan Kogyo newspaper.
Japanese companies including Toshiba and Pioneer have also been developing gesture-recognition interfaces. At the Ceatec 2009 gadget show, Hitachi demonstrated a prototype Gesture Remote Control that lets users change the settings on a TV screen by waving at it.