PhD student Yale Song and his team devised a system that recognizes 24 different--and extremely precise--gestures and postures, based on the signals aircraft carrier deck crews use to direct robotic planes before takeoff and after landing. The camera works a bit like the Kinect: it tracks different body movements and "combines" them to recognize a command.
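The article doesn't spell out how the "combining" works, but one common way to turn noisy per-frame pose classifications into a stable command is a sliding-window majority vote. Here's a minimal sketch of that idea--the function name, window size, and labels are all hypothetical, not from Song's actual system:

```python
from collections import Counter, deque

def classify_stream(frame_labels, window=5):
    """Hypothetical sketch: combine noisy per-frame gesture labels
    into stable commands via a sliding-window majority vote."""
    buf = deque(maxlen=window)
    commands = []
    for label in frame_labels:
        buf.append(label)
        if len(buf) == window:
            vote, count = Counter(buf).most_common(1)[0]
            # emit a command only when a strict majority of the window
            # agrees, and only once per run of the same gesture
            if count > window // 2 and (not commands or commands[-1] != vote):
                commands.append(vote)
    return commands
```

With a stream like four "wave" frames, one misclassified frame, and six "stop" frames, the vote smooths over the glitch and emits just the two intended commands. A real system would use something more sophisticated (the research literature on gesture recognition typically uses temporal models like HMMs), but the smoothing intuition is the same.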
The researchers realized that hand gestures are one of the most natural ways humans communicate with each other--so why not use them to communicate with technology, too?
Needless to say, we're not talking about big passenger planes here--at least not yet--and the technology itself is still in its infancy. Right now, the system accurately recognizes gestures only 76% of the time, but Song hopes to improve accuracy--and to make the system capable of learning new gestures and giving feedback (for instance, telling you whether or not it understood a command).
This could be pretty useful when planes are on the ground, but ground personnel would have to be really careful not to pause to scratch their foot or something...
[ via ]