A group of French Kinect hackers/developers (when you have the blessing of the company, is it really hacking?) proved that the Kinect is capable of understanding much more than your barked commands to pause Netflix: It can also understand sign language.
So far, the hack is pretty limited. We're doubtful that the Kinect's relatively low-resolution cameras can pick up the fine detail that full-speed American Sign Language entails, but at least we've now got evidence that it can understand some of the broader gestures. The hack in its current form can only recognize "hello" and "sorry," both of which are made up of exaggerated, highly visible movements, but it seems to do so flawlessly.
Plus, the developers (warning: French) expect to expand the Kinect's vocabulary very rapidly; the framework is all in place, so all they have to do is input new gestures and their corresponding meanings. That design also means the system isn't limited to sign language: at its core it's a simple translator of motions into meanings, so it could theoretically be wired up to all kinds of other commands. In the meantime, it could be a useful tool for teaching sign language, forcing the student to master each sign well enough for the Kinect to understand it.
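To get a feel for how a "motions into meanings" framework like this might work, here's a minimal sketch (our own illustration, not the developers' actual code): each known sign is stored as a template trajectory of tracked hand positions, and an observed motion is matched to the nearest template, so growing the vocabulary really is just a matter of adding more templates. All names here are hypothetical, and real gesture recognition would need normalization for position, scale, and timing that this toy version skips.

```python
import math

def resample(points, n=16):
    """Resample an (x, y) trajectory to n evenly spaced points along its
    arc length, so trajectories of different lengths can be compared."""
    dists = [0.0]
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        dists.append(dists[-1] + math.hypot(x1 - x0, y1 - y0))
    total = dists[-1] or 1.0
    out = []
    for i in range(n):
        target = total * i / (n - 1)
        j = 0
        while j < len(dists) - 2 and dists[j + 1] < target:
            j += 1
        seg = dists[j + 1] - dists[j] or 1.0
        t = (target - dists[j]) / seg
        (x0, y0), (x1, y1) = points[j], points[j + 1]
        out.append((x0 + t * (x1 - x0), y0 + t * (y1 - y0)))
    return out

def trajectory_distance(a, b):
    """Mean point-to-point distance between two resampled trajectories."""
    return sum(math.hypot(ax - bx, ay - by)
               for (ax, ay), (bx, by) in zip(a, b)) / len(a)

class SignTranslator:
    """Toy motion-to-meaning translator: nearest-template matching.
    Adding vocabulary is just calling learn() with a new gesture."""

    def __init__(self):
        self.templates = {}  # meaning -> resampled trajectory

    def learn(self, meaning, trajectory):
        self.templates[meaning] = resample(trajectory)

    def translate(self, trajectory, threshold=0.5):
        # Return the closest known meaning, or None if nothing is close.
        obs = resample(trajectory)
        best = min(self.templates,
                   key=lambda m: trajectory_distance(obs, self.templates[m]))
        if trajectory_distance(obs, self.templates[best]) < threshold:
            return best
        return None
```

The point of the sketch is the shape of the system, not the matcher itself: once the capture-and-match plumbing exists, "hello" and "sorry" are just two entries in a dictionary, and the same machinery could map a wave to "pause Netflix" as easily as to a word.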