The promise of Microsoft's Kinect was never simply to allow us to play games sans peripherals, but that one day an entirely new peripheral-free language would arise between us and our machines (many writers might pause here to mention the film Minority Report, but we're going to refrain). We're not all the way there yet, but a San Francisco startup is making a sub-$100 attempt at throwing open the door.
With the Kinect, Microsoft opened up the world of gestural controls to the masses, allowing users to manipulate video games and otherwise control their devices with simple motion controls. Now Microsoft Research is doing it again, this time using inaudible sound waves to create the same kind of gestural interface, no cameras necessary.
Back in 2009, we wrote about a little robotic dashboard companion called AIDA (short for Affective Intelligent Driving Agent), an MIT creation that read a driver's facial expressions to gauge mood and inferred route and destination preferences through social interaction with the driver.