With the Kinect, Microsoft opened up the world of gestural controls to the masses, allowing users to manipulate video games and otherwise control their devices with simple motion controls. Now Microsoft Research is doing it again, this time using inaudible sound waves to create the same kind of gestural interface, no cameras necessary.
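The researchers haven't released code, but the underlying principle is straightforward physics: the device's speaker emits a steady tone just above the range of human hearing, and a hand moving toward or away from the microphone Doppler-shifts the echo. Here's a minimal sketch of that detection step; the pilot-tone frequency, thresholds, and frame size are all illustrative assumptions, not the team's actual parameters:

```python
import numpy as np

FS = 44100        # sample rate (Hz); typical laptop audio hardware
F_TONE = 18000    # assumed pilot tone (Hz), just above most adults' hearing
C = 343.0         # speed of sound in air (m/s)

def doppler_shift(velocity):
    # An echo off a target moving at `velocity` m/s (positive = approaching)
    # is shifted by roughly 2*v/c times the carrier frequency.
    return F_TONE * 2 * velocity / C

def detect_motion(frame):
    """Label one audio frame 'toward', 'away', or 'still' by finding the
    strongest spectral peak in a narrow band around the pilot tone."""
    spectrum = np.abs(np.fft.rfft(frame * np.hanning(len(frame))))
    freqs = np.fft.rfftfreq(len(frame), 1 / FS)
    band = (freqs > F_TONE - 300) & (freqs < F_TONE + 300)
    peak = freqs[band][np.argmax(spectrum[band])]
    shift = peak - F_TONE
    if shift > 15:
        return "toward"
    if shift < -15:
        return "away"
    return "still"

# Synthetic check: a hand approaching at 0.5 m/s shifts an 18 kHz echo ~52 Hz.
t = np.arange(4096) / FS
echo = np.sin(2 * np.pi * (F_TONE + doppler_shift(0.5)) * t)
print(detect_motion(echo))  # -> "toward"
```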
"Resistive" touchscreens are the type you're most likely to use in a DIY microcontroller project. These consist of two screen layers coated with a resistive material and separated by a small gap. When touched, the layers make contact, creating a voltage divider circuit. The resulting voltage is easily measured and correlated to position. The top layer of the touchscreen is just a clear overlay, though; what really makes it work is the layer underneath.
Back in 2009, we wrote about a little robotic dashboard companion called AIDA (for Affective Intelligent Driving Agent), an MIT creation that essentially read a driver's facial expressions to gauge mood and inferred route and destination preferences through social interaction with the driver.
The next version of Samsung's blockbuster Galaxy S Android lineup, sensibly named the Galaxy S II, is shaping up to be a very impressive platform. The newest innovation we've seen from Samsung, via Android Community, is a change from the now-standard, Apple-pioneered "pinch-to-zoom" interface. Instead of pinching, you simply move the phone forwards and backwards to zoom in or out.
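Samsung hasn't said how the feature is implemented, but the natural ingredient is the phone's accelerometer: integrate front-to-back acceleration into displacement, then map that displacement onto a zoom factor. A toy model of that mapping, with the sample rate, decay, and sensitivity constants picked arbitrarily:

```python
class MotionZoom:
    """Toy model of move-to-zoom: integrate the phone's front-to-back
    acceleration into displacement, then map displacement onto a zoom
    factor. All constants here are guesses, not Samsung's values."""

    def __init__(self, dt=0.02, sensitivity=4.0):
        self.dt = dt          # sample period (s); assumes a 50 Hz sensor
        self.k = sensitivity  # zoom doublings per metre of travel
        self.velocity = 0.0
        self.displacement = 0.0

    def update(self, accel_z):
        # Double integration, with a mild leak on velocity so sensor bias
        # doesn't make the zoom drift while the phone is held still.
        self.velocity = 0.98 * self.velocity + accel_z * self.dt
        self.displacement += self.velocity * self.dt
        return 2.0 ** (self.k * self.displacement)  # zoom factor

zoom = MotionZoom()
for a in [2.0] * 25 + [0.0] * 25:   # brief push toward the user, then hold
    factor = zoom.update(a)
print(f"zoom ~{factor:.2f}x")       # ends zoomed in; pulling back reverses it
```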
In a press event yesterday that required far more than its characteristic 140 characters, Twitter’s top brass – co-founder Biz Stone, CEO Evan Williams, and two of its top product developers – unveiled the new and improved Twitter. And in what looks on its face to be an attempt to lure people away from those flashy, customizable Twitter apps and back to Twitter’s Web page, the Tweeps behind Twitter have come up with some pretty cool features.
With this headgear, you could make a robot go grab you a beer simply by glancing at the refrigerator.
A team of researchers at Northeastern University in Boston is working on a brain-robot interface that lets you command a robot by looking at specific regions on a computer screen. The system detects brain signals from the user's visual cortex and commands a robot to move left, right, or forward, the Boston Globe reports.
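The report doesn't detail the signal processing, but interfaces keyed to the visual cortex commonly rely on steady-state visually evoked potentials: each on-screen region flickers at its own frequency, and whichever frequency dominates the EEG reveals where the user is looking. A hedged sketch of that classification step, with the flicker frequencies, sample rate, and command mapping all assumed for illustration:

```python
import numpy as np

FS = 250  # EEG sample rate (Hz); typical for research-grade amplifiers

# Hypothetical mapping: each on-screen target flickers at its own rate.
TARGETS = {7.0: "left", 10.0: "right", 13.0: "forward"}

def classify_gaze(eeg_window):
    """Pick the robot command whose flicker frequency carries the most
    power in an EEG window recorded over the visual cortex."""
    spectrum = np.abs(np.fft.rfft(eeg_window * np.hanning(len(eeg_window))))
    freqs = np.fft.rfftfreq(len(eeg_window), 1 / FS)

    def band_power(f):
        return spectrum[(freqs > f - 0.5) & (freqs < f + 0.5)].sum()

    return TARGETS[max(TARGETS, key=band_power)]

# Synthetic check: a 10 Hz response buried in noise should map to "right".
t = np.arange(2 * FS) / FS
window = np.sin(2 * np.pi * 10.0 * t) + 0.5 * np.random.randn(len(t))
print(classify_gaze(window))
```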
Tired of seeing 3-D renderings of objects on your screen and being unable to grab and fondle them? Just slip your fingers into the firm grip of Japanese haptics robot HIRO III. Driven by 15 independent motors, its black phalanges provide real-time force feedback to your hand, precisely simulating the weight and contour of virtual 3-D objects, a pretty wild leap forward in interface technology!
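HIRO III's control loop isn't described motor by motor, but the textbook way to make a fingertip "feel" a virtual surface is penalty-based haptic rendering: when the tracked finger penetrates the object, push back with a spring force proportional to penetration depth, and add a share of the object's weight when it's held. A minimal sketch against a virtual sphere, with the stiffness and mass values invented for illustration:

```python
import numpy as np

STIFFNESS = 800.0   # N/m; assumed spring constant of the virtual surface
MASS = 0.3          # kg; assumed mass of the virtual object
G = np.array([0.0, 0.0, -9.81])

def fingertip_force(finger_pos, sphere_center, sphere_radius, holding=False):
    """Penalty-based rendering against a virtual sphere: the deeper the
    tracked fingertip sinks past the surface, the harder the finger's
    motors push it back out along the surface normal."""
    offset = finger_pos - sphere_center
    dist = np.linalg.norm(offset)
    force = np.zeros(3)
    if 0.0 < dist < sphere_radius:                 # fingertip inside surface
        normal = offset / dist                     # outward surface normal
        penetration = sphere_radius - dist
        force += STIFFNESS * penetration * normal  # spring pushes finger out
    if holding:
        force += MASS * G / 5.0   # object's weight, shared across 5 fingers
    return force                  # commanded to that finger's motor group

# Fingertip 2 mm inside a 3 cm sphere at the origin, while holding it:
print(fingertip_force(np.array([0.0, 0.0, 0.028]),
                      np.zeros(3), 0.03, holding=True))
```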
Imagine a gesture-based mobile device with no screen, no keyboard, and no other peripheral inputs or outputs, a mobile device that's not really a device at all. Can you see it in your mind's eye? If so, you're probably picturing something akin to a new "imaginary" interface envisioned by a German research student who wants to let users imagine their own graphical interfaces, operating their conjured keyboards via spatial memory and proprioception.
The future of touchscreen interfaces is: you? A collaboration between a Carnegie Mellon researcher and a couple of creative thinkers over at Microsoft Research has produced Skinput, a Bluetooth-enabled device that lets you use your own skin as an input peripheral for gadgets like cell phones, MP3 players, and gaming consoles.
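Skinput works by picking up the vibrations a finger tap sends through skin and bone with an armband of sensors, then classifying which spot was tapped. Here's a toy stand-in for that pipeline, using hand-rolled spectral features and a nearest-centroid classifier in place of the project's real machine-learning stage:

```python
import numpy as np

def tap_features(window):
    """Crude descriptors of one tap's vibration window: how the energy
    splits across four coarse frequency bands."""
    spectrum = np.abs(np.fft.rfft(window))
    bands = np.array([b.sum() for b in np.array_split(spectrum, 4)])
    return bands / (bands.sum() + 1e-9)  # normalize: loud and soft taps match

class TapClassifier:
    """Nearest-centroid classifier: average the feature vectors recorded
    for each tap location, then label new taps by whichever average
    they land closest to."""

    def __init__(self):
        self.centroids = {}

    def train(self, labeled_taps):          # iterable of (location, window)
        grouped = {}
        for location, window in labeled_taps:
            grouped.setdefault(location, []).append(tap_features(window))
        self.centroids = {loc: np.mean(f, axis=0)
                          for loc, f in grouped.items()}

    def classify(self, window):
        f = tap_features(window)
        return min(self.centroids,
                   key=lambda loc: np.linalg.norm(self.centroids[loc] - f))

# Usage: gather labeled (location, sensor_window) pairs, e.g. taps on the
# forearm vs. the palm, then:
#   clf = TapClassifier(); clf.train(samples); clf.classify(new_window)
```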