Imagine a gesture-based mobile device with no screen, no keyboard, and no other peripheral inputs or outputs, a mobile device that’s not really a device at all. Can you see it in your mind’s eye? If so, you’re probably picturing something akin to a new “imaginary” interface envisioned by a German research student who wants to let users imagine their own graphical interfaces, operating their conjured keyboards via spatial memory and proprioception.
The imaginary interface aims to solve two problems with existing mobile devices. First, users must currently rely on a touchpad, mouse, or buttons to navigate preconfigured options on a screen; in other words, we learn the devices rather than the other way around. Second, these devices are limited by their physical nature: buttons and screens can only get so small before they become unusable.
Other gesture-based interfaces have attempted to circumvent these problems to some degree, but they still rely on a screen or some kind of projection device to create a visual interface that a user must learn and interact with. Sean Gustafson of Potsdam University’s Hasso Plattner Institute wants to completely untether the mind from the device.
The only hardware required is a two-inch-square unit that attaches to clothing on the user's chest (eventually Gustafson and company would like to shrink it to the size of a button). A ring of LEDs floods the space immediately in front of the user with infrared light so that a camera in the unit can interpret gestures. The user can then imagine his or her own interface: by making an "L" shape with the non-dominant hand, for instance, the user creates an imaginary plane in which the interface will exist. From there the device has to do some learning, but the user can place buttons (say, a "send" button for messages, or an area for making simple sketches by hand) anywhere he or she wants in space.
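To make the idea concrete, here is a minimal sketch (with hypothetical function names; the article doesn't describe Gustafson's actual algorithms) of the geometry involved: the L-shaped hand defines a 2D plane in camera space, a tracked fingertip is projected into that plane's coordinates, and the result is checked against user-placed button regions.

```python
# Minimal sketch: mapping a tracked fingertip into the 2D "imaginary plane"
# a user defines with an L-shaped non-dominant hand. Points are (x, y, z)
# positions in camera space; everything below is plain vector math.

def sub(a, b): return tuple(ai - bi for ai, bi in zip(a, b))
def dot(a, b): return sum(ai * bi for ai, bi in zip(a, b))

def norm(a):
    length = dot(a, a) ** 0.5
    return tuple(ai / length for ai in a)

def plane_from_l_shape(corner, thumb_tip, index_tip):
    """The L gesture defines the plane: origin at the corner of the hand,
    x-axis along the thumb, y-axis along the index finger."""
    return corner, norm(sub(thumb_tip, corner)), norm(sub(index_tip, corner))

def to_plane_coords(point, plane):
    """Project a 3D fingertip position onto the plane's two axes."""
    origin, x_axis, y_axis = plane
    v = sub(point, origin)
    return dot(v, x_axis), dot(v, y_axis)

def hit_button(point_2d, buttons):
    """Return the imaginary button (if any) under the fingertip.
    `buttons` maps names to (x, y, width, height) rectangles in plane coords."""
    px, py = point_2d
    for name, (bx, by, w, h) in buttons.items():
        if bx <= px <= bx + w and by <= py <= by + h:
            return name
    return None
```

For example, with the plane's origin at the camera origin and the thumb and index finger along the x and y axes, a fingertip at `(3, 4, 0)` maps to plane coordinates `(3.0, 4.0)` and lands inside a "send" button placed at `(2, 2)` with a 4-by-4 extent.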
Remembering an invisible layout seems difficult within the current paradigm of visual interfaces, but Gustafson believes that when users place elements in their own interface, they will remember that placement both visually and proprioceptively (call it muscle memory mixed with the spatial perception of the limbs). And if a user truly forgets the location of something within the interface, it can simply be redrawn.
The system still needs some tweaking, of course: the infrared doesn't work well in bright light, and its ability to perceive highly precise gestures is less than perfect. But the idea is intriguing. The mobile interfaces of the future might be more than mobile in the it-fits-in-your-pocket sense; anywhere your imagination goes, your own custom interface goes as well.