This Tool Wants Cameras To Take Memories, Not Images

Saving only the important details

Security Camera In Red

RedEye is an image processing architecture that doesn't record pictures; it records descriptions of what it sees. PeacockArmageddon, via Flickr CC BY-ND 2.0

There is too much in the world to see. Digital cameras capture all that is within their field of vision, storing richly detailed images in large files. But for many purposes, they don’t have to. When we’re driving, we don’t need to know the license plates of all the cars around us, just their general shapes and positions. RedEye, a new image processing architecture, wants cameras to process images first, and then store only the relevant details. The goal: reducing the amount of data it takes to store images, so a wearable computer could watch what a human watches, continuously.

RedEye is in development by a team of researchers at Rice University's Department of Electrical and Computer Engineering, led by Robert LiKamWa. Here's how the school describes the process:

“Conventional systems extract an entire image through the analog-to-digital converter and conduct image processing on the digital file,” he said. “If you can shift that processing into the analog domain, then you will have a much smaller data bandwidth that you need to ship through that ADC bottleneck.” LiKamWa said convolutional neural networks are the state-of-the-art way to perform object recognition, and the combination of these techniques with analog-domain processing presents some unique privacy advantages for RedEye.

Essentially, the system doesn't need to take a picture of a dog; it needs to see and remember that it saw a dog, and to be able to tell that dog apart from a cat. If the signal is processed as it's received, RedEye can store just that relevant information and discard every other detail of the scene.
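The idea of summarizing a signal before it crosses the sensor's bottleneck can be sketched with a toy convolution-and-pooling front end. This is a hand-rolled digital illustration, not RedEye's analog circuitry; the kernel, image size, and pooling window are arbitrary choices made for the example:

```python
import numpy as np

rng = np.random.default_rng(0)
image = rng.random((128, 128))          # raw sensor frame: 16,384 values

def conv2d(img, kernel):
    """Valid 2-D convolution (illustrative, not optimized)."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i+kh, j:j+kw] * kernel)
    return out

# A simple vertical-edge detector stands in for a learned ConvNet filter.
edge_kernel = np.array([[1, 0, -1],
                        [1, 0, -1],
                        [1, 0, -1]])

features = conv2d(image, edge_kernel)   # 126 x 126 feature map

def max_pool(fmap, size=4):
    """Downsample by taking the max over non-overlapping size x size tiles."""
    h, w = fmap.shape
    h, w = h - h % size, w - w % size
    return fmap[:h, :w].reshape(h // size, size, w // size, size).max(axis=(1, 3))

pooled = max_pool(features)             # 31 x 31 summary

# Only the pooled summary would need to be read out and stored:
print(image.size, pooled.size)          # 16384 values in, 961 values out
```

Here roughly 94 percent of the values never need to leave the front end. RedEye's contribution is doing this kind of reduction in the analog domain, before the analog-to-digital converter, rather than in software as shown above.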

Does it work? From the paper:

We investigate the utility of RedEye in performing analog convolutional processing for continuous mobile vision. We find that RedEye reduces sensing energy consumption by 84.5%. The reduction primarily comes from readout workload reduction. RedEye also assists mobile CPU/GPU systems by replacing the image sensor, nearly halving the system energy consumption by moving convolutional processing from the digital domain to the analog domain.

The work is still in progress, but that progress points towards a wearable machine that records images not as images, but as memories observed in place. This is one step closer to a machine remembering where we left our keys. To truly become human, though, it will need to forget where they went.