There is too much in the world to see. Digital cameras capture everything within their field of view, storing richly detailed images in large files. But for many purposes, all that detail is unnecessary. When we’re driving, we don’t need to know the license plates of all the cars around us, just their general shapes and positions. RedEye, a new image processing architecture, wants cameras to process images first and then store only the relevant details. The goal: to cut the amount of data it takes to store images, so a wearable computer could watch what a human watches, continuously.

RedEye is being developed by a team of researchers at Rice University’s Department of Electrical and Computer Engineering. Here’s how the school describes the process:

Essentially, the system doesn’t need to take a picture of a dog; it needs to see and remember that it saw a dog, and to be able to tell that dog apart from a cat. If the signal can be processed as it’s received, RedEye can store the relevant information without remembering any other details of the scene.
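The idea can be sketched in a few lines of code. This is only an illustration of the storage principle described above, not RedEye’s actual pipeline: the `classify` stub below stands in for the convolutional processing the researchers run near the sensor, and its toy brightness rule, the function names, and the data are all invented for the example.

```python
def classify(frame):
    """Stand-in for the real near-sensor processing (hypothetical toy rule:
    frames brighter than 0.5 on average are 'dog', the rest 'cat')."""
    avg = sum(frame) / len(frame)
    return "dog" if avg > 0.5 else "cat"

def observe(frames):
    """Process each frame as it arrives and keep only a compact memory
    (step, label) -- the raw pixels are discarded, never stored."""
    memories = []
    for step, frame in enumerate(frames):
        memories.append((step, classify(frame)))
    return memories

# Two tiny mock "frames" of pixel intensities in [0, 1].
frames = [[0.9, 0.8, 0.7], [0.1, 0.2, 0.3]]
print(observe(frames))  # [(0, 'dog'), (1, 'cat')]
```

The point of the sketch is the asymmetry: each frame comes in as a list of pixel values but leaves storage as a single remembered fact, which is why continuous capture becomes feasible.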

Does it work? From the paper:

The work is still in progress, but that progress points towards a wearable machine that records images not as images, but as memories observed in place. This is one step closer to a machine remembering where we left our keys. To truly become human, though, it will need to forget where they went.