
In Head Trip, PopSci explores the relationship between our brains, our senses, and the strange things that happen in between.

WHEN YOU LOOK AT an aerial image of a crater, what do you see? Does it look like a big bowl that’s been pressed into the ground? Or does it appear as a mound popping out at you? How you see it depends on the angle from which the light hits it: a small factor with vast consequences for how we perceive the world.

The crater illusion is a phenomenon in which indentations, such as craters, footprints, or even sectioned dinner plates, suddenly appear to bulge outward like buttons when the image is flipped upside down and the light source no longer seems to be coming from above. Satellite photos of craters are taken from overhead, and, typically, a shadow is cast inside the cavity only when the sun’s rays are nearly parallel to the surface. When the light in the image comes in from the horizon rather than from overhead, our perception of the scene is altered. Once we rotate the picture so that the light appears to come from above, the crater looks concave again.

Why it’s hard to tell if moon craters are holes or bumps
Occator Crater and Ceres’ Brightest Spots. Image: NASA/JPL-Caltech/UCLA/MPS/DLR/IDA/PSI

Human visual processing operates on a default “light from above” assumption because most of the light we encounter in the world, from the sun to the LEDs in our homes, usually comes from the sky or the ceiling. Our visual systems capitalize on this regularity to infer the shapes and curvatures of surfaces. Even when our brains misread a crater as jutting outward, the perceptual process behind the illusion is operating correctly.

“It’s kind of like an order of operations that the visual system does,” says Jason Fischer, an assistant professor in the Department of Psychological and Brain Sciences at Johns Hopkins University. “By flipping the image upside down, but maintaining that assumption of light coming from above, it causes the reinterpretation of the shape.”

The way the human brain processes information is curious. Our senses aim to deliver the most accurate and useful interpretation of the world possible, since they are our way of accessing physical reality. But the confusion between convex and concave caused by the angle of the light speaks to the broader computational challenges our brains face on a daily basis. In the hollow face illusion, for instance, the sunken side of a human mask looks convex at first glance because, in the real world, faces protrude.

Another is the moon illusion, where the pearly orb seems massive when it hangs low near Earth’s horizon even though its size hasn’t changed. There isn’t a definitive answer for why our brains perceive the moon this way. But some scientists believe we’re relying on the same mechanism illustrated by the Ponzo illusion, in which a pair of converging lines muddies our perception of two objects that are the same size. In this illusion, our brains read the converging lines as if they are receding into the distance; therefore, an object sitting farther along those lines appears larger. Because our brains are hardwired to interpret scenes this way, trees and buildings are thought to serve as the converging lines in the case of the moon. That is, in part, how we ended up with the term “supermoon”: “It looks farther away when it’s closer to the horizon, and therefore you interpret it as being larger,” Fischer explains.

Whether looking at the moon from Earth or through NASA’s probes and telescopes, we’re rapidly constructing and reconstructing our understanding of what’s in front of us based on the input we acquire from our environment. Despite living in a 3D world, our eyes record our surroundings as flat images. So our visual processes evolved to make what Fischer calls “reasonable assumptions.”

“You’re using these two flat images to try to recover depth in the scene, [but] it’s massively under-determined, meaning that there’s not enough information to perfectly know the 3D shapes and positions of objects in the scene,” Fischer says. “So you’re going to have to make some best guesses about it.” 

Remember, one of the things our visual system looks for is cues about depth. Light and shadows play an important role in helping our brains discern whether something is convex or concave, but so does texture, says Arthur Shapiro, a professor at American University and an editor of the Oxford Compendium of Visual Illusions. A coarse texture gradient that shifts into a finer one, for example, will make a 2D image appear to slant away from us.

You can explore this trick with more than just craters. “In aerial photography where you have canyons with lots of shadows, often the canyons will look like mountain ranges, and the mountain ranges will look like rivers,” Shapiro explains. “That’s because you’re no longer in a world where lighting is coming from above. It’s coming from the left or the right.” That’s why astrophotographers often use binoculars to help the final image appear correctly to the naked eye.

Even though our visual systems are limited, they really are doing the best they can with what they’ve got.
