In a virtual world, hearing is as important as seeing. When you walk on a moon, you want to hear footsteps in the dust. So it’s no wonder virtual-reality headset-makers like HTC, Oculus, and Sony have invested in 3D audio engines that immerse the user in location-specific sounds. San Diego headphone-maker Ossic, however, says that’s not enough. “To get accurate 3D audio, you have to take the individual ear into account,” says Ossic co-founder and CEO Jason Riggs. Because no two sets of ears are alike, Ossic created headphones calibrated to a user’s physiology, delivering the most true-to-life sounds in VR yet.
1) Head Shape
Sound reaches each ear at different times, depending on proximity to the source. Head size greatly changes the length of that perceived delay. Ossic put sensors in the ear cups and headband that measure the distance between your ears, allowing an onboard processor to re-create those delays.
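The delay described above is what audio engineers call the interaural time difference. A minimal sketch of the idea, using the classic spherical-head (Woodworth) approximation rather than anything specific to Ossic's processor, looks like this:

```python
import math

SPEED_OF_SOUND = 343.0  # m/s in air at room temperature

def interaural_time_difference(head_width_m, azimuth_deg):
    """Approximate extra time (seconds) sound takes to reach the far ear.

    Spherical-head (Woodworth) model: for a distant source at azimuth
    theta from straight ahead, the extra path around a head of radius r
    is r * (sin(theta) + theta). head_width_m is ear-to-ear distance (2r).
    """
    r = head_width_m / 2.0
    theta = math.radians(azimuth_deg)
    extra_path = r * (math.sin(theta) + theta)
    return extra_path / SPEED_OF_SOUND

# A sound directly to one side of an average (~17.5 cm wide) head
# arrives at the far ear roughly 0.65 milliseconds late:
itd = interaural_time_difference(0.175, 90)
```

Sub-millisecond differences like this are exactly why head size matters: a wider head lengthens the delay, and the brain uses that delay to place the sound.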
2) Ear Shape
The external curves of our ears help our brains figure out where sounds come from: above, below, in front, behind. Ossic engineers surround each ear with a group of four drivers, and use motion sensors to tell onboard software which drivers to use, creating an accurate illusion.
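One plausible way to steer sound among four drivers around an ear (a simplified illustration, not Ossic's actual driver layout or algorithm) is to crossfade between the two drivers that bracket the sound's direction:

```python
import math

# Hypothetical layout: four drivers ringing the ear, angles in degrees
# measured in the plane of the ear cup (0 = toward the front).
DRIVER_ANGLES = [45, 135, 225, 315]

def driver_gains(source_angle_deg):
    """Equal-power crossfade between the two drivers nearest the source.

    Returns one gain per driver; the squared gains of the active pair
    sum to 1, keeping perceived loudness constant as the sound moves.
    """
    n = len(DRIVER_ANGLES)
    angle = source_angle_deg % 360
    for i in range(n):
        a = DRIVER_ANGLES[i]
        b = DRIVER_ANGLES[(i + 1) % n]
        span = (b - a) % 360            # angular gap between the pair
        offset = (angle - a) % 360      # how far past driver i the source is
        if offset <= span:
            frac = offset / span
            gains = [0.0] * n
            gains[i] = math.cos(frac * math.pi / 2)
            gains[(i + 1) % n] = math.sin(frac * math.pi / 2)
            return gains

# A sound dead ahead (0 degrees) splits evenly between the two
# front drivers (indices 0 and 3):
gains = driver_gains(0)
```

Which drivers fire, and how strongly, then shifts smoothly as motion sensors report the sound's direction relative to the ear.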
3) Head Movement
Virtual reality requires that we look around the world we’re immersed in. So the location of noises needs to update as quickly as the images. Onboard accelerometers, gyroscopes, and a compass track head movement, updating audio within milliseconds of your slightest twitch.
This article was originally published in the May/June 2016 issue of Popular Science, under the title *Bespoke 360 Audio*.