Why our devices make the sounds they make
Every click, clack, and whoosh serves a specific purpose.
Brian Eno was in a rut. The English artist had built a career producing legendary acts like David Bowie, Talking Heads, and U2. But in the early 1990s, “I was completely bereft of ideas,” he told the San Francisco Chronicle. “I’d been working on my own music for a while and was quite lost, actually.”
Then Microsoft called.
At the time, the Redmond, Washington-based technology company was preparing to launch Windows 95, its most user-friendly operating system to date. And it needed a startup sound for the moments between a user pressing the “on” button and the computer actually being ready to use.
“The thing from the agency said, ‘We want a piece of music that is inspiring, universal, blah-blah, da-da-da, optimistic, futuristic, sentimental, emotional,’ this whole list of adjectives, and then at the bottom it said ‘and it must be 3 1/4 seconds long,’” Eno told the Chronicle. “In fact, I made 84 pieces. I got completely into this world of tiny, tiny little pieces of music.”
The project, which Eno says liberated him from his creative block, marked an important moment in the increasingly close relationship between our devices and our ears. While we rarely reflect on the sounds our laptops, cell phones, and tablets make—and few today play “piece[s] of music” quite like Eno’s Windows 95 composition—every click, clack, and whoosh is carefully crafted. Across platforms, software engineers, user experience designers, and sound branding experts share a common goal: to help us make sense of our technology, and keep us coming back for more.
Age of the earcon
Companies have used sounds to subtly reinforce their brand’s message for almost a century. Early examples include the NBC chimes, which received the first trademark for sound alone from the U.S. Patent and Trademark Office, and MGM’s lion, which first roared in 1928.
But the field really rose to prominence in the 1990s, alongside the rapid development of new consumer technology. Phone companies may have all offered the same service, but they wanted to stand out from their competitors, and hopefully draw more customers in the process.
“There’s a huge element of branding with sound,” says Karen Kaushansky, a user experience designer with more than 20 years of experience. “When you’re building a product for a certain company, what is the meaning we want to put into that sound? And does the brand itself have some audio characteristics or components that can go into the sound?”
In the late 1980s, Apple faced its own sonic branding problem. Every time Macintosh computers rebooted, they played a chord known as “the devil’s interval.” “It’s any two tones that are three whole steps apart and played at the same time, like middle C plus the F# above it,” the authors of The Sonic Boom: How Sound Transforms the Way We Think, Feel, and Buy explained. “It’s disconcerting, provoking a feeling of agitation and anxiety.”
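In equal-temperament tuning, the interval the authors describe spans six semitones, which works out to a frequency ratio of 2^(6/12) = √2. A quick sketch of the arithmetic (assuming standard A440 concert pitch):

```python
# Equal-temperament pitch math for the "devil's interval" (tritone):
# two notes six semitones apart, a frequency ratio of 2**(6/12) = sqrt(2).
A4 = 440.0  # standard concert pitch, Hz

def note_freq(semitones_from_a4: int) -> float:
    """Frequency of the note a given number of semitones above/below A4."""
    return A4 * 2 ** (semitones_from_a4 / 12)

c4 = note_freq(-9)        # middle C ≈ 261.63 Hz
f_sharp4 = note_freq(-3)  # the F# above it ≈ 369.99 Hz

print(round(f_sharp4 / c4, 4))  # 1.4142 — the irrational √2 ratio
```

That irrational √2 ratio is why the two tones never settle into a consonant alignment, which is what makes the interval feel so unresolved.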
So Jim Reekes, one of Apple’s engineers, decided to change it. “I thought, I gotta have this meditative sound,” Reekes told The Sonic Boom authors Joel Beckerman and Tyler Gray. “I used to joke about it being a palate cleanser for the ears.” He found what he was looking for in a fading C-major chord in stereo. Company executives were opposed to the “earcon” (a word that means, roughly, “sound icon”), but Reekes managed to sneak his calming chord onto the Macintosh Quadra 700 computer, which debuted in 1991, anyway. Just as Reekes anticipated, the sound was a hit with users.
Sound has proven a natural fit in a UX designer’s attention-grabbing toolkit. Bright colors and lights keep our eyeballs hooked, and variable content means our brain’s itch is never totally scratched. It’s even easier to attract our ears. “The specific range, where a baby cries—a lot of devices are tuned to that frequency,” says Amber Case, a user experience designer and author of the forthcoming book, Designing Products with Sound: Principles and Patterns for Mixed Environments with Aaron Day.
The Marimba ringtone is just one example of sound design capitalizing on our sensitivity to these frequencies. In the 1950s, Bell Labs, which grew out of telephone inventor Alexander Graham Bell’s research facilities, was hard at work studying ringers. They tested sounds of all stripes, from musical trills to buzzers. They found that sounds in the range of human hearing that pulsed from near-silence to full sound over a period of 3 to 5 seconds were most successful at capturing our attention.
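The swelling pattern Bell Labs identified can be sketched as a simple amplitude envelope. This is a toy illustration of the idea, not the labs’ actual ringer design:

```python
def ring_envelope(t: float, swell: float = 4.0) -> float:
    """Amplitude (0.0 to 1.0) at time t for a ring that repeatedly
    swells from near-silence to full volume over `swell` seconds."""
    return (t % swell) / swell

# Sampled across one 4-second swell: quiet at the start, loud by the end,
# then the pulse resets and builds again.
samples = [round(ring_envelope(t), 2) for t in (0.0, 1.0, 2.0, 3.0)]
print(samples)  # [0.0, 0.25, 0.5, 0.75]
```

Multiplying a tone by an envelope like this is what gives a ringer its gradual, attention-grabbing build rather than an abrupt blast.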
The Marimba meets all of these criteria, which is why it became one of the iPhone’s most successful ringtones. “The sound is unique enough that the human brain could easily detect the sound even when layered in a crowded soundscape,” sound consultant Brian Roemmele wrote in a short history of the tone. “It is as annoying perhaps to us today as the original [B]ell telephone ringers were to our grandparents, but in the end, that’s the point.”
Sounds of the future
Sound design is also important in guiding a user through a potentially complicated interface. When an iPhone user types, for example, they hear a click-clack sound like the keys on a keyboard or typewriter. “You need to have these metaphors,” says Cliff Kuang, author of the forthcoming book “User Friendly: How the Hidden Rules of Design Are Changing the Way We Live, Work, and Play” with Robert Fabricant. “This is how new experiences get introduced.”
The keyboard sound and others like it are examples of skeuomorphism, a common device that builds associations by mimicking an action’s real-life counterpart. It’s the same principle behind the crumpling sound you hear when you put a document (that looks like a paper file) in a waste bin (that looks like a real garbage can, loaded with already-crumpled papers) on a MacBook.
“The most famous example among UX designers is the actual sound of the lock screen on an iPhone,” Kuang says. “It was an association between, ‘Oh, yeah, I can feel that lock snapping because I heard that sound.’ There was a sort of synesthesia to it.” Of course, brand still matters—every platform has its own sound dictionary, with slightly different dings and pings—but usability is at the core of this practice. “A whoosh is a whoosh is a whoosh, but everybody designs that swooping sound slightly differently,” Kuang says.
In recent years, some UX designers have begun to question whether sticking with the familiar skeuomorphic approach still works, or if it’s time to make more useful and intuitive sounds from scratch. For her part, Kaushansky is currently hard at work designing sonic experiences for robots and autonomous vehicles. When asked if driverless cars will mimic sounds in current cars, for nostalgia’s sake or the comfort of passengers and pedestrians, she’s skeptical.
“Horns are kind of dumb. There’s no meaning in the horn you hear. You hear a honk and look around, like, was that for me? There’s no way to know right now,” she says. “We could be doing a lot better job between a ‘toot toot’—a ‘thanks’—versus an ‘errr’—a ‘get out of the way!’”
By disentangling emerging technology from ineffectual practices of the past, Kaushansky thinks “we could make our roads safer, or better, or even more interesting.”
As certain chirps become familiar, tastes change, or design theory moves in a new direction, many sounds are eventually retired. Microsoft stopped supporting Windows 95 in 2001. And Apple phased out startup sounds for MacBooks in 2016. But the best earcons live on, in countless YouTube clips and the memories of old-school users. And if designers and engineers have their way, the sounds currently under development for autonomous vehicles, robots, and revamped social media platforms will have the same effect on users of the future.