John Rennie and "the Beast" aboard the _Wave Sentinel_ in the port of Dorset, England. Jonathan Worth

For the past five years, John Rennie has braved the towering waves of the North Atlantic Ocean to keep your e-mail coming to you. As chief submersible engineer aboard the Wave Sentinel, part of the fleet operated by U.K.-based undersea installation and maintenance firm Global Marine Systems, Rennie–a congenial, 6’4″, 57-year-old Scotsman–patrols the seas, dispatching a remotely operated submarine deep below the surface to repair undersea cables. The cables, thick as fire hoses and packed with fiber optics, run everywhere along the seafloor, ferrying phone and Web traffic from continent to continent at the speed of light.

The cables regularly fail. On any given day, somewhere in the world there is the nautical equivalent of a hit-and-run when a cable is torn by fishing nets or sliced by dragging anchors. If the mishap occurs in the Irish Sea, the North Sea or the North Atlantic, Rennie is called in to splice the severed cable back together.

On one recent expedition, Rennie and his crew spent 12 days bobbing in about 250 feet of water 15 miles off the coast of Cornwall in southern England looking for a broken cable linking the U.K. and Ireland. Munching fresh doughnuts (a specialty of the ship’s cook), Rennie and his team worked 12-hour shifts exploring the rocky seafloor with a six-ton, $10-million remotely operated vehicle (ROV) affectionately known as “the Beast.”

Long Arm of the Beast

The Beast is like a lunar lander on steroids. Working at depths of more than a mile, it can trundle along the seabed on caterpillar treads or, when its thrusters kick in, skim above canyons like a hovercraft, at a top speed of three knots. Rennie and his team of six control the Beast via a joystick, using its sonar, video cameras and metal detector to locate damaged cables. Plucking a cable from the ocean floor is akin to picking up a piece of thread in a blizzard while wearing a catcher’s mitt. Currents can be fierce, which makes it difficult to hold the Beast steady above the cable. Visibility can be close to nil, which means that even finding the cable in the first place can be a long and frustrating process of trial and error. But according to Rennie, “gripping and cutting is the trickiest.” This delicate piece of submarine surgery has to be performed quickly and cleanly, using only a murky video image as a guide.

When Rennie found the U.K.-Ireland cable–fishermen had cut it after it became entangled in a dragnet–the Beast’s manipulator arm grabbed it, sliced it clean, and brought each end to the surface. On board the ship, the cable was repaired and x-rayed (Rennie needed to make sure the splice was set right, as with a broken bone), then tested and lowered to the seafloor. “There is no time for celebration when we fix a cable,” Rennie says. “There is lots of pressure from cable owners to move quickly. They are losing revenue.”

Most cable breaks go unnoticed by users. Maybe a YouTube clip will take someone a nanosecond longer to download, but that’s about all anyone might notice when a single cable snaps. There are so many different lines connecting so many different places that service providers can usually reroute around any break; a map of the network looks like the inside of a baby grand, strand after strand of cable stretching across the ocean floor like so many piano wires. But if several cables snap in chorus, as they did several times in the past two years, big problems result.
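That rerouting is, at bottom, a routing problem on a graph. The sketch below is a minimal illustration, not any provider’s actual system; it assumes the Python networkx library and an invented four-node topology, and shows that when the direct link disappears, traffic simply follows the next-shortest path.

```python
# Minimal illustration of rerouting around a cable break, assuming the
# networkx library. The topology and node names are invented for the example.
import networkx as nx

# Toy cable map: nodes are landing points, edges are cables.
G = nx.Graph()
G.add_edges_from([
    ("UK", "Ireland"),
    ("UK", "France"),
    ("France", "Spain"),
    ("Spain", "Ireland"),
])

print(nx.shortest_path(G, "UK", "Ireland"))  # ['UK', 'Ireland'] -- the direct cable

# A trawler tears the direct UK-Ireland cable.
G.remove_edge("UK", "Ireland")

# Traffic still gets through, just along a longer path.
print(nx.shortest_path(G, "UK", "Ireland"))  # ['UK', 'France', 'Spain', 'Ireland']
```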

Last December 19, when three cables under the Mediterranean Sea were damaged, Internet service began to wink out across the Middle East and parts of Southeast Asia. Egypt suffered terribly, losing as much as 80 percent of its network. E-mail and Web access were disrupted in Saudi Arabia and other Gulf states, while services fluttered in countries as far away as Malaysia and Taiwan. India’s enormous outsourcing industry—the customer-service backbone of the Western world—was also hampered, with the humble fax machine making a brief but crucial comeback until traffic was rerouted around the breaks. The same thing had happened earlier that year, in January and February, disrupting Internet access to homes and businesses throughout the region for days.

The incidents reveal a surprising fact about the Internet: it requires constant physical maintenance. Without people like Rennie patching cables, the entire network would gradually grind to a halt. First, traffic would slow to a crawl as more bits were crammed into fewer and fewer cables. Then, after a while, isolated service failures like the ones in the Middle East would pop up. Eventually, as line after line went dark, U.S. businesses would be cut off from their outsourced functions abroad, international e-mail traffic would halt, and global financial transactions would cease. Pockets of connectivity would persist, but ultimately the Internet we rely on to stay in touch with the rest of the world would be reduced to the local-area network in your office.


Where is the Web?

As Wi-Fi hotspots proliferate, making wireless connections commonplace, many people have come to regard the Internet as something that’s simply in the air. Ask the average person how it’s carried, and they are likely to mumble something about satellites.

But satellites carry less than 10 percent of all Internet traffic. The Internet is, in fact, inside the more than 500,000 miles of undersea cables like the ones Rennie fixes. It is in the hundreds of Internet hubs around the world, concrete landing points where these cables come ashore and branch back out again through terrestrial networks. It is in the hundreds of thousands of miles of land-based cables that crisscross the continents, bringing the Web to individual businesses and homes. The Internet is actually a vast physical infrastructure, awesome in its complexity–and its vulnerability.

“Most people don’t realize how information moves around the globe,” says Paul Kurtz, a former member of the National Security and Homeland Security councils who now advises corporations and governments on critical infrastructure protection with Good Harbor Consulting. “The telecommunications network has morphed into the Internet, and there are vulnerabilities all along the line.”

Cyber-attacks like those launched against the republic of Georgia during last summer’s war with Russia will continue to grab headlines, but attacks on the Internet’s physical infrastructure could be even more devastating. “Physical attacks are less likely, but they are more damaging and harder to recover from,” says Don Jackson, director of threat intelligence for information-security firm SecureWorks in Atlanta. “We are so much better prepared for virtual attacks that [for terrorists] a physical attack is a very attractive alternative.” Given how much of our financial, commercial and social lives have moved online, the repercussions from such an assault—and a resulting widespread failure—would be immense.

A Series of Tubes

At Terremark’s Miami headquarters, undersea Internet cables emerge from the Atlantic and connect to the rest of the country

Choke Points

When the Middle East cables went down the first time back in January and February of last year–three cables were cut within a period of about 48 hours–observers assumed it was sabotage. Why? Because that kind of scenario had been rehearsed before.

“During the Cold War, lots of attention was paid to undersea cables,” says James Lewis, director and senior fellow of the Technology and Public Policy Program at the Center for Strategic and International Studies (CSIS) in Washington, D.C. Communications lines were prime military targets for both sides, and the strategic severing of cables was considered a prelude to full invasion. In the early 1970s, the U.S. even managed to successfully tap a cable on the ocean floor and eavesdrop on Soviet chatter.

None of the Middle East cuts were deliberate, however. The December outage appears to have been caused by undersea seismic activity and, in the January and February incidents, stray anchors were to blame. But according to Lewis, “the [January-February] cuts affected the ability of CentCom [U.S. military Central Command] to send communications from Afghanistan and Iraq. Video and data streams are crucial parts of military operations, and they need that fiber-optic cable infrastructure.” CentCom quickly rerouted around the gaps, but the incident exposed a vulnerability.

The Middle East is particularly prone to faults because the ties that bind it to the rest of the Internet are thin when compared with the connection between the U.S. and northern Europe or Asia. The cables that went down last year carry upward of 75 percent of the traffic between Europe and the Middle East. The shortest cable here is 12,400 miles long, and traffic between sites in southern Europe and sites in Australia, China, Japan and other points east moves through only a handful of places. A single break in this region is immediately noticeable; two could be crippling; three could have been catastrophic had providers not diverted traffic through Asia, away from the cuts off the coast near Alexandria, Egypt.

The Middle East is not the only place where the Internet’s undersea cable network hits a bottleneck. In December 2006, an earthquake ripped cables running through the Luzon Strait, in the South China Sea between Taiwan and the Philippines, disabling 90 percent of the region’s telecommunications capacity. Basic services were restored in a day or two, but full repairs to the cable system took more than a month.

Cable Network

Terremark’s Miami facility is one of the world’s most wired spots

Demand and Supply

The first undersea cable was laid in 1850, under the English Channel between Dover and Calais. It consisted of copper wire waterproofed by a layer of hard, inelastic rubber made from gutta-percha trees. Lead weights were attached to keep it on the seafloor. The cable carried telegraphs for three days—until French fishermen accidentally cut it.

Modern fiber-optic cables are more reliable and numerous. Today, between 250 and 300 cables on the ocean floor are active at any given moment. And as demand for bandwidth grows (international traffic increased 53 percent between mid-2007 and mid-2008, according to research firm TeleGeography), more and more are needed.

Some of the cables Rennie spends his days fixing reach land near Miami. Here, the cables are bundled and shunted along under the old Florida East Coast Railway that ran from Miami to Key West until 1935, when a hurricane wiped it out. The East Coast line was known as the “overseas railroad” because of all the bridges and viaducts it needed to reach Key West. Now it is the preferred track for the undersea cables that surface in central Miami, home to Terremark, one of the most important telecom firms you’ve never heard of.

Terremark’s six-story, 750,000-square-foot headquarters has no windows. They’re unnecessary, since the main occupants of the building are computers: server racks owned by the likes of Deutsche Telekom, Facebook and the U.S. Department of Defense, not to mention the Internet Corporation for Assigned Names and Numbers, the organization that issues domain names, and VeriSign, which provides the infrastructure for secure online financial transactions.

Coastal Connections

The Terremark facility is among the most wired sites in the world. It is one of dozens of Internet exchanges in the U.S., located mostly on the coasts, that gather the undersea cables and disperse them over land across the country. They are the indispensable links in America’s–and the world’s–telecom network. Some 90 percent of the Internet traffic between North America and Latin America goes through Terremark in Miami, for example, and according to TeleGeography, that traffic grew 112 percent from mid-2007 to mid-2008.

The cables that enter Terremark from the Atlantic rise up through the floor at various spots like bouquets of plumbing. The main cables splay out along metal trellises to the 160 or so clients that have servers onsite.

Eventually, the cables all return to a glass-enclosed, spotlessly clean space known as the “meet-point room.” The meet-point rooms–there are at least two of them on each floor of the building–are the gateways to the Internet. They are where the cables from individual carriers are patched into the land-based cable network that radiates out from Terremark, connecting service providers with other exchange points, and from there connecting to individual homes and businesses.

Constant Caretakers

Technicians are in the building at all times. They take care of the routine maintenance–tightening a loose connection here, rewiring a patch panel there–needed to keep the Internet running. Even in a hurricane, the building is staffed. Twenty-four hours before a storm hits, all essential personnel–generally, a team of 30 technicians–are already inside. They don’t come out until the hurricane has passed. And everything is monitored from a NASA-like network-operations center elsewhere in the building.

Terremark, like any Internet exchange, is vital to the network. That’s why the walls are made of seven-inch, steel-reinforced concrete that can withstand the 155-mph winds of a Category 5 hurricane. That’s why environmental control is fanatically precise, keeping condensation off the circuits and the thousands of servers cool. That’s why on top of paying $630,000 a month for electricity, the building also maintains its own bank of diesel generators as a backup. And that’s why, if any disaster were to strike Miami, restoring Terremark’s power would have priority along with hospitals and the police. In short, it would be very bad if something happened at Terremark. “If a service provider goes down, it’s terrible,” says Derrick Cardenas, Terremark’s regional vice president of commercial sales. “If we go down, it’s global.”

Scale-Free Networks

Terremark and the other exchanges scattered across the country (Chicago, New York and Los Angeles are just a few of the other locations) are so vital because the Internet is a “scale-free network.” In a scale-free network, connections are not randomly or evenly distributed. Some points have relatively few connections to other points (a single server in the basement of a small business, for example), and some points—known as hubs—have a relatively huge number of connections to other points (Terremark). This ratio of very connected hubs to less-connected points remains roughly the same no matter the network’s size (hence “scale-free”). The hubs are both a strength and a weakness. If one hub fails, the others can take up the slack. If several hubs go out of service, however, whole sections of the network can become isolated.

“The main feature of a scale-free network is that a few highly connected hubs hold the network together,” says Albert-Laszlo Barabasi, director of the Center for Complex Network Research at Northeastern University, who did some of the earliest studies of scale-free networks. “If you remove one hub, the network will not fall apart; the smaller hubs will maintain it. But if you [simultaneously] knock down a sufficient number of hubs, there will be quite a lot of damage.”
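A rough way to see what Barabasi means is to simulate it. The sketch below is illustrative only, assuming the Python networkx library; it builds a Barabasi-Albert scale-free graph and compares how much of the network stays connected after random failures versus the targeted removal of an equal number of the best-connected hubs.

```python
# Minimal illustration (assuming the networkx library) of why hubs matter in a
# scale-free network: random failures leave the network largely connected,
# while removing the same number of top hubs costs noticeably more of it.
import random
import networkx as nx

N = 1000  # size of the toy network

def largest_component_fraction(G, total=N):
    """Fraction of the original nodes still in the largest connected piece."""
    if G.number_of_nodes() == 0:
        return 0.0
    return len(max(nx.connected_components(G), key=len)) / total

def knock_out(G, nodes):
    """Return a copy of G with the given nodes removed."""
    H = G.copy()
    H.remove_nodes_from(nodes)
    return H

# Preferential attachment yields a scale-free degree distribution.
G = nx.barabasi_albert_graph(N, 2, seed=42)

random.seed(0)
random_losses = random.sample(list(G.nodes()), 50)         # 50 random failures
hubs = sorted(G.nodes(), key=G.degree, reverse=True)[:50]  # the 50 biggest hubs

print("after random failures:", largest_component_fraction(knock_out(G, random_losses)))
print("after hub removal:    ", largest_component_fraction(knock_out(G, hubs)))
```

The exact figures vary with the random seed, but the gap between the two cases is the point: losing hubs hurts far more than losing an equal number of ordinary nodes.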

Now Send This


Tubes like these connect you to the rest of the world

Doomsday Scenario

Is there a guaranteed way to eliminate the threat? According to Barabasi, no. This is a property of scale-free networks, he says. “You can’t eliminate this vulnerability. There is no patch for it.”

In the event of major hub failure, Barabasi believes the only option is damage control. He cites research by Adilson E. Motter, an assistant professor in the department of physics and astronomy at Northwestern University, showing that the selective removal of additional hubs immediately following a disaster can contain the damage around the stricken site. By shutting down the hubs most closely connected to the one under attack, you can prevent the failure from cascading through the entire network, Barabasi says. “If you shut down the hubs around an infected hub, the damage can be controlled.”
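A crude way to picture that strategy is with a toy load-redistribution cascade of the kind Motter has studied. The sketch below is illustrative only and assumes the Python networkx library; the load model (betweenness centrality plus a fixed tolerance margin) and the containment rule are simplifying assumptions for the example, not Motter’s actual method.

```python
# Toy cascade model (assuming networkx), illustrative only: a node's load is
# its betweenness centrality and its capacity is that initial load plus a
# tolerance margin. When a hub fails, loads shift; nodes pushed over capacity
# fail in turn, and the cascade runs until the network settles.
import networkx as nx

TOLERANCE = 0.25  # spare capacity margin; an arbitrary illustrative value

def simulate_cascade(G, initial_failures):
    capacity = {n: (1 + TOLERANCE) * load
                for n, load in nx.betweenness_centrality(G, normalized=False).items()}
    H = G.copy()
    H.remove_nodes_from(initial_failures)
    while True:
        load = nx.betweenness_centrality(H, normalized=False)
        overloaded = [n for n, l in load.items() if l > capacity[n]]
        if not overloaded:
            return H
        H.remove_nodes_from(overloaded)

G = nx.barabasi_albert_graph(300, 2, seed=1)
top_hub = max(G.nodes(), key=G.degree)

# Uncontained: let the cascade run after the biggest hub fails.
uncontained = simulate_cascade(G, [top_hub])

# Contained: also shut down a few of the failed hub's busiest neighbors,
# in the spirit of the shut-down-the-neighbors strategy described above.
neighbors = sorted(G.neighbors(top_hub), key=G.degree, reverse=True)[:3]
contained = simulate_cascade(G, [top_hub] + neighbors)

print("nodes surviving, uncontained:", uncontained.number_of_nodes())
print("nodes surviving, contained:  ", contained.number_of_nodes())
```

In a model this small the outcome swings with the tolerance value and the topology; the aim is only to show the shape of the idea: deliberately shutting down nodes so that an uncontrolled overload does not spread.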

Ultimately, the only real defense is to make Internet exchanges impregnable. Terremark’s newest facility is in Culpeper, Virginia, 60 miles southwest of Washington, D.C.—just outside the blast zone should a nuclear strike hit the capital. The facility is surrounded by a 10-foot-high earth berm, guards patrol the perimeter, and the staff includes Department of Defense–trained antiterrorism personnel.

“People have been worried about attacks on [hubs] since the Cold War,” says Lewis of the CSIS. For instance, “since Eisenhower, the telecommunications network has been hardened against nuclear attack.” What keeps SecureWorks’s Jackson awake at night is the prospect of a chemical, biological or dirty-bomb attack on a hub like Terremark. If no one can enter the building to staff the meet-point rooms, and everyone inside is already dead, it won’t be long before things start to fall apart. “There are so many different ways things could go wrong,” he says. “Only one or two hardware faults can cause a cascade of failures that need constant manual intervention to resolve. You’d be lucky to limp along for two days until something catastrophic happens.”

In Case of Code Red

Even something less than an all-out assault—a hybrid virtual and physical attack, for instance—might be enough to bring down an Internet exchange. If terrorists managed to gain remote access to a facility’s command-and-control system, they could, for example, cause the generators to overheat and explode. That would take out the cooling system and, soon enough, the meet-point rooms would be filled with the smell of burning motherboards.

If such attacks happened simultaneously at a sufficient number of hubs, the principles of scale-free networks dictate that the entire Internet could come down. Statistics on these types of assaults are hard to come by, but there were, for example, an average of 2,332 attempted virtual attacks each day on the supervisory control and data acquisition (SCADA) systems of SecureWorks’s utility clients last September, according to the firm. Only a small fraction of these attacks targeted actual command-and-control systems, but the sheer number of attempts is itself a cause for concern.

In fact, a successful command-and-control attack has already taken place in the U.S. In March 2007, the Department of Homeland Security staged an assault on a massive diesel generator of the kind used to run power plants and Internet exchanges. Hackers managed to gain control of the machine and cause it to self-destruct. Even a single exchange attacked in this way would take months to repair, according to John Bambenek, an information-security researcher who scans the Web for cyber-attacks as an incident handler with the Internet Storm Center, an early-warning network staffed by volunteers. “If two or three went out, you would run into manpower problems,” he says. “There is not enough staff anywhere to do it. We are not as redundant as we think we are.”
