Like so many stories in the world of digital security, this one began with simple human carelessness. In 2006, a senior official in the Syrian government brought his laptop with him on a visit to London. One day, he stepped out of the hotel and left the computer behind. While he was out, agents from Mossad, the Israeli intelligence agency, slipped into his room and installed a Trojan horse on the machine, allowing them to monitor his communications.
For the Syrians, that would have been bad enough, but when the Israelis began to examine the official’s files, a photo caught their attention. It showed an Asian man in a blue tracksuit standing next to an Arab man in the middle of the desert. It could have been an innocuous meeting of friends, even a vacation photo. But Mossad identified the two men as Chon Chibu, a leader of North Korea’s nuclear program, and Ibrahim Othman, director of the Syrian Atomic Energy Commission. When they paired the image with other documents lifted from the hard drive, such as construction plans and photos of a type of pipe used for work on fissile material, the Israelis came to a disturbing conclusion: With aid from North Korea, the Syrians were secretly constructing a facility at al Kibar to produce plutonium, a crucial step in assembling a nuclear bomb. An International Atomic Energy Agency investigation would later confirm their suspicions.
Troubled by this revelation about their openly hostile neighbors, the Israelis mounted Operation Orchard. Just after midnight on September 6, 2007, seven Israeli F-15I fighter jets crossed into Syrian airspace. They flew hundreds of miles into enemy territory and dropped several bombs, leveling the Kibar complex. The Syrian air-defense network never fired a shot.
The Syrian radar operators hadn’t turned traitor that night; their technology had. If planting the Trojan horse on the Syrian official’s laptop was an act of cyberespionage—uncovering secret information by digital means—Operation Orchard was its armed cousin. Prior to the bombing, the Israelis had penetrated the Syrian military’s computer network deeply enough to monitor their adversaries’ actions. More important, they were able to inject their own data streams into the air-defense network. Once inside, they fed Syrian radar operators a false picture of the sky, convincing them that all was well even as enemy jets flew deep into their airspace. By effectively turning off Syria’s air defenses for the night, the Israelis gave the world a chilling glimpse of the future of cyberwar.
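The weakness that kind of operation exploits is easy to see in miniature: a console that trusts whatever data arrives on its feed will display whatever an intruder sends. The sketch below is a purely hypothetical toy, not a description of Syrian air-defense software or of the Israeli technique; it only illustrates how an unauthenticated sensor feed lets an attacker swap an “all clear” picture for the real one.

```python
# Toy illustration of why an unauthenticated sensor feed can be spoofed.
# Nothing here resembles real air-defense software; the point is simply that
# the console renders whichever track report reaches it last.

from dataclasses import dataclass

@dataclass
class RadarFrame:
    source: str   # who the frame claims to come from (never verified below)
    tracks: list  # list of (bearing_degrees, range_km) contacts

def render(frame: RadarFrame) -> str:
    """The operator's console: it trusts and displays the incoming frame as-is."""
    if not frame.tracks:
        return "ALL CLEAR - no contacts"
    return "CONTACTS: " + ", ".join(f"{b} deg / {r} km" for b, r in frame.tracks)

# What the radar actually detects: inbound aircraft.
real_frame = RadarFrame(source="radar-site-7", tracks=[(278, 112), (281, 108)])

# What an intruder on the network injects instead: an empty, reassuring picture.
spoofed_frame = RadarFrame(source="radar-site-7", tracks=[])

print("without interference:", render(real_frame))
print("with injected frame: ", render(spoofed_frame))
```

A feed that carried even basic authentication, letting the console distinguish a genuine report from an injected one, would close this particular hole, which is why data integrity now matters to militaries as much as secrecy does.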
A New Type of Warfare
The mainstream media uses the term “cyberwarfare” to describe everything from large-scale Web-based crime to the latest online maneuvering in places like Ukraine, but few outlets have explained how it applies to actual military operations. When nations gain the ability to unleash their armed forces onto a digital battlefield, they have the potential to reshape warfare much as militaries did a century ago, when aircraft first opened up the sky.
Today, more than 100 of the world’s militaries have some sort of organization in place for cyberwarfare. The Fort Meade complex in Maryland, which is home to the National Security Agency (NSA) and U.S. Cyber Command, contains more personnel than the Pentagon, while Datong Road in Shanghai is the reported home of Unit 61398, a Chinese group linked to hacks on everything from U.S. military communications to the New York Times’s internal email. These organizations’ size, scale, training, and budgets differ, but they share the same goals: to “destroy, deny, degrade, disrupt, [and] deceive,” in the words of the U.S. Air Force. At the same time, they aim to defend against the enemy’s use of cyberspace for the same purpose. Among military planners, the paradigm is known as the “five D’s plus one.”
Interest in this type of capacity is skyrocketing. In the 2012 U.S. defense budget, for instance, the word “cyber” appeared 12 times. This year, it showed up 147 times. New funding included everything from work on covert infiltrations similar to Israel’s Operation Orchard to broader efforts like Plan X, a $110 million program that, according to one published report, will help war planners rapidly assemble and launch online strikes and make cyberattacks a more routine part of U.S. military operations. Officials are also engaging in broader debates, such as how such units should be organized. One proposal is to place them within an entirely new military service, similar to how the U.S. War Department a century ago organized air-based units under the Signal Corps (and later the Army Air Corps) before forming the Air Force.
No matter how those debates play out, what many call a new type of warfare actually has much in common with traditional combat operations. The computer is just another weapon in the arsenal. As with the spear or the airplane, it’s a tool to help achieve the goals of any given operation.
Before battle begins, a smart commander starts by gathering intelligence. In World War II, the Allies’ ability to crack Axis radio codes proved crucial to victory. As the Israelis showed with Operation Orchard, intercepting digital communications is still the first step in modern warfare, because infiltrating networks and gathering information lays the groundwork for more aggressive action. Militaries on both sides of the Pacific have turned to these tactics as tensions have escalated in recent years. Chinese hackers have reportedly targeted U.S. armed forces networks for intelligence on anything from unit-deployment schedules to the logistics status of American bases in the Pacific. And as the NSA documents leaked by Edward Snowden showed, U.S. cyberunits are working equally hard to gather information about their potential adversaries in China.
What makes digital warfare different from past intelligence programs is how fluidly operations can shift from merely collecting information to taking aggressive action. Unlike World War II code breaking, cyberattacks offer the potential not just to read the enemy’s radio signals but to seize control of the radio itself.
As the Israelis demonstrated, if war planners can compromise an enemy’s networked communications, they move from knowing an adversary’s actions, which is a major advantage on its own, to potentially changing them. Hackers could disrupt an enemy’s command and control, barring officers from sending out orders and units from talking to each other, or they could prevent individual weapons systems from sharing critical information. More than 100 American defense systems, from aircraft carriers to individual missiles, rely on GPS coordinates during operations. In 2010, a software glitch knocked 10,000 military GPS receivers offline for more than two weeks, meaning everything from trucks to the Navy’s X-47 prototype combat drone suddenly couldn’t determine their locations. Cyberwarfare could turn such a software error into a deliberate act, causing mass confusion and miscommunication. Earlier this year, for example, Ukrainian forces in Crimea found themselves cut off electronically from their commanders during the Russian occupation. Isolated, outgunned, and unsure what to do next, they surrendered without a fight.
But disabling or jamming an adversary’s networked communications is “loud,” to use cyberterminology. In other words, the effect of the attack is obvious, so the victim knows the system is compromised. A subtler attacker might instead seek to corrupt information inside the target’s systems, sowing erroneous reports that appear to come from within the organization. The military has traditionally used the term “information warfare” to describe operations that aim to influence an enemy’s decision-making. The objectives might be highly strategic, such as planting false orders that appear to come from top leaders, or more tactical, as when the Israelis co-opted the Syrian air-defense network.
Such attacks on the data itself, rather than just the flow of it, could have immediate battlefield consequences—but they could have even more impact in the long term. Military communications rely on trust. By corrupting that trust, a hacker compromises not only computer networks but also the faith of those who rely on them. Only a relatively small percentage of attacks would need to succeed in order to plant seeds of doubt about any electronic information. Users would begin to question and double-check everything, slowing decision-making and operations to a crawl. In the most extreme scenario, a breach of confidence could lead militaries to abandon networked computers for any critical information, setting their capacity back decades. According to one military planner, “It could take forces back to a pre-electronic age.”
Such technological abstinence sounds unthinkable, especially when computers have proven so useful in modern war. But imagine you had to get a memo to your boss, and your job depended on its arriving intact. Would you email it if there were a 50 percent risk of it being lost or changed en route? Or would you just hand-deliver it? What if the risk were 10 percent? How about even 1 percent? Now apply the same risks to a situation in which it’s not your job at stake, but your life. How would your behavior change?
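The arithmetic behind that thought experiment is simple enough to sketch. The snippet below is a minimal, made-up model: it compares the expected cost of a fast channel that silently fails some fraction of the time against the cost of hand delivery. None of the numbers come from any real analysis; they only show how quickly a small tampering risk can make the faster channel not worth trusting.

```python
# A toy model of the memo thought experiment: at what tampering risk does a
# fast-but-corruptible channel stop being worth using? Every number below is
# invented for illustration.

def expected_cost(p_corrupt: float, send_cost: float, failure_cost: float) -> float:
    """Expected cost of a channel that silently fails with probability p_corrupt."""
    return send_cost + p_corrupt * failure_cost

def break_even_risk(send_cost: float, courier_cost: float, failure_cost: float) -> float:
    """Tampering probability at which the fast channel and the courier cost the same."""
    return (courier_cost - send_cost) / failure_cost

EMAIL_COST = 1.0         # effort to send the memo electronically (arbitrary units)
COURIER_COST = 60.0      # effort to hand-deliver it instead
FAILURE_COST = 10_000.0  # the stakes if the memo is lost or altered (your job, or worse)

for p in (0.50, 0.10, 0.01):
    print(f"risk {p:>5.0%}: expected cost of emailing = "
          f"{expected_cost(p, EMAIL_COST, FAILURE_COST):,.0f}")

print(f"hand delivery costs {COURIER_COST:,.0f}; the channels break even at a "
      f"{break_even_risk(EMAIL_COST, COURIER_COST, FAILURE_COST):.2%} risk")
```

Under these assumed stakes, even a 1 percent chance of tampering makes email the costlier choice, which is exactly the kind of calculation a data-corruption campaign is designed to force on its victims.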
Digital Battles of Persuasion
In 2012, a surveillance drone cruised over a stadium in Austin, Texas, following a GPS-guided course on what appeared to be a typical operation. Without warning, the unmanned vehicle swerved off its pre-programmed route, banking hard to the east of its destination. Not long after that the drone made another errant course adjustment, hurtling south, before finally altering its flight so that it was headed straight toward the ground.
Fortunately, this was a test, not a real-life catastrophe. The Department of Homeland Security had recruited a team of engineers from the University of Texas’s Radionavigation Laboratory to see whether it could commandeer an airborne drone by feeding it counterfeit GPS signals, and the group proved up to the challenge.
Drones, or unmanned aerial vehicles, have become one of the most important technologies in war. They provide surveillance and deliver supplies, and they can unleash missiles on unsuspecting targets. The U.S. military has more than 8,000 such aircraft, including the famous Predator and Reaper, and another 80-plus nations now have military robotics programs. Yet removing humans from aircraft has created new and unforeseen vulnerabilities. Every robotic system depends on data links for its operating instructions and on GPS signals for its position. The same technology that allows drones to strike targets thousands of miles away also opens up avenues for disruption or even co-option. Consequently, we’re entering an era of what could be called digital battles of persuasion.
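The principle the Texas team exploited can be shown in a few lines. The sketch below is a rough, hypothetical toy, not a model of any real autopilot or of the university’s technique: a waypoint follower that steers on whatever position fix it receives, and therefore ends up wherever an attacker’s offset pushes it.

```python
# A deliberately simplified picture of why spoofed position data can redirect
# an autonomous vehicle. The "drone" is a toy 2-D waypoint follower that trusts
# whatever fix it is handed; the spoofing offset is invented for illustration.

def step_toward(position, waypoint, speed=1.0):
    """Move one time step from `position` toward `waypoint` at a fixed speed."""
    dx, dy = waypoint[0] - position[0], waypoint[1] - position[1]
    dist = (dx * dx + dy * dy) ** 0.5
    if dist <= speed:
        return waypoint
    return (position[0] + speed * dx / dist, position[1] + speed * dy / dist)

def fly(true_start, waypoint, spoof_offset=(0.0, 0.0), steps=60):
    """Simulate a drone steering on its believed position while moving in the real world."""
    true_pos = true_start
    for _ in range(steps):
        # The autopilot never sees true_pos; it sees a fix shifted by the attacker.
        believed = (true_pos[0] + spoof_offset[0], true_pos[1] + spoof_offset[1])
        commanded = step_toward(believed, waypoint)
        # The motion it commands is applied to where the aircraft actually is.
        move = (commanded[0] - believed[0], commanded[1] - believed[1])
        true_pos = (true_pos[0] + move[0], true_pos[1] + move[1])
    return true_pos

target = (0.0, 50.0)
print("honest fixes:  ends near", fly((0.0, 0.0), target))
# Shift every reported fix 30 units north; the drone dutifully "corrects" itself
# and settles 30 units short of where it was actually supposed to go.
print("spoofed fixes: ends near", fly((0.0, 0.0), target, spoof_offset=(0.0, 30.0)))
```

Real GPS spoofing is far harder than this, involving counterfeit satellite signals rather than a doctored software feed, but the underlying failure is the same: the vehicle has no independent way of checking where it really is.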
No one can co-opt the flight of a bullet, nor has anyone ever been able to brainwash a bomber pilot in midair. But if hackers can compromise robotic weapons systems, they could “persuade” them to do the opposite of what their owners intended. The result would be an entirely new type of combat, in which the goal is not merely to destroy the enemy’s tanks but to make them drive in circles—or even attack each other. In the best-known real-world example of this, the U.S. and Israel reportedly used the Stuxnet virus to sabotage computer-directed Iranian centrifuges. The virus caused the machines to malfunction, setting Iran’s nuclear program back by months. In one 2013 Pentagon war game, players explored how they might use the same kind of weapon to send an enemy navy on what they jokingly called a “Carnival Cruise Line experience.” Instead of launching missiles to destroy the fleet, a Stuxnet-style attack on warships’ engine systems would set a threatening fleet adrift without power.
The potential for these types of attacks is nearly limitless. In 2009, an employee at the Sayano-Shushenskaya dam in Siberia turned on an unused turbine with a few mistaken keystrokes, leading to a massive water release that destroyed the plant and killed 75 people. The disaster was an accident, but an enemy could recreate something like it deliberately—just as Allied planes in World War II and Korea dropped bombs on dams, unleashing floods that devastated miles of enemy territory. The difference in cyberwarfare is that no aircraft would ever have to leave the ground.
Cyberwar Is Civilian War
As in traditional war, what sounds easy in planning can prove hard in execution. Target systems are complicated, and so are the operations needed to exploit them—especially because every battle has at least two sides. As the great military thinkers Sun Tzu and Clausewitz observed, for every tactic and strategy, a savvy foe is developing a counter.
These challenges drive adversaries to pursue what are known as “soft targets.” In theory, war is a contest among warriors. In reality, more than 90 percent of conflict casualties in the last two decades have been civilians. It would not be surprising to see the same dynamic in cyberwar.
The most conventional approach would be to attack any civilian networks and operators that support the military. Those could be private contractors, who provide much of the supply and logistics support to modern armies (contractors made up about half of the American presence in places like Afghanistan and Iraq), or basic infrastructure such as ports and railroads. Just as merchant ships typically made easier targets than warships in past conflicts, civilian computer networks tend not to have the same levels of security as military ones. That makes them particularly appealing marks. In one 2012 Pentagon-sponsored war game, a simulated enemy force hacked a contractor that coordinated and delivered supplies for a U.S. military force. The goal was to transpose bar codes on shipping containers. Had it been a real attack, American field troops would have opened a shipping pallet expecting ammunition only to find toilet paper.
History shows that it’s not just the civilians who provide support for the armed forces who might land in the line of fire. When new technologies like the airplane and long-range missiles expanded military reach beyond the front lines, planners gradually expanded who and what they considered legitimate targets. By the end of World War II, all sides were engaging in strategic bombing against the broader populace, arguing that the best way to end the war was to drive home its costs to civilians. As cyberwarfare becomes a reality, the same grim calculus will likely hold true.
With cyberweaponry still in its infancy, it’s too early to map its full impact. In the early days of aircraft, military planners laid down a number of predictions. Some proved right, like the idea that planes would bomb cities, while others proved woefully wrong—for example, the notion that the craft would render all other forms of war obsolete.
Yet for all the ways it could change how we engage in military operations, cyberwarfare’s greatest legacy may not be any single capability or function. More likely, it will be how this new form of engagement mixes with other battlefield technologies and tactics to create something unexpected. The airplane, tank, and radio all appeared during World War I, but it wasn’t until the Germans brought them together into the devastating blitzkrieg in the next global conflict that they made their lasting mark.
As we watch the situation develop, we’ll be left to ponder a tragic irony. The Internet may have started out as a Defense Department project, but it has since become one of the world’s greatest forces for political, economic, and social change. That dual history should make it unsurprising that cyberspace will play a central role in the future of global conflict, but it should also make us a bit sad. War, even one fought with zeros and ones, will still remain a bitter waste of resources.
This article originally appeared in the September 2014 issue of Popular Science.