Computers already do so much of our work that it seems natural to let them take care of our sabotage, too. This might have been the line of thinking that led to Stuxnet, the first known malware worm designed to disrupt industrial processes.
Stuxnet began spreading in early 2009. It's unclear who wrote the worm, or why, but a consensus is building that a government (probably Israel's) created it to sabotage the Iranian nuclear program. Stuxnet seeks out and silently hijacks industrial control software written by Siemens. Once it infects the computers running that software, it can command uranium-enriching centrifuges to spin out of control, thereby destroying them.
Computers such as these are kept disconnected from the Internet, so getting Stuxnet to its target probably required someone to load the worm onto a USB stick used by a plant employee or contractor. Once that person plugged his USB stick in at work, Stuxnet probably began crawling through the plant's local-area network, searching for the right computer to hijack. Exactly how much damage it did, we don't know, but the Iranian nuclear program did appear to slow, and last November Iranian president Mahmoud Ahmadinejad confirmed that a malicious computer program caused "problems" with centrifuges.
But somehow Stuxnet also made it to the Internet, and by last fall, the worm had infected hundreds of thousands of computers in at least 155 countries. Siemens says that to date, only 15 industrial facilities worldwide have been infected. So far, nothing has happened that we know of, but the potential for mischief is still high.
The control software that Stuxnet is designed to hijack runs pipelines, conveyor belts, boilers, alarm systems and access controls, among other things, and experts warn that it and similar worms could, in theory, blow up factory boilers, destroy gas pipelines, sabotage power plants, and disrupt power grids. Now that Stuxnet has given hackers a sophisticated blueprint for creating such worms, many cybersecurity wonks are thoroughly freaked out.
How We Can Do Better
Cybersecurity consultant Stephen Spoonamore says that we should immediately redesign all critical infrastructure to "fail open," meaning that in the event of a cyber attack, the system will default to a basic operating mode and keep running.
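A minimal sketch of what Spoonamore's "fail open" idea might look like in software: when the controller detects tampering or can no longer trust its inputs, it reverts to a conservative basic operating mode instead of obeying suspect commands or shutting down. All names and thresholds below (SAFE_RPM, next_speed, and so on) are invented for illustration; real industrial controllers are far more complex.

```python
SAFE_RPM = 1000   # conservative speed the hardware can always tolerate
MAX_RPM = 1400    # normal operating ceiling

def next_speed(commanded_rpm, sensor_ok, command_authenticated):
    """Return the speed to actually apply to the centrifuge motor."""
    # Fail open: on any sign of compromise, fall back to the basic
    # operating mode and keep running rather than halting the plant.
    if not sensor_ok or not command_authenticated:
        return SAFE_RPM
    # Even trusted commands are clamped to the physical safety envelope.
    return max(0, min(commanded_rpm, MAX_RPM))

# Normal operation: an authenticated command within limits passes through.
print(next_speed(1200, sensor_ok=True, command_authenticated=True))   # 1200
# Attack scenario: an unauthenticated "spin faster" command is ignored,
# and the system keeps running at its safe default.
print(next_speed(9000, sensor_ok=True, command_authenticated=False))  # 1000
```

The design choice here is the default: a compromised system degrades to a known-safe state and keeps operating, rather than either obeying a malicious command (as the hijacked Siemens software did) or failing closed and stopping entirely.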
The end of all that cyber warfare is of course Skynet, and bye-bye us!
I think Googlenet is going to self-realize in 2023.
Why weren't these things designed fail-open in the first place? And how does someone know if the virus is taking over? And if they figure it out, do they have to push a button or something, or what? I know this is old news, but these are my questions; take them as they are.
I'm sure that the companies supplying process control software are working feverishly on this issue. Physical security in various plants (e.g., restricting USB access) has probably been increased as well.
Will it be enough? Only time will tell, but the Internet has so far survived periods of mass attack fairly well -- and it's by nature a lot less secure.
I've seen this Israeli reference before. Does anyone actually have even a shred of evidence? If not, it's an awfully vicious statement to make. (I have NO connection to Israel.)
As for Mr. Spoonamore, nice generic thought.