It’s 2 a.m. on November 3. The polls have been closed for hours, but the election has yet to be called. Around the country, reports of snafus with new electronic voting machines have been pouring in; no one is sure how these problems have affected the results. In Maryland, machines failed to boot up, and voters were turned away for hours. In South Carolina, officials bought machines too late for adequate testing, and on many of the onscreen ballots, the presidential contest included names of candidates from local elections. Several Texas counties are thousands of votes short because a bug in the software failed to record Spanish-language ballots. Pundits are already clamoring for a recount potentially larger than that of 2000.
But this time, there will be no hanging chads to contend with. In fact, for hundreds of thousands of votes, there will be no paper record at all. Ballots cast on many of the new touch-screen machines disappeared into computer memory or onto smartcards, leaving behind no paper trail to audit. Officials can print the results that have been saved in the machines, but there’s no way to know if that’s an accurate reflection of the votes that people actually cast. Adding to the chaos, one network news reporter has received a tip that mercenary hackers were hired to alter the code of a particular brand of machine so that every 10th vote for Candidate A was recorded as a vote for Candidate B. Meanwhile, in Colorado, another group of hackers is boasting that they stole a box of electronic smartcards used to activate e-voting machines and reprogrammed them to allow multiple votes, just for fun—the way someone might hack a videogame. Or, in 2004, a presidential election.
This is a worst-case scenario, but it’s not a fairy tale. When one third of the country’s voters walk into booths containing electronic voting machines this November, many of them will have no idea if their vote is being recorded accurately or if it is being lost to malfunction or fraud. “I don’t think the technology exists to make entirely trustworthy [electronic] voting systems,” says Stanford University computer scientist and e-voting expert David Dill.
Why? Put simply, e-voting machines are computers, and as we well know, computers sometimes fail. When those failures occur, there is often only the highly fallible digital record to rely on, because adding a paper trail was deemed too expensive or unnecessary. Several grassroots groups are working to prevent an election-swinging symphony of disasters like the scenario above, but there’s just no way to fix, test, and secure all of the tens of thousands of e-voting machines in time. This presidential election may well be a crapshoot.
Welcome to the age of high-tech voting.
From Bad Paper to No Paper
Ironically, it was the ambiguity of the old-fashioned paper trail that forced officials to put their trust in electronic machines. After the 2000 election hung literally by a chad, Congress passed the 2002 Help America Vote Act (HAVA). It included a $3.9-billion payout to improve the country’s voting infrastructure, with most of that aimed directly at converting those pesky punch-card devices into shiny new e-voting machines. The catch: States that wanted a piece of the pie would have to upgrade before 2006. Historically accustomed to a chronic lack of funding, state elections officials were eager to bring the voting process into the 21st century. “There was a mad rush to go to [e-voting machines] in the wake of HAVA,” says computer scientist Michael Shamos of Carnegie Mellon University. “But people didn’t know the machines. They didn’t have a clue.”
When local officials turned to the government for guidance, they were met with bureaucratic buck-passing. HAVA outlined no technical guidelines for the new machines. Instead it created a body called the Elections Assistance Commission (EAC) to do that. In June of this year, the EAC created another group, the Technical Guidelines Development Committee (TGDC), which is just beginning its efforts—the group met once in July and once in September. Election officials wound up with a lot of acronyms, a pocketful of money, and no advice on how to spend it.
Without a formal process for vetting machines, officials in nearly 30 states were charmed by presentations from top e-voting vendors such as Diebold, ES&S, Sequoia and Hart InterCivic, which were happy to step in and provide their own brand of expertise. For them, HAVA meant sales.
The most popular e-voting machines are called direct recording electronic devices, or DREs. They usually look like large flat-screen computer monitors, sometimes with a few navigation buttons along the side. Eager to get away from the confusing “butterfly ballots” of 2000, officials were excited by the idea of voting machines with easily readable touchscreens that anyone could use. What could be simpler than stepping into a voting booth and touching the places on a screen where you want to make a check mark? On most models, there is even a big “vote” button to press to submit your ballot. The “direct” part of DRE means the votes go directly into the computer’s memory, but companies such as Diebold and ES&S assured state officials that their proprietary software and foolproof designs would make the election run smoothly, even without a paper trail.
In virtually every state, officials failed to invite outside technical experts to participate in the process of e-voting machine selection. (The voting rights group Texas Safe Voting Coalition discovered a videotape of a meeting of voting officials and Diebold representatives in which one confused official is heard saying, “I just want to make sure this machine can add.”) Without input from the tech-savvy, elections officials did what most consumers do when they go to an electronics store to shop for a new gadget: They listened to what the salespeople said and hoped they were telling the truth.
To be fair, the vendors hadn’t really lied about their systems; they had just neglected to mention that the machines could crash the same way a home PC does. But elections officials found this out soon enough. In 2002, ES&S’s popular iVotronic machines didn’t register 436 ballots in a North Carolina election, and the same machines failed to record more than 100 votes in this year’s Florida congressional race. In the morning hours of a primary election this past March, voters were turned away from 55 percent of voting sites in San Diego because of battery problems with Diebold machines. (The machines were back online within a few hours, but it’s impossible to know what proportion of the stymied would-be voters ultimately returned to cast a vote.) And in a New Jersey election in June, the vote-tabulation computer could not read data from smartcards that had recorded votes cast on Sequoia AVC Edge machines. When workers tried to read data off the cards, the system showed zeros.
Last year two computer scientists—Avi Rubin of Johns Hopkins University and Dan Wallach of Rice University—investigated the proprietary software code that runs Diebold’s best-selling AccuVote-TS machine, which had been naively posted to an insecure FTP site and subsequently disseminated on the Web.
Rubin and Wallach found that there were no safety mechanisms in the software to prevent people from casting unlimited votes. Shocked, they wrote that the e-voting machine was “far below the most minimal security standards.” Later a state-funded Maryland study revealed that Diebold software was designed to run on a version of Windows that was out of date and therefore extremely vulnerable to security breaches. Earlier attempts to upgrade the machines to a newer version of Windows 2000 caused the Diebold software to crash.
What officials would have discovered if they had consulted experts like Rubin and Wallach earlier is that “computer security” involves more than keeping machines in a locked room (although it means that too). Software is deemed secure when it can keep doing its job correctly even when users feed it unexpected information or it’s under attack from a hacker or virus. Computer security specialists comb through line after line of code, checking for dangerous glitches—anything from poorly written commands that cause the machine to stop recording votes when it reaches a certain number, to an error that just makes the machine crash randomly.
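The first kind of glitch is easy to picture in code. A hypothetical sketch (the single-byte counter and the names here are illustrative, not drawn from any real machine’s software):

```python
# Hypothetical illustration, not real voting-machine code: a tally
# stored in a single unsigned byte silently wraps after 255 votes.
def record_vote(tally: int) -> int:
    """Add one vote, storing the result in an 8-bit field."""
    return (tally + 1) % 256  # the bug: the 256th increment wraps to 0

tally = 0
for _ in range(300):   # 300 voters cast ballots...
    tally = record_vote(tally)
print(tally)           # ...but the machine reports only 44
```

A bug like this passes every small-scale demo, then quietly discards votes on the one day the counter runs high.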
In addition to being made reliable, software has to be protected from being altered by malicious hackers. What if somebody plugged another device into the e-voting machine and reprogrammed it to shift election results? In a truly secure system, it’s very difficult for someone to make changes without being detected.
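One standard defense is to publish a cryptographic fingerprint of the certified software and compare the copy installed on each machine against it before the polls open. A minimal sketch using Python’s standard hashlib; the “software images” here are stand-in strings:

```python
import hashlib

def fingerprint(image: bytes) -> str:
    """Return the SHA-256 digest of a software image."""
    return hashlib.sha256(image).hexdigest()

certified = b"certified voting software, build 1.0"
installed = b"certified voting software, build 1.0"  # read off the machine

# Any change to the installed code, however small, changes the digest.
print(fingerprint(installed) == fingerprint(certified))      # True
print(fingerprint(b"altered code") == fingerprint(certified))  # False
```

In practice the reference fingerprint must itself be digitally signed, or an attacker who alters the code could simply publish a matching digest alongside it.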
Because none of the major vendors of e-voting machines release their code for security testing, states and counties are forced to trust vendors’ own assessments of their machines’ reliability. But the DRE companies have the same problem their clients do: cash flow. There’s not a fortune to be made in voting machines. The primary customers depend on government money, and even with HAVA, that’s in short supply. Independent security audits are notoriously expensive—auditing a single make of machine might run into the hundreds of thousands of dollars—and without customers or a federal mandate demanding them, vendors haven’t judged the investment worth the cost.
To spare that expense, some have suggested that voting machine code be made “open source”—that is, available to anyone who wants to check it. (The theory behind open source is that the more people who test the code, the better it gets. OpenBSD, an open-source computer operating system, is widely agreed to be one of the more secure operating systems in the world and is used by several government organizations, including NASA.) So far, none of the major vendors has agreed to release its code to the public for fear of competitors stealing trade secrets.
Bev Harris, a voting rights advocate and an early critic of DREs, suggests that the current state of software insecurity has left us with a “black box” scenario: “You send your vote into a machine, but you don’t know what’s happened to it.”
There is one widely accepted and simple solution to the black-box problem, and it is already being employed by other computers involved in high-stakes transactions, such as ATMs and credit-card readers: a paper receipt that ensures an accurate physical record of your vote.
In e-voting-speak, this is known as a voter-verified paper audit trail (VVPAT) or, sometimes, the Mercuri method.
Popularized by and named after Harvard University research fellow and computer scientist Rebecca Mercuri, the VVPAT system requires DREs to include printers that produce a paper receipt under glass. Voters review their choices, hit “OK,” and the paper falls into a lockbox. After the election, officials have something tangible to count.
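The Mercuri method amounts to a simple commit protocol: nothing is recorded until the voter approves the printed slip. A schematic sketch, with hypothetical names standing in for the printer and the two records:

```python
memory, lockbox = [], []   # electronic record and paper record

def print_under_glass(choices):
    # Stand-in for the printer: the voter can read the slip but not touch it.
    return "RECEIPT: " + ", ".join(choices)

def cast_ballot(choices, voter_approves):
    """Commit the vote only after the voter verifies the printed receipt."""
    receipt = print_under_glass(choices)
    if voter_approves(receipt):
        memory.append(choices)    # vote enters computer memory
        lockbox.append(receipt)   # paper drops into the lockbox
        return "accepted"
    return "rejected"             # receipt is voided; the voter starts over

cast_ballot(["Candidate A"], lambda receipt: True)
print(len(memory) == len(lockbox))  # True: the two records stay in step
```

The point of the design is that the paper and electronic records are committed together or not at all, so a later hand count of the lockbox can check the machine.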
Earlier this year, Nevada Secretary of State Dean Heller made a deal with e-voting-machine vendor Sequoia to retrofit the state’s DREs with printers that produce a paper trail using the Mercuri method. (Most current DREs were built without printers to keep the machines’ cost down.) During tests, paper tallies revealed that votes cast in Spanish on two ballot measures had not been recorded into machine memory. Without a VVPAT, the error might never have been discovered. The machines’ first statewide real-world use, in a September primary, was widely heralded as a success.
Another, less expensive remedy is to identify any problems in a machine’s code through extensive aftermarket testing by volunteers. “Logic and accuracy” tests take place a few days or weeks before an election. Typically, several machines throughout the county are chosen at random, and two volunteer testers (often a Democrat and a Republican) are assigned to each one. One votes, while the other notes how the tester voted and how the machine recorded the vote. The records are then compared for anomalies.
On Election Day, random e-voting machines are taken out of the polling places throughout the day and subjected to a more rigorous version of this test. This “parallel testing” helps ensure that the machines haven’t been tampered with since the last test and that hackers haven’t programmed them to behave differently only on Election Day.
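A logic-and-accuracy test boils down to comparing a script of known votes against what the machine reports. A simplified sketch, in which an ordinary tallying function stands in for the machine under test:

```python
from collections import Counter

def run_test(machine_tally, scripted_votes):
    """Feed a known script of votes to the machine; report any mismatches."""
    expected = Counter(scripted_votes)
    reported = machine_tally(scripted_votes)
    return {c: (expected[c], reported.get(c, 0))
            for c in expected if expected[c] != reported.get(c, 0)}

# An honest machine tallies exactly what was cast.
honest = lambda votes: Counter(votes)
script = ["A", "B", "A", "A", "B"]
print(run_test(honest, script))   # {} -- no anomalies, the machine passes
```

A rigged or buggy machine shows up as a nonempty report: each entry pairs the votes the testers cast with the count the machine claims.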
These kinds of security measures are currently voluntary, but the forthcoming federal guidelines are expected to make them mandatory. The Institute of Electrical and Electronics Engineers (IEEE), a standards-making group for electrical engineering and computer sciences, has created a task force to come up with just such a guideline, which members of the EAC hope to adopt. But the IEEE effort is moving slowly because of internal bickering among the task force, which includes both voting rights activists and representatives of some of the largest voting-machine vendors in the country. At issue: whether a paper trail should be part of the mandate. If it is, vendors will have to recall existing machines to graft on printers, at significant expense to the company or the state.
Saving the 2004 Election
While officials argue about protocols, computer scientists, legislators and concerned citizens are doing what they can to avert a 2004 Election Day digital debacle.
California Secretary of State Kevin Shelley, in consultation with experts, came up with his own security guidelines, announcing in April that all voting machines in California must produce a paper audit trail or conform to 23 specific conditions before the presidential election. These include allowing the state access to the machines’ source code, as well as parallel testing.
California will also offer a “paper or plastic” choice at the polls. Voters will have the option of using a paper ballot instead of the e-voting machines. In other places where voting rights activists feel the machines are unreliable, they are encouraging people to vote using absentee ballots.
In addition, battles are being fought on the legal front. Eight Maryland citizens filed a suit seeking an injunction against their state’s use of Diebold machines, while several groups, including the Verified Voting Foundation (founded by Dill) and Citizens’ Alliance for Secure Elections, have argued that an Ohio suit challenging election security could be resolved if state officials would mandate the use of paper audit trails. The Texas Safe Voting Coalition has been working with the American Civil Liberties Union to force the Texas voting examiners board to open its meetings so that the public can monitor the choosing of e-voting machines. And two bills that would require e-voting machines to have paper audit trails are making their way through Congress.
On Election Day, a group called TechWatch, made up of computer scientists and other volunteers, will monitor e-voting machines across the country. When trouble strikes—machines crash or suspiciously large numbers of votes for one candidate show up in a low-traffic polling place—the problem will be posted to the group’s Web site, and a TechWatcher will be dispatched to document the problem and attempt to fix it. After the polls close, the group will have a detailed picture of which machines failed and where security may have been breached. If there’s a recount, TechWatch can use this evidence to determine whether concerns about voting machine accuracy in particular areas are well founded.
These stopgap efforts are heartening, but according to Diebold-exposer Rubin, the inherent instability of the current crop of DREs makes it impossible to guarantee a smooth election. “At this point,” he says, “either the machines will work, or they won’t.”