A long-form look at the Estonian iVoting experience, and our thoughts on why it's not feasible here at home.
For our elections official stakeholders, Chief Technology Officer John Sebes covers a point that seems to be popping up in discussions more and more. There seems to be some confusion about what "open source" means in the context of software used for election administration or voting. That's understandable, because some election I.T. folks, and some current vendors, may not be familiar with the prior usage of the term "open source" -- especially since it is now used in so many different ways to describe (variously) people, code, legal agreements, and more. So, John hopes to get our Stakeholders back to basics on this.
David Plouffe, President Obama’s top political and campaign strategist and the mastermind behind the winning 2008 and 2012 campaigns, recently wrote a forward-looking op-ed [paywall] in the Wall Street Journal about the politics of the future and how they might look.
He touched on how technology will continue to change the way campaigns are conducted – more use of mobile devices, even holograms, and more micro-targeting of individuals. But he also mentioned how people might cast their votes in the future, and that is what caught our eye here at the TrustTheVote Project. There is a considerable chasm to cross between vision and reality.
Now, let’s turn to Plouffe’s notion of “digital voting.” Honestly, that phrase is confusing and vague. We should know: it catalyzed our name change last year from the Open Source Digital Voting Foundation (OSDV) to the Open Source Election Technology Foundation (OSET).
Alaska's extension of its iVoting venture has piqued the interest of at least one journalist at a highly visible publication. When we were asked for our "take" on this form of iVoting, we thought we should also comment here on this "northern exposed adventure" (apologies to fans of the wacky early-'90s TV series of a similar name). Alaska has been among the states that allow military and overseas voters to return marked absentee ballots digitally, starting with fax, then email, and then adding web upload as a third option. Focusing specifically on the web-upload option, the question was: "How is Alaska doing this, and how do its efforts square with common concerns about security, accessibility, Federal standards, testing, certification, and accreditation?"
In most cases, any voting system has to run that whole gauntlet, through to accreditation by a state, in order to be used in that state. To date, none of the iVoting products has even tried to run that gauntlet.
So what Alaska is doing with respect to security, certification, and a host of other things is, essentially, flying solo.
Their system has not gone through any certification program (State, Federal, or otherwise, as far as we can tell); hasn't been tested by an accredited voting system test lab; and nobody knows how it does or doesn't meet federal requirements for security, accessibility, and other (voluntary) specifications and guidelines for voting systems.
In Alaska, they've "rolled their own" system. It's their right as a State to do so.
In Alaska, military voters have several options, and one of them is the ability to go to a web site, indicate their choices, and have their votes recorded electronically -- no actual paper ballot involved, no absentee ballot affidavit or signature needed. In contrast to the sign/scan/email method of returning an absentee ballot and affidavit (used in Alaska and 20 other states), this is straight-up iVoting.
So what does their experience say about all the often-quoted challenges of iVoting? Well, of course in Alaska those challenges apply the same as anywhere else, and they are facing them all:
- insider threats;
- outsider hacking threats;
- physical security;
- personnel security; and
- data integrity (including that of the keys that underlie any use of cryptography)
In short, the Alaska iVoting solution faces all the challenges of digital banking and online commerce that every financial services titan and eCommerce giant spends big money on every year (capital and expense), and yet still routinely suffers attacks and breaches.
Compared to those technology titans of industry (banking, finance, technology services, or even the Department of Defense), how well are Alaskan election administrators doing on their (by comparison) shoestring budget?
Good question. It's not subject to annual review (like the SAS 70 audits of banks' IT operations), so we don't know. That, too, is their right as a U.S. state. However, the fact that we don't know does not debunk any of the common claims about these challenges. Rather, it simply says that Alaska took on the challenges (which are large) and the general public doesn't know much about how they're doing.
To get a feeling for the risks involved, consider just one point: think about the handful of IT geeks who manage the iVoting servers where the votes are recorded and stored as bits on a disk. They are not election officials, and they are no more entitled to stick their hands into paper ballot boxes than anybody else outside a county elections office. Yet they have the ability (though not the authorization) to access those bits.
- Who are they?
- Does anybody really oversee their actions?
- Do they have remote access to the voting servers from anywhere on the planet?
- Using passwords that could be guessed?
- Who knows?
They're probably competent, responsible people, but we don't know. Not knowing any of that, every vote on those voting servers is actually a question mark -- and that's simply being intellectually honest.
Lastly, to get a feeling for the possible significance of this lack of knowledge, consider a situation in which Alaska's Electoral College votes swing a presidential election, or in which Alaska's Senate race swings control of Congress (not far-fetched, given Murkowski's close call back in 2010).
When the margin of victory in Alaska, for an election result that affects the entire nation, is a low four-digit number of votes, and the number of digital votes cast is similar, what does that mean?
It's quite possible that that many digital votes will be cast in the next Alaska Senate race. If the contest is that close again, think about the scrutiny those IT folks will get. Will they be evaluated any better than every banking data center investigated after a data breach? Any better than Target? Any better than Google's or Adobe's IT management after having trade secrets stolen? Or any better than the operators of unclassified military systems that were penetrated for years by hackers in China, likely supported by the Chinese Army or intelligence organizations?
Instead, they'll be lucky (we hope) like the Estonian iVoting administrators, when the OSCE visited back in 2011 to have a look at the Estonian system. Things didn't go so well. The OSCE found that one guy could have undermined the whole system. Good news: it didn't happen. Cold comfort: that one guy didn't seem to have the opportunity -- most likely because he and his colleagues were busier than a one-armed paper hanger during the election, worrying about Russian hackers attacking again after previously shutting down the whole country's Internet-connected government systems.
So far the threat is remote, and it is still early days even for small-scale usage of Alaska's iVoting option. But while the threat is still remote, it might be good for the public to see more about what's "under the hood" and who's in charge of the engine -- that would be our idea of more transparency.
Wandering off the Main Point for a Few Paragraphs
So, in closing, I'm going to run the risk of being a little preachy here (signaled by that faux heading above); again, probably due to the surge in media inquiries recently about how the Millennial generation intends to cast its ballots one day. Lock and load.
I (and all of us here) am all for advancing the hallmarks of the Millennial mandate of the digital age: ease and convenience. I am also keenly aware there are wing-nuts looking for their Andy Warhol moment. And whether enticed by anarchist rhetoric, their own reality distortion field, or -- most insidious -- the evangelism of a terrorist agenda (domestic or foreign), said wing-nut(s), perhaps just for grins and giggles, might see an opportunity to derail an election (see my point above about a close race that swings control of Congress, or worse).
Here's the deep concern: I'm one of those who believes that the horrific attacks of 9/11 had little to do with body count or the implosion of western icons of financial might. The real underlying agenda was to determine whether it might be possible to cause a temblor of sufficient magnitude to take world financial markets seriously offline, whether doing so might cause a rippling effect of chaos in world markets, and what disruption and destruction that might wreak. If we believe that, then consider the opportunity for disrupting the operational continuity of our democracy.
It's not that we are Internet haters: we're not -- several of us came from Netscape and other technology companies that helped pioneer the commercialization of that amazing government and academic experiment we call the Internet. It's just that THIS Internet, with its current architecture, simply was not designed to be inherently secure or to ensure anyone's absolute privacy (and strengthening one necessarily means weakening the other).
So, while we're all focused on ease and convenience, and we live in an increasingly distributed democracy, and the Internet cloud is darkening the doorstep of literally every aspect of society (and now government too), great care must be taken as legislatures rush to enact new laws and regulations to enable studies, or build so-called pilots, or simply advance the Millennial agenda to make voting a smartphone experience. We must be very careful and considerably vigilant, because it's not beyond the realm of reality that some wing-nut is watching, cracking their knuckles in front of their screen and keyboard, mumbling, "Oh please. Oh please."
Alaska has the right to venture down its own path in the northern territory, but in doing so it exposes an attack surface. Alaskans need not (indeed, cannot) see this enemy from their back porch (I really can't say of others). But just because the enemy cannot be identified at the moment doesn't mean it isn't there.
One other small point: as a research and education non-profit, we're asked why we aren't "working on making Internet voting possible." Answer: perhaps in due time. We do believe that, on the horizon, responsible research must be undertaken to determine how we can offer an additional digital means of casting a ballot alongside the absentee and polling place experiences. That "digital means" might be over the public packet-switched network, or maybe some other type of network. We'll get there. But candidly, our charge for the next couple of years is to update the outdated architecture of existing voting machinery and elections systems, and to bring about substantial but still incremental innovation that jurisdictions can afford to adopt, adapt, and deploy. We're taking one thing at a time, first things first; or as our former CEO at Netscape used to say, we're going to "keep the main thing, the main thing."
I've spent a fair bit of time over the last few days digesting a broad range of media responses to last week's election operations, much of it reaction to President Obama's "we've got to fix that" comment in his acceptance speech. There's a lot of complaining about the long lines, for example, demands for explanations of them, or ideas for preventing them in the future -- and similar for the difficulty that some states and counties face in finishing the process of counting the ballots. It's a healthy discussion for the most part, but one that makes me sad, because it mostly misses the main point: the root cause of most election dysfunction. I can explain that briefly from my viewpoint, and back it up with several recent events. The plain, unvarnished truth is that U.S. local election officials, taken all together as the collective group that operates U.S. federal and state elections, simply do not have the resources and infrastructure to conduct elections that
- have large turnout and close margins, preceded by much voter registration activity;
- are performed with a transparency that supports public trust in the election as accessible, fair, and accurate.
There are longstanding gaps in the resources needed, ranging from ongoing budgets for sufficient staff to adequate technology for election administration, voting, counting, and reporting.
Of course, in any given election there are local elections operations that proceed smoothly, with adequate resources and physical and technical infrastructure. But we've seen again and again that in every "big" election, there is a shifting cast of distressed states or localities (and a few regulars), where administrative snafus, technology glitches, resource limits, and other factors get magnified as a result of high participation and close margins. Recent remarks by Broward County, FL, election officials -- among those with the most experience in these matters -- really crystallized it for me. When asked about the cause of the long lines, their response (my paraphrase) was that when the election is important, people are very interested in the election, and show up in large numbers to vote.
That may sound like a trivial or obvious response, but consider it just a moment more. Another way of saying it is that their resources, infrastructure, and practices have been designed to be sufficient only for the majority of elections, which have less than 50% turnout and few if any state or federal contests that are close. When those "normal parameters" are exceeded, the whole machinery of elections grinds down to a snail's pace. The result: an election that is, or appears to be, not what we expect in terms of being visibly fair, accessible, accurate, and therefore trustworthy.
In other words, we just haven't given our thousands of local election officials what they really need to collectively conduct a larger-than-usual, hotly contested election with the excellence that they are required to deliver, but are not able to. Election excellence is, as much as any of several other important factors, a matter of resources and infrastructure. If we could somehow fill this gap in infrastructure, and provide sufficient funding and staff to use it, there would be enormous public benefits: elections that are high-integrity and demonstrably trustworthy, despite being large-scale and close.
That's my opinion anyway, but let me try to back it up with some recent observations about specific parts of the infrastructure gap, and then about how each might be bridged.
- One type of infrastructure is voter record systems. This year in Ohio, the state voter record system poorly served many LEOs who searched for but didn't find many registered absentee voters to whom they should have mailed absentee ballots. The result was a quarter million voters forced into provisional voting -- where, unlike casting a ballot in a polling place, there is no guarantee that the ballot will be counted -- and many long days of effort for LEOs to sort through them all. If the early, absentee, and election night presidential voting in Ohio had been closer, we would still be waiting to hear from Ohio.
- Another type of infrastructure is pollbooks -- both paper and electronic -- and the systems that prepare them for an election. As usual in any big election, we have lots of media anecdotes about people who had been on the voter rolls, but weren't on election day (that includes me, by the way). Every one of these instances slows down the line, causes provisional voting (which also takes extra time compared to regular voting), and contributes to long lines.
- Then there are the voting machines. For the set of places where voting depends on electronic voting machines, there are always some places where the machines don't start, take too long to get started, break, or don't work right. By now you've probably seen the viral YouTube video of the touch screen that just wouldn't record the right vote. That's just emblematic of the larger situation of unreliable, aging voting systems, used by LEOs who are stuck with what they've got, and no funding to try to get anything better. The result: late poll openings, insufficient machines, long lines.
- And for some types of voting machines -- those that are completely paperless -- there is simply no way to do a recount, if one is required.
- In other places, paper ballots and optical scanners are the norm, but they have problems too. This year in Florida, some ballots were huge: six pages in many cases. The older scanning machines physically couldn't handle the increased volume. That's bad but not terrible; at least people can vote. However, there are still integrity requirements -- for example, voters need to put their unscanned ballots in an emergency ballot box, rather than entrust a marked ballot to a poll worker. But those crazy huge ballots, combined with frequent scanner malfunctions, created overstuffed emergency ballot boxes, and poll workers trying to improvise ways to store them. Result: more delays in the time each voter required, and a real threat to the secret ballot and to every ballot being counted.
Really, I could go on about more and more of the infrastructure elements that, in this election, had many examples of dysfunction, but I expect you've seen plenty already. Why, you ask, is the infrastructure so inadequate to the task of a big, complicated, close election conducted with accessibility, accuracy, security, transparency, and earning public trust? Isn't there something better?
The sad answer, for the most part, is not at present. Thought leaders among local election officials -- in Los Angeles and Austin just to name a couple -- are on record that current voting system offerings just don't meet their needs. And the vendors of these systems don't have the ability to innovate and meet those needs. The vendors are struggling to keep up a decent business, and don't see the type of large market with ample budgets that would be a business justification for new systems and the burdensome regulatory process to get them to market.
In other cases, most notably with voter records systems, there simply aren't products anymore, and many localities and states are stuck with expensive-to-maintain legacy systems that were built years ago by big system integrators, that have no flexibility to adapt to changes in election administration, law, or regulation, and that are too expensive to replace.
So much complaining! Can't we do anything about it? Yes. Every one of these (and other) election infrastructure breakdowns or gaps can be improved, and the improvements, taken together, could provide immense public benefit if state and local election officials could use them. But where can they come from -- especially if the current market hasn't provided them, despite a decade of effort and much federal funding? Longtime readers know the answer: from election technology development that is outside the current market, breaks the mold, and leverages recent changes in information technology and the business of information technology. Our blog in the coming weeks will have several examples of what we've done to help, and what we're planning next.
But for today, let me be brief with one example, with details on it later. We've worked with the state of Virginia to build one part of new infrastructure for voter registration, voter record lookup, and reporting, which meets existing needs and offers needed additions that the older systems don't have. The Virginia State Board of Elections (SBE) doesn't pay any licensing fees to use this technology -- that's part of what open source is about. They don't have to acquire the software, deploy it in their datacenter, and pay additional (and expensive) fees to their legacy datacenter operator, a government systems integrator (GSI). They don't have to go back to the vendor of the old system to pay for expensive but small and important upgrades in functionality to meet new election laws or regulations.
Instead, the SBE contracts with a cloud services provider, which can -- for a fraction of the cost of a legacy in-house government datacenter operated by a GSI -- obtain the open-source software, integrate it with the hosting provider's standard hosting systems, test, deploy, operate, and monitor the system. And the SBE can also contract with anyone it chooses to create new extensions to the system, with competition for who can provide the best service to create them. The public benefits because people anywhere, anytime can check whether they are registered to vote or should be getting an absentee ballot, and not wait, as in Ohio, until election day to find out that they are one of a quarter million people with a problem.
And then the finale, of course, is that other states can also adopt this new voter records public portal, by doing a similar engagement with that same cloud hosting provider, or any other provider of their choice that supports similar cloud technology. Virginia's investment in this new election technology is fine for Virginia, but can also be leveraged by other states and localities.
After many months of work on this and other new election technologies put into practical use, we have many more stories to tell, and more detail to provide. But I think that if you follow along and see the steps so far, you may just see a path towards these election infrastructure gaps getting bridged, and flexibly enough to stay bridged. It's not a short path, but the benefits could be great: elections where LEOs have the infrastructure to work with excellence in demanding situations, and can tangibly show the public that they can trust the election as having been accessible to all who are eligible to vote, performed with integrity, and yielding an accurate result.
In a recent posting, I recalled the old-fashioned, traditional proprietary-IT thinking of vendors leveraging their proprietary data for their customers, and contrasted that with election technology, where the data is public. In the "open data" approach, you do not need to have integrated reporting features as part of a voting system or election management system. Instead, you can choose your own reporting system, hook it up to your open database of election data, and mine that data for whatever reports you want. And if you need help, only a few days of a reporting-systems consultant's time can get you set up quite quickly.
The same applies to what we used to call "ad hoc querying" in the olden enterprise IT days, and what now might be called "data mining". Every report is the result of running one or more database queries and formatting the results. When you can create a new report template ad hoc, then an ad hoc query is really just a new report. With the open-data approach, there is no need to buy any additional "modules" from a voting system vendor in order to be able to do querying, reporting, or data mining. Instead, you have ready access to the data with whatever purpose-built tools you choose.
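To make the "choose your own reporting tools" point concrete, here is a minimal sketch of an ad hoc query against a hypothetical open election-results database. The schema, table names, and vote counts are invented for illustration; a real open-data feed would differ, but the point stands: a new report is just a new query.

```python
import sqlite3

# Hypothetical schema for an open election-results database; real published
# results feeds will have different table and column names.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE results (
    precinct TEXT, contest TEXT, candidate TEXT, votes INTEGER)""")
conn.executemany(
    "INSERT INTO results VALUES (?, ?, ?, ?)",
    [("Downtown-001",   "Mayor", "Flintstone", 120),
     ("Downtown-001",   "Mayor", "Rubble",      95),
     ("Quarrytown-002", "Mayor", "Flintstone",  80),
     ("Quarrytown-002", "Mayor", "Rubble",     110)])

# An "ad hoc query" is just a new report template: here, countywide
# totals per candidate for one contest, highest first.
report = conn.execute("""
    SELECT candidate, SUM(votes) AS total
    FROM results WHERE contest = 'Mayor'
    GROUP BY candidate ORDER BY total DESC""").fetchall()
for candidate, total in report:
    print(f"{candidate}: {total}")
```

No vendor "module" is involved: any off-the-shelf SQL tool against the open data produces the same report.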
Today, I want to underline that point as applied to mobility, that is, the use of apps on mobile devices (tablets, smart phones, etc.) to access useful information in a quick and handy, on-the-go, small-screen form factor. Nowadays, lots of folks want "an app for that," and election officials would like to be able to provide one. But the options are not so good. A proprietary system vendor may have an app, but it might not be what you had in mind, and you can't alter it. You might get a friendly government systems integrator to crack open your proprietary voting system data and write some apps for you, but that is not a cheap route either.
What, in contrast, is the open route? It might seem a detour to get where you want to go, but consider this. With open data, there is no constraint on how you use it, or what you use it with. If you use an election management system that has a Web services API, you can publish all that data to the whole world in a way that anyone's software can access -- including mobile apps -- and including all the data, not just what happens to be available in a proprietary product's Web interface. That's not just open source and "open data" but also "complete data."
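As a sketch of what "anyone's software can access it" looks like in practice, suppose a Web services API endpoint returns a voter record as JSON. The endpoint path and field names below are invented for illustration, not any real product's interface; the point is that a mobile app, a reporting tool, or any other client consumes the same complete data:

```python
import json

# Hypothetical JSON payload, as might be returned by an endpoint such as
# GET /api/v1/voter?name=...  (path and fields are illustrative only).
payload = """{
  "voter": {"name": "Fred Flintstone", "precinct": "Quarrytown-002",
            "registered": true, "absentee_ballot_sent": false}
}"""

# Any client parses the same open, complete data the official
# Web interface uses -- no proprietary "module" required.
record = json.loads(payload)["voter"]
status = "registered" if record["registered"] else "not registered"
print(f'{record["name"]} ({record["precinct"]}): {status}')
```

A mobile app would fetch the payload over HTTPS instead of embedding it, but the consuming code is the same.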
Then, for some basic apps, you can get friendly open-gov techies to make something simple but effective for starters, and make the app open source. From there on out, it is up to the ingenuity of the tens of thousands of mobile app tinkerers and good-government groups (for an example, read about one of them here, and then try the app yourself) to come up with great ideas about how to present the data -- and the more options there are, the more the election data, the public's data, gets used for the public good.
I hope that picture sounds more appealing than closed systems. But to rewind to Proprietary Election Technology Vendors' (PETV) offerings to Local Election Officials (LEOs), consider this dialogue as the alternative to "open data, complete data."
LEO: I'd like to get an election data management solution with flexible reporting, ad hoc querying, a management dashboard, a nifty graphical public Web interface, and some mobile apps.
PETV: Sure, we can provide it. We have most of that off the shelf, and we can do some customization work and professional services to tailor it to your needs. Just guessing from what you asked for, that will be $X for the software license, $Y per year for support, $Z for the customization work, and we'll need to talk about yearly support for the custom stuff.
LEO: Hmmm. Too much for me. Bummer.
PETV: Well, maybe we can cut you a special deal, especially if you lower your sights on that customization stuff.
LEO: Hmmm. Then I'm not really getting all I asked for, but I am getting something I can afford. ... But will you all crack open your product's database with a Web services API so that anybody can write a mobile app for it, for any mobile device in the world?
PETV: Wow! That would be some major customization. I think you'll find our mobile app is just fine.
LEO: What about cracking open the database so I can use my choice of reporting tools?
PETV: Ah, no, actually, and I think you'll find our reporting features are really great.
I'll stop the dialogue (now getting painful to listen to), and actually stop altogether for today, leaving the reader to contrast it with the open-data, complete-data approach of an open election data management system with core functions and features, basic reporting, basic mobility, and above all the openness for anyone to data-mine or mobilize the election data that is, in fact, the people's information.
Continuing our Bedrock election story (see parts one, two, and three if you need to catch up), we find the County of Bedrock Board of Elections staff, including design guru Dana Chisel, in the "ballot design studio," a dusty back room of the BBoE. Chisels in hand, staffers ponder the blank slate, or rather sandstone, of sample ballot slabs on easels. With the candidate and referendum filing periods closed and the election only a couple weeks away, it's time to make the ballots.
Now, you might think that the ballot consists of the 3 items we know of - the race for Mayor, the race for Quarry Commission, and the question on the quarry fee. However, recall that each precinct in Bedrock County has a distinct set of districts. In this election, each precinct has a distinct ballot with a distinct set of contests corresponding to the districts that the precinct is part of. At a first cut, the contests by precinct are:
- Downtown-001: the contest for mayor, and the referendum on quarry fees;
- Quarrytown-002: the contests for mayor and quarry commissioner, and the referendum on quarry fees;
- QuarryCounty-003: the contest for quarry commissioner, and the referendum on quarry fees;
- County-004: the referendum on quarry fees.
You'll note that only Town residents -- in Precincts 1 or 2 -- are entitled to vote for mayor, while residents of the Mineral District -- in Precincts 2 or 3 -- are the only voters entitled to vote for Quarry Commissioner. Last, all voters in the county are eligible to vote on county revenue issues such as taxes and fees imposed by the county.
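Since each precinct's ballot content follows mechanically from district membership, the derivation above can be sketched in a few lines of Python. District and contest names are abbreviated from the story, and the code is an illustrative sketch, not any real election management system:

```python
# Each precinct maps to the set of districts its voters reside in.
PRECINCT_DISTRICTS = {
    "Downtown-001":     {"Town", "County"},
    "Quarrytown-002":   {"Town", "Mineral", "County"},
    "QuarryCounty-003": {"Mineral", "County"},
    "County-004":       {"County"},
}

# Each district contributes its contests to the ballot.
DISTRICT_CONTESTS = {
    "Town":    ["Mayor"],
    "Mineral": ["Quarry Commissioner"],
    "County":  ["Referendum: quarry fees"],
}

def contests_for(precinct):
    """A precinct's ballot contains the contests of every district it lies in."""
    contests = []
    for district in ("Town", "Mineral", "County"):  # a fixed legal ordering
        if district in PRECINCT_DISTRICTS[precinct]:
            contests += DISTRICT_CONTESTS[district]
    return contests

print(contests_for("Quarrytown-002"))
# → ['Mayor', 'Quarry Commissioner', 'Referendum: quarry fees']
```

Running `contests_for` over all four precincts reproduces exactly the four ballot configurations listed above.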
That, plus the list of candidates and the text of the referendum, comprise what might be called the content of each of the 4 ballots, or the ballot configuration. But the ballots themselves need to be designed: the ballot items have to appear in some order, and the candidates likewise; the ballot items have to be arranged in some visual design, vertically or horizontally, with sufficient space between each, fitting the size of ballot slates that they will be etched on … and so on.
So, armed with chisels, the proverbial blank slate, and several tablets stating the legal requirements for contest and candidate order, design guru Dana Chisel marks out a prototype ballot containing all the requisite ballot content, laid out according to usability principles known since the Stone Age (left-justified text, instructions separate from content, instructions with simple words along with pictures, and more). After a few tries and consultation with their boss Rocky, they have a design model for each of the 4 ballots. The next steps are usability testing with volunteer voters, and using the results to create the final slabs that serve as the model for each ballot style. Then they're ready for mass reproduction of ballots for the upcoming election -- get those duplidactylsaurs into action!
Now, you might think that they're ready for election day, but wait, there's more, including the preparation of pollbooks, then early voting, and then eventually election day operations.
Next time: Pollbooks and Early Voting
At the end of our last visit to the fictional Town of Bedrock, we left Fred as he applied to run for mayor. Now we'll continue the story, but with a focus on Bedrock itself, in order to continue building up a detailed, yet simplified, account of actual U.S. election practice. The focus is on Bedrock rather than its colorful denizens, because the answer to the current question -- can Fred be a candidate for mayor in the upcoming election? -- lies partly in the details of Cobblestone County and the Town of Bedrock, and how they are structured and administered for elections. At first glance at the county map, you'll see that the Town of Bedrock lies entirely within Cobblestone County, dividing the county into two regions: the part that is incorporated in the Town, and the unincorporated portion.
Look a bit more closely, though, and you'll see the Mineral District -- not a town but a political division called an electoral district (in some U.S. states, called a jurisdiction rather than a district). The Mineral District is the part of the county that's affected by quarrying operations at the Bedrock Quarry, and the Bedrockites who live there get to elect the Quarry Commission to regulate the Quarry. Look a bit more carefully and you'll notice that part of the Mineral District is in the Town of Bedrock, and the rest is in the unincorporated county.
To keep our Election Tale simple, that's almost all of the electoral structure of Cobblestone County that is the jurisdiction of the Bedrock BoE. The remaining part may be a bit more familiar: the precincts. Each precinct is a region in which all of the voters are entitled to vote on exactly the same ballot items; put another way, in one precinct, all of the voters reside in the exact same set of electoral districts. So in Cobblestone County, there are 4 precincts:
- The "Downtown-001" precinct, part of two districts: the district of the Town of Bedrock, and the district for Cobblestone County;
- The "Quarrytown-002" precinct, part of those same two districts, plus the Mineral District;
- The "QuarryCounty-003" precinct, part of the Mineral District and the County;
- The "County-004" precinct, part of just the district for the County.
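The precinct-to-district relationships above can be captured in a few lines of code. Here is a minimal sketch (the names and data layout are invented for illustration, not taken from any real election system) that treats each precinct as the set of districts its voters reside in:

```python
# Illustrative model of Cobblestone County's electoral structure:
# a precinct is defined by the set of districts its voters reside in,
# so every voter in a precinct sees exactly the same ballot items.
PRECINCTS = {
    "Downtown-001":     {"Town", "County"},
    "Quarrytown-002":   {"Town", "County", "Mineral"},
    "QuarryCounty-003": {"County", "Mineral"},
    "County-004":       {"County"},
}

def districts_for(precinct: str) -> set[str]:
    """Return the districts whose contests appear on this precinct's ballot."""
    return PRECINCTS[precinct]

# A voter in Quarrytown-002 votes on (and may run for) Town, County,
# and Mineral District contests.
print(districts_for("Quarrytown-002"))
```

The "same set of districts" rule is what makes a precinct well-defined: if two neighbors were in different district sets, they would need different ballots, and so by definition would be in different precincts.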
Looking a little more carefully, you'll notice the Flintstone residence is in the Quarrytown-002 precinct, which means the Flintstones (or at least those of them who are registered voters) are eligible to run for offices in either the Town or the Mineral District. To say that more generally, in order to be eligible to run for an office, you have to reside in the district that the office is part of. Fred wants to run for Mayor of the Town of Bedrock, so he has to reside in the Town of Bedrock.
Back at the BBoE, Rocky has completed the eligibility check for Fred, having ensured that:
- he resides in the Town of Bedrock,
- he is registered to vote,
- his current address matches the address in his voter record,
- he is not serving jail time,
and perhaps some other eligibility requirements in Stone Age election law that we are not aware of. Fred is satisfied to find that on the Bedrock slab-site's Upcoming Election slab, he is listed as a candidate for mayor. However, there is also a bit of a surprise: his neighbor Betty Rubble is running against him! And Barney Rubble is running for Fred's old Quarry Commission seat. Also, the commission's clerical errors seem to have been resolved, and the quarry fee referendum will be on the ballot. With a few more days of filing time left, an irritated Fred ponders who, living in the Mineral District, might be convinced to run against Barney.
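Rocky's checklist amounts to a simple predicate over a voter record. The following sketch is purely illustrative: the field names and Fred's record are invented, and real eligibility rules vary by (Stone Age) jurisdiction:

```python
# Hypothetical candidate-eligibility check, mirroring the four checks
# Rocky ran: residency, registration, address match, and no jail time.
def eligible_for_mayor(voter: dict) -> bool:
    return (
        voter["town"] == "Bedrock"                          # resides in the Town
        and voter["registered"]                             # registered to vote
        and voter["address"] == voter["address_on_record"]  # addresses match
        and not voter["in_jail"]                            # not serving jail time
    )

fred = {
    "town": "Bedrock",
    "registered": True,
    "address": "301 Cobblestone Way",
    "address_on_record": "301 Cobblestone Way",
    "in_jail": False,
}
print(eligible_for_mayor(fred))  # True
```

Note that every check reads from the voter record; that's why Rocky has to pull Fred's record from the VRTB before the BBoE can list him as a candidate.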
Next Time: it's time for ballot design - get out your Chisel!
Thanks to some excellent recent presentations by EAC folks, we have today a pleasant surprise of an update to our recent blogs Voting System Decertification: A Way Forward (in Part 1 and Part 2). As you might imagine with a government-run test and certification program, there is an enormous amount of detail (much of it publicly available on the EAC web site!) but Greg and I have boiled it down to a handful of point/counterpoints. Very short version: EAC seems to be doing a fine job, both in the test/certification/monitoring roles, and in public communication about it. At the risk of oversimplifying down to 3 points, here goes:
1. Perverse Incentive
Concern: ES&S's Unity 3.2.0.0 would be de-certified as a result of EAC's investigation into functional irregularities documented in Cuyahoga County, Ohio, by erstwhile elections director Jane Platten (kudos to Cuyahoga). With the more recent product 3.2.1.0 just certified, the "fix" might be for customers of 3.2.0.0 to upgrade to the latest version, with unexpected time and unbudgeted upgrade expense to customers, including license fees. If so, then the product defect, combined with de-certification, would actually benefit the vendor by acting to spur customers toward paid upgrades. Update: Diligent work at EAC and ES&S has resulted in ES&S providing an in-place fix to its 3.2.0.0 product, so that EAC doesn't have to de-certify the product, and customers don't have to upgrade. In fact, as one recent result of EAC's work with Cuyahoga County, the county was able to get money back from the vendor because of the issues identified.
Next Steps: We'll be waiting to hear whether the fix is provided at ES&S's expense (or at least at no cost to customers), as it appears may be the case. We'll also be watching with interest the process by which the fixed 3.2.0.0 product goes through testing and certification to become legal for real use in elections. As longtime readers know, we've stressed the importance of the emergence of a timely re-certification process for products that have been certified and need a field update, in which the previously used test lab re-tests the updated system as rigorously as the first time around, but less expensively and more quickly.
2. Broken Market
Concern: This situation may be an illustration of the untenable nature of a market that would require vendors to pay for expensive testing and certification efforts, and to also have to forego revenue when otherwise for-pay upgrades are required because of defects in software.
Update: By working with the vendor and their test lab on both the earlier and current versions of the product, all customers will be able to obtain a no-fee update of their existing product version, rather than being required to do a for-fee upgrade to a later product version. Therefore, the "who pays for the upgrade?" question applies only to those customers who actually want to pay for the latest version.
Next Steps: Thanks to the EAC's new process of publishing timelines for all product evaluation versions, it should be possible to compare the timeframe for the original 3.2.0.0 testing, the more recent 3.2.1.0 testing, and the testing of the bug-fixed version of 3.2.0.0. We can hope that this case demonstrates that a re-certification process can indeed be equally rigorous, less costly, and more timely.
3. Lengthy Testing and Certification
Concern: The whole certification testing process costs millions and takes years for these huge voting system products of several components and dozens of modules of software. How could a re-test really work at a tiny fraction of a fraction of that time and cost?
Update: Again, thanks to publishing those timelines, and with the experience of recent certification tests, we can see the progress that EAC is making toward their goal that an end-to-end testing campaign of a system take less than 9 months and a million dollars, perhaps even a quarter or a third less. The key, of course, is that a system be ready for testing. As we've seen with some of the older systems that simply weren't designed to meet current standards, and weren't engineered with a rigorous and documented QA process that could be disclosed to a test lab to build on, well, it can be a lengthy process -- or even one that a vendor withdraws from in order to go back and do some re-engineering before trying again.
Next Steps: A key part of this time/cost reduction is EAC's guidance to vendors on readiness for testing. That guidance is another relatively recent improvement by EAC. We can hope for some public information in the future about how the readiness assessment has worked, and how it helped a test process get started right. But even better, EAC folks have again repeated a further goal for time/cost reduction: moving to voting system component certification, rather than certifying the whole enchilada -- or perhaps I should say, a whole enchilada, rather than the whole plato gordo of enchiladas, quesadillas, and chiles rellenos, together with the EMS for it all with its many parts -- rice, frijoles, pico de gallo, fresh guacamole ... (I detect an over-indulgence in metaphor here!)
One More Thing: As we've said before, we think that component-level testing and re-testing is the Big Step to get the whole certification scheme into a shape that really serves its real customers -- election officials and the voting public. And we're proud to jump in and try it out ourselves -- work with EAC, use the readiness assessment ourselves, do a pilot test cycle, and see what can be learned about how that Big Step might actually work in the future.
Today, we'll continue our illustrative story of elections -- and as in the first installment of the story, we'll keep it simple with the setting in the Town of Bedrock. As we tune in, we find Fred Flintstone in downtown Bedrock at the offices of Cobblestone County's Bedrock Board of Elections (BBoE). He's checking up on the rumor that Mayor Flint Eastrock has resigned, and that there is a Special Mayoral Election scheduled. When Fred asks BBoE staffer Rocky Stonerman, Rocky replies, "Of course, Fred! Just check the BBoE's public slab-site." Going back outside the BBoE offices, Fred checks the public slab-site, and sure enough there is a newly posted slab announcing the election. Fred tells Rocky he'd like to run, and Rocky explains how Fred needs to apply as a candidate, and what the eligibility rules are. "Fred, I'll tell you straight up, don't bother to fill out the application; you're not eligible because you're a Quarry Commissioner. If you want to run for Mayor, you'll need to resign first, and then apply as a mayoral candidate."
"Yabba dabba doo - that's what I'm here to do!" Forms and formalities of resignation then taken care of, Rocky gives Fred an application slab and chisel, and then grabs a chisel and runs out to update the Upcoming Elections slab page to include information about the contest for Quarry Commission, Seat #2. In the meantime, Fred has finished his application, and hands it in when Rocky returns. "Did you put me on the candidate list?"
"Of course not, Fred. We have to process your application! Best bet is to come down tomorrow -- I'm going to have to pull your voter record from our voter record tablet-base system. And I've got to tell you, it'll take a while -- the VRTB has thousands of records. We're still running on an unsupported old ScryBase system! Wish we had funding to upgrade, but not so far. Petro tells me that we should look at an open-stone MyScryql system, and …"
Not so interested in Rocky and Petro's slabs and tablets, Fred interrupts, "And what about that referendum?" Rocky replies, "Oh yes! The Quarry Courier brought over an application yesterday, but it didn't have all the commissioners' signatures on the application. You probably want to get the commission to fix that -- unless you want to go the petition route, though you'd need 300 signatures and frankly I don't know if our TBMS has room, because …"
Having heard more than enough about stone-age election technology for one day, Fred beats a hasty retreat. Tune in for the next installment, to find out if Fred actually gets to run for mayor.
You'll often find the term "open source" here, used to describe either the source code for software, or the license that allows you take that source code and use it. But "open data" is just as important. A recent New York Times article read almost like I would have said it, starting with "It's not boring, really!" or to be precise, the title "This Data Isn’t Dull. It Improves Lives". NYT's Richard H. Thaler starts on exactly the right point:
Governments have learned a cheap new way to improve people’s lives. Here is the basic recipe: Take data that you and I have already paid a government agency to collect, and post it online in a way that computer programmers can easily use. Then wait a few months. Voilà! The private sector gets busy, creating Web sites and smartphone apps that reformat the information in ways that are helpful to consumers, workers and companies.
That's exactly the approach to election open-data that we're taking in the next steps of election data management at TrustTheVote. Right now, I have to admit that the current set of election data might actually be fairly boring unless you have an interest in ballot proofing or electoral districting (which we'll get to in a couple more installments in our Bedrock series). But the next step might be more interesting: combining that data with election-result information, which up to now we've managed only in the context of the TTV Tabulator and some common data formats that we're working on with some help from EAC and NIST.
But by adding election result data back into the election definition data, we get the next cool part: a new TTV component that is like the current Election Manager (which would remain deployed privately within a BoE or state), but with only the ability to publicly provide election and election result data via a Web services API. That, in turn, becomes the back end for an election night reporting system Web site and smartphone app.
But perhaps just as important, that API would be publicly accessible to any software, including 3rd party sites and apps, as Mr. Thaler points out. Right now, most election definition data and result data is locked up in the EMS of proprietary voting system products, with a sliver of it published in human-oriented reports and sometimes web content. A new TTV component for Web publication of that information would be a fine first step, but by itself, the data would still be limited to availability in whatever form (however broad) the human-oriented Web interface provides. The really key point, instead, is this:
Not only publish via a Web site, but also make all the data accessible, so anyone can do their own thing to slice and dice the data to gain confidence that the election results are right.
And that point -- confidence -- is where we rendezvous with the NYT's point about data improving people's lives. I'm not sure that open-data for elections can save lives, but I think it can help save some people's faith in election integrity. And now is certainly a trying time, with a variety of election-related litigation news from NY and CO and IN and SC, all seeming to say that election irregularities or outright fraud -- even at the top with IN's highest election official being indicted -- is all over the country.
There's an old saying "one bad apple doesn't spoil the whole barrel" but we should not have to take it on faith for the large barrel of honest diligent election officials and the valid results of their well-run elections. A bit of open-data might actually help.
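To make the "slice and dice" point concrete, here is a hypothetical sketch of what machine-readable election-result open data might look like as JSON. The schema, names, and numbers are all invented for illustration; this is not a real TTV or EAC/NIST format:

```python
import json

# Invented example of open election-result data: an election definition
# with result totals folded back in, ready to publish via a Web services API.
results = {
    "election": "Town of Bedrock Special Mayoral Election",
    "contests": [
        {
            "office": "Mayor",
            "totals": {"Fred Flintstone": 412, "Betty Rubble": 398},
        }
    ],
}

# Publishing this as JSON lets any third-party site or app re-aggregate
# and cross-check the numbers, instead of scraping human-oriented reports.
payload = json.dumps(results, indent=2)
print(payload)
```

The design point is that the same payload serves the official election-night reporting site and any independent auditor's script alike; confidence comes from anyone being able to re-add the numbers.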
Yesterday I wrote about the latest sign of the downward spiral of the broken market in which U.S. local election officials (LEOs) purchase product and support from vendors of proprietary voting system products -- monolithic technology that is the result of years of accretion, costing years and millions to test and certify for use -- including a current case where the process didn't catch flaws that may result in a certified product being de-certified and replaced by a newer system, at the LEOs' cost. Ouch! But could you really expect a vendor in this miserable market to give away new product that it spent years and tens of millions to develop, to every customer of the old product -- customers the vendor had planned to sell upgrades to -- just because of flaws in the old product? But the situation is actually worse: LEOs don't actually have the funding to acquire a hypothetical future voting system product in which the vendor was fully open about true costs, including
(a) certification costs both direct (fees to VSTLs) and indirect (staff time), as well as
(b) costs of development including rigorously designed and documented testing.
Actually, development costs alone are bad enough, but certification costs make it much worse -- as well as creating a huge barrier to entry for anyone foolhardy enough to try to enter the market (or even stay in it!) and make a profit.
A Way Forward?
That double-whammy is why I and my colleagues at OSDV are so passionate about working to reform the certification process, so that individual components can be certified for far less time and money than a mess o'code accreted over decades, including wads of interwoven functionality that might not even need to be certified! And then of course, these individual components could also be re-certified for bug fixes by re-running a durable test plan that the VSTL created the first time around. And that of course requires common data formats for inter-operation between components -- for example, between a PCOS device and a Tabulator system that combines and cross-checks all the PCOS devices' outputs, in order to either find errors/omissions or produce a complete election result.
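As an illustration of what such a common format buys you, here is an invented sketch (the schema is made up for this post, not the actual IEEE 1622/NIST format) of the vote-count dataset a PCOS device might emit and a Tabulator would consume:

```python
# Hypothetical common-format vote-count dataset from one PCOS device.
pcos_output = {
    "device_id": "PCOS-07",
    "precinct": "Downtown-001",
    "ballots_counted": 350,
    "counts": {
        "Mayor": {"Fred Flintstone": 180, "Betty Rubble": 170},
    },
}

# One cross-check a Tabulator can run on any dataset in this format:
# in a vote-for-one contest, candidate counts cannot exceed ballots counted.
for contest, counts in pcos_output["counts"].items():
    assert sum(counts.values()) <= pcos_output["ballots_counted"]
```

Because every device emits the same format, a Tabulator from one supplier could, in principle, consume and cross-check the output of another supplier's scanner -- which is exactly the inter-operation that component-level certification depends on.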
So once again our appreciation to NIST, EAC, and IEEE 1622 for actually doing the detailed work of hashing out these common data formats, which are the bedrock of inter-operation, which is the pre-req for certification reform, which enables cost reduction of certification, which might result in voting system component products being available at true costs that are affordable to the LEOs who buy and use them.
Yes, that's quite a stretch, from data-standards committee work to a less broken market that might be able to deliver to customers at reasonable cost. But to replace a rickety old structure with a new, solid, durable one, you have to start at the bedrock, and that's where we're working now.
PS: Thanks again to Joe Hall for pointing out that the current potential de-certification and mandatory upgrade scenario (described in Part 1) illustrates the untenable nature of a market that would require vendors to pay for expensive testing and certification efforts, and to also have to (as some have suggested) forego revenue when otherwise for-pay upgrades are required because of defects in software.
Long-time readers will certainly recall our view that the market for U.S. voting systems is fundamentally broken. Recent news provides another illustration of the downward spiral: the likely de-certification of a widely used voting system product from the vendor that owns almost three quarters of the U.S. market. The current stage of the story is that the U.S. Election Assistance Commission is formally investigating the product for serious flaws that led to errors of the kind seen in several places in 2010, and perhaps best documented in Cuyahoga County. (See: "EAC Initiates Formal Investigation into ES&S Unity 3.2.0.0 Voting System".) The likely end result is the product being de-certified, rendering it no longer legal for use in many states where it is currently deployed. Is this a problem for the vendor? Not really. The successor version of the product is due to emerge from a lengthy testing and certification process fairly soon. Having the current product banned is actually a great tool for migrating customers to the latest product!
But at what cost, and to whom? The vendor will charge the customers (local election officials, or LEOs) for the new product, the same as it would have if the migration were voluntary and the old product version still legal. The LEOs will have to sign and pay for a multi-year service agreement. And they will have the same indirect costs of staff efforts (at the expense of other duties like running elections, or getting enough sleep to run an election correctly), and direct costs for shipping, transportation, storage, etc. These are real costs! (Example: I've heard reports of some under-funded election officials opting not to use election equipment that they already have, because they have no funding for the expense of taking it out of the warehouse to a testing facility and doing the required pre-election testing.)
Some observers have opined that vendors of flawed voting system products should pay: whether damages, or fines, or doing the migration gratis, or something. But consider this deeper question, from UCB and Princeton's Joe Hall:
Can this market support a regulatory/business model where vendors can't charge for upgrades and have to absorb costs due to flaws that testing and certification didn't find? (And every software product, period, has them).
The funding for a high level of quality assurance has to come from somewhere, and that's not voting system customers right now. Perhaps we're getting to the point where the amount of effort it takes to produce a robust voting system and get it certified -- at the vendor's expense -- creates a cost that customers are not willing or able to pay when the product gets to market.
A good question! And one that illustrates the continuing downward spiral of this broken market. The cost to vendors of certification is large, and you can't really blame a vendor for the sort of overly rapid development, marketing, and sales that leads to the problems being investigated. The folks are in this business to make a profit, for heaven's sake -- what else could we expect?
PS - Part Two, coming soon: a way out of the spiral.
As I said in my recent MLK posting, I'm starting a series of blogs that should provide a concrete example of election management, at a small scale and (I hope) with some interest value. But before I tell a story of election management, we need to first have a story of an election, and this particular election starts with a candidate.
So, let me tell you a little story about a man named Jed -- oops, sorry, a man named Fred*. Fred lives in the Town of Bedrock, and just heard that the famous mayor, Flint Eastrock, has resigned in order to start a new film project. Fred decides to run for mayor in the Special Mayoral Election, because he's ready for the big time, having served on the Quarry Commission for some years. Like modern-day Americans, Bedrockites prefer to elect as many government positions as possible; rather than trusting the Mayor or Bedrock City Council to appoint Quarry Commissioners, the 5 commissioners are elected. So, in the special election, Fred's open seat on the Commission will also be up for election.
Finally, as his last act as Commissioner before resigning to run for mayor, Fred proposes a new referendum about the Quarry: a question for the voters to approve or reject a new usage fee for quarrying -- some needed additional revenue for the quarry upgrade that he hopes will be the centerpiece of his tenure as mayor.
So, there we have an election coming up, with three ballot items:
- An open seat for mayor, which Fred wants to run for;
- An open seat on the Quarry Commission, from which Fred has resigned;
- A referendum on the new quarry usage fee.
That's almost enough for getting started on our Bedrock election story, but we've also seen a bit of Bedrock election law and election administration in action:
- When the office of Mayor is vacant, it is filled by special election, not by appointment or by remaining vacant until the next regular election.
- The Bedrock Board of Election (BBoE) called a special election.
- If a current local office-holder wants to run for a vacant office, he or she must resign from the office they already hold.
- If there is a local referendum pending for the next election, and a special election is called, then the referendum is held during the special election.
Next time, Fred applies to be a candidate for mayor, and gets an earful about how the BBoE works in practice. Fred knows, as I do, that there is always more to learn in election-land!
* My thanks and apologies to David Pogue on this one.
Putting an open source application into service -- or "deployment" -- can be different from deploying proprietary software. What works, and what doesn't? That's a question that's come up several times in the past few weeks, as the TTV team has been working hard on several proposals for new projects in 2011. Based on our experiences in 2009-10, here is what we've been saying about deployment of election technology that is not a core part of certified ballot casting/counting systems, but is part of the great range of other types of election technology: data management solutions for managing election definitions, candidates, voter registration, voter records, pollbooks and e-pollbooks, election results, and more -- and reporting and publishing the data. For proprietary solutions -- off the shelf, or with customization and professional services, or even purely custom applications like many voter record systems in use today -- deployment is most often the responsibility of the vendor. The vendor puts the software into the environment chosen by the customer -- state or local election officials -- ranging from the customer's IT plant, to outsourced hosting, to the vendor's offering of a managed service in an application-service-provider approach. All have distinct benefits, but share the drawback of "vendor lock-in."
What about open-source election software? There are several approaches that can work, depending on the nature of the data being managed, and the level of complexity in the IT shop of the election officials. For today, here is one approach that has worked for us.
What works: outsourced hosting, where a system integrator (SI) manages outsourced hosting. For our 2010 project for VA's FVAP solution, the project was led by an SI that managed the solution development and deployment, providing outsourced application hosting and support. The open-source software included a custom Web front-end to existing open-source election data management software that was customized to VA's existing data formats for voters and ballots. This arrangement worked well because the people who developed the custom front-end software also performed the deployment on a system completely under their control. VA's UOCAVA voters benefited from the voter service blank-ballot distribution, while the VA state board of elections was involved mainly by consuming reports and statistics about the system's operation.
That model works, but not in every situation. In the VA case, this model also constrained the way that the blank ballot distribution system worked. In this case, the system did not contain personal private information -- VA-provided voter records were "scrubbed". As a result, it was OK for the system's limited database to reside in a commercial hosting center outside of the direct control of election officials. The deployment approach was chosen first, and it constrained the nature of the Web application.
The constraint arose because the FVAP solution allowed voters to mark ballots digitally (before printing and returning them by post or express mail). Therefore it was essential that the ballot-marking be performed solely on the voter's PC, with absolutely no visibility by the server software running in the commercial datacenter. Otherwise, each specific voter's choices would be visible to a commercial enterprise -- clearly violating ballot secrecy. The VA approach was a contrast to some other approaches in which a voter's choices were sent over the Internet to a server which prepared a ballot document for the voter. To put it another way …
What doesn't work: hosting of government-privileged data. In the case of the FVAP solution, this would have been outsourced hosting of a system that had visibility on the ultimate in election-related sensitive data: voters' ballot choices.
What works: engaged IT group. A final ingredient in this successful recipe was engagement of a robust IT organization at the state board of elections. The VA system was very data-intensive during setup, with large amounts of data from legacy systems. The involvement of VA SBE IT staff was essential to get the job done on the process of dumping the data, scrubbing and re-organizing it, checking it, and loading it into the FVAP solution -- and doing this several times as the project progressed to the point where voter and ballot data were fixed.
To sum up what worked:
- data that was OK to be outside direct control of government officials;
- government IT staff engaged in the project so that it was not a "transom toss" of legacy data;
- development and deployment managed by a government-oriented SI;
- deployment into a hosted environment that met the SI's exact specifications for hosting the data management system.
That recipe worked well in this case, and I think would apply quite well for other situations with the same characteristics. In other situations, other models can work. What are those other models, or recipes? Another day, another blog on another recipe.
Yesterday, judges in New York state were hearing calls for a hand recount, while elsewhere other vote counts were being factored into the totals, and on the other side of the Atlantic, the same question "where are the election results?" was getting very serious. In the Ivory Coast, as in some places in the U.S., there is a very close election that still isn't decided. There, it's gotten serious, as the military closed off all points of entry into the country as a security measure related to unrest about the close election and lack of a winner. Such distrust and unrest, we are lucky to have avoided here; despite the relatively low levels of trust in U.S. electoral processes (less than half of eligible people vote, and of voters polled in years past a third to a half were negative or neutral), we are content to let courts and the election finalization process wind on for weeks. OK, so maybe not content, maybe extremely irate and litigious in some cases, but not burning cars in streets.
That's why I think it is particularly important that Americans better understand the election finalization process -- which of course, like almost everything in U.S. elections, varies by state or even locality. But the news from Queens NY (New York Times, "A Month After Elections, 200,000 Votes Found"), though it sounds awful in the headline, is actually enormously instructive -- especially about our hunger for instant results.
It's not awful; it's complicated. As the news story outlines, there is a complicated process on election night, with lots of room for human error after a 16 hour day. The finalization process is conducted over days or weeks to aggregate vote data and produce election results carefully, catching errors, though usually not changing preliminary election-night results. As Douglas A. Kellner, co-chairman of the State Board of Elections, said:
The unofficial election night returns reported by the press always have huge discrepancies — which is why neither the candidates or the election officials ever rely on them.
That's particularly true as NY has moved to paper optical scan voting from lever machines, and the finalization process has changed. But even in the old days, it was possible to misplace one or a few lever machines' worth of vote totals with human errors in the paper process of reading dials, writing numbers on reporting form sheets, transporting the sheets, etc. Then, add to that the computer factor for human error, and you get your 80,000-vote variance in Queens.
Bottom line -- when an election is close, of course we want the accurate answer, and getting it right takes time. Using computerized voting systems certainly helps with getting quicker answers for contests that aren't close and won't change in the final count. And certainly they can help by enabling audits and recounts that lever machines could not. But for close calls, it's back to elbow grease and getting out the i-dotters and t-crossers -- and being thankful for their efforts.
More tabulator troubles! In addition to the continuing saga in New York with the tabulator troubles I wrote about earlier, now there is another tabulator-related situation in Colorado. The news report from Saguache County CO is about:
a Nov. 5 “retabulation” of votes cast in the Nov. 2 election Friday by Myers and staff, with results reversing the outcome ...
In brief, the situation is exactly about the "tabulation" part of election management that I have been writing about. To recap:
- In polling places, there are counting devices that count up votes from ballots, and spit out a list of vote-counts for each candidate in each contest, and each option in each referendum. This list is in the form of a vote-count dataset on some removable storage.
- At county election HQ, there are counting devices that count up vote-by-mail ballots and provisional ballots, with the same kind of vote-counts.
- At county election HQ, "tabulation" is the process of aggregating these vote-counts and adding them up, to get county-wide vote totals.
In Saguache, election officials did a tabulation run on election night, but the results didn't look right. Then on the 5th, they did a re-run on the "same ballots" but the results were different, and it appears to some observers that some vote totals may have been overwritten. Then, on the 8th, with another re-try, a result somewhat like the one in NY:
... the disc would not load and sent an error message
What this boils down to for me is that current voting system products' Tabulators are not up to doing some seemingly simple tasks correctly when operated by ordinary election officials. I am sure they work right in testing situations that include vendor staff; but they must also work right in real life with real users. The tasks include:
- Import an election definition that specifies how many counting devices are being used for each precinct, and how many vote-count datasets are expected from them.
- Import a bunch of vote-count datasets.
- Cross-check to make sure that all expected vote-count datasets are present, and that there are no unexpected ones.
- Cross-check each vote-count dataset to make sure it is consistent with the election definition.
- If everything cross-checks correctly, add up the counts to get totals, and generate some reports.
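As a rough illustration of the cross-check-then-total flow above, here is a hedged sketch in Python. The election-definition and dataset structures are invented for illustration; a real system would use richer, standardized formats:

```python
def cross_check_and_total(election_definition, datasets):
    """Refuse to total until every expected dataset is present and
    no unexpected ones appear; only then add up the counts."""
    expected = set(election_definition["expected_devices"])
    received = {d["device_id"] for d in datasets}

    missing = expected - received
    unexpected = received - expected
    if missing or unexpected:
        raise ValueError(
            f"cross-check failed: missing={sorted(missing)}, "
            f"unexpected={sorted(unexpected)}"
        )

    totals = {}
    for d in datasets:
        for candidate, count in d["counts"].items():
            totals[candidate] = totals.get(candidate, 0) + count
    return totals

defn = {"expected_devices": ["dev-1", "dev-2"]}
ok = [
    {"device_id": "dev-1", "counts": {"Myers": 120, "Smith": 98}},
    {"device_id": "dev-2", "counts": {"Myers": 75, "Smith": 110}},
]
print(cross_check_and_total(defn, ok))
# {'Myers': 195, 'Smith': 208}
```

The key design point is that the error path is loud and specific: a missing or extra dataset halts tabulation with a message an election official (or observer) can act on, rather than silently producing wrong totals.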
That's not exactly dirt-simple, but it also sounds to me like something that could be implemented in well-designed software that is easy for election officials to use, and easy for observers to understand. And that understanding is critical, because without it, observers may suspect that the election has been compromised, and some election results are wrong. That is a terrible outcome that any election official would work hard to avoid -- but it appears that's what is unfolding in Saguache. Stay tuned ...
PS: Hats off to the Valley Courier's Teresa L. Benns for a truly excellent news article! I have only touched on some of the issues she covered. Her article has some of the best plain-language explanation of complicated election stuff that I have ever read. Please take a minute to at least scan her work. - ejs
In my last post, I recounted an incident from Erie County NY, but deferred to today an account of the technology troubles that prevented the routine use of a Tabulator to create county-wide vote totals by combining count data from each of the opscan paper ballot counting devices. The details are worth considering as a counter-example of technology that is not transparent, but should be. As I understand the incident, it wasn't the opscan counting systems that malfunctioned, but rather the portion of the voting system that tabulates the county-wide vote totals. As I described in an earlier post, the ES&S system has no tabulator per se, but rather some aggregation software that is part of the larger body of Election Management System (EMS) software that runs on an ordinary Windows PC. Each opscan device writes data to a USB stick, and election officials aggregate the data by feeding each stick into the EMS. The EMS is supposed to store all the data from each stick, and add up all the opscan machines' vote counts into a vote total for each contest.
Last week, though, when Erie County officials tried to do so, the EMS rejected the data sticks. Election officials had no way to use the sticks to corroborate the vote totals that they had made by visually examining the election-night paper-tapes from the 130 opscan devices. Sensible questions: Did the devices' software err in writing the data to the sticks? If so, might the tapes be incorrect as well? Is the data still there? It turns out that the cause was a bug in the EMS software, not the devices, and in fact the data on the sticks was just fine. With a workaround on the EMS, the data was extracted from the sticks and used as planned. Further, the workaround did not require a bug fix to the software, which would have been illegal. Instead, some careful hand-crafting of EMS data enabled the software to stop choking on the data from the sticks.
Now, I am not feeling 100% great about the need for such hand-crafting, or indeed about the correctness of the totals produced by a voting system operating outside of its tested ordinary usage. But some canny readers are probably wondering about a simpler question. If the data was on the sticks, why not simply copy the files off the stick using a typical PC, and examine the contents of the files directly? With 40-odd contests countywide and 100-odd sticks and paper tapes, it's not that much work to just look at them to see whether the numbers on each stick match those on the tapes. Answer: the voting system software is set up to prevent direct examination, that's why! The vote data can only be seen via the software in the EMS. And when that software glitches, you have to wonder about what you're seeing.
This is at least one area where better software design can lead to a higher-confidence system:
- write-once media for storing each counting device's tallies
- use of public standard data formats so that anyone can examine the data
- use of human-usable formats so that anyone can understand the data
- use of a separate, single-purpose tabulator device that operates autonomously from the rest of the voting system
- publication of the tally data and the tabulator's output data, so that anyone can check the correct results either manually or with their choice of software
At least that's the TrustTheVote approach that we're working out now.
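To make the "public standard data formats" point concrete, here is a hypothetical per-device tally published as plain JSON. The field names are illustrative only, not any real standard, but the point stands: anyone could open such a file in a text editor and re-add the counts without vendor software:

```python
import json

# Illustrative per-device tally in a plain, human-readable format.
tally = {
    "device_id": "opscan-042",
    "precinct": "Ward 3",
    "counts": {
        "State Senate 60th": {"Candidate A": 412, "Candidate B": 398},
    },
}

# Published as indented JSON, readable by people and by any software:
published = json.dumps(tally, indent=2)
print(published)

# Anyone can independently re-verify the arithmetic in a few lines:
loaded = json.loads(published)
senate = loaded["counts"]["State Senate 60th"]
print(sum(senate.values()))  # 810
```

Contrast this with the Erie County situation, where the data on the sticks could only be read through the very EMS software whose glitch was in question.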
Behind the election news in Buffalo, NY, there is a cautionary tale about voting system complexity and confidence. The story is about a very close race for the state Senate's 60th district. One news article includes a reference to "software problems with the new electronic voting machines in Erie County." The fundamental issue here is whether to trust the vote count numbers, in a case where the race is very close and where the voting system malfunctioned at least once, because of a software bug later identified by the vendor. If one part of the system malfunctioned, shouldn't we also be concerned that another part may also have malfunctioned? An error on even one of the more than 100 paper-ballot-counting devices could easily swamp the very small margin between the top two candidates.
Those are good questions, and as frequent readers will already know, the typical answer is "audit", that is, hand-counting a portion of the paper ballots to ensure that the hand-counts match the machine counts, using statistical science to guide how many ballots to hand count to achieve confidence that the overall election results are valid. That's what the state of Connecticut -- another recent adopter of paper ballots over lever machines -- is doing with a manual count of ballots from 73 of the 734 precincts statewide.
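To illustrate the sampling idea (this is a sketch, not Connecticut's actual procedure), here is how picking roughly 10% of precincts at random for a hand-count audit might look. The names are invented; real audits also commit to the random seed publicly, in advance, so the selection can't be gamed:

```python
import random

def select_audit_sample(precinct_ids, fraction=0.10, seed=2010):
    """Pick a random sample of precincts for a hand-count audit.

    The seed stands in for a publicly pre-committed source of randomness
    (e.g. dice rolled at a public ceremony), so anyone can reproduce
    and verify the selection.
    """
    rng = random.Random(seed)
    sample_size = max(1, round(len(precinct_ids) * fraction))
    return sorted(rng.sample(precinct_ids, sample_size))

# 734 hypothetical precincts, as in the Connecticut example:
precincts = [f"precinct-{n:03d}" for n in range(1, 735)]
sample = select_audit_sample(precincts)
print(len(sample))  # 73
```

A fixed-percentage sample like this is the simplest form; more sophisticated "risk-limiting" audits size the sample based on the margin of the contest, hand-counting more ballots when the race is closer.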
But that's not happening in Buffalo (as far as I can tell), where instead there is wrangling over doing a full re-count, with confusion over the voting system malfunction muddying the waters. And that's a shame, because election technology properly used (including routine audits) should not cause this kind of legal activity over the validity of an election result -- in this case an important one that could influence party control in the state Senate, with re-districting on the horizon.
But some of the finger-pointing goes to the technology too. What actually malfunctioned? Could the glitch have affected the election result? What can we learn from the incident? Questions for next time ...