
UOCAVA Remote Voting Workshop Makes a Strong Finish

24 hours ago I, along with some others, was actually considering asking for a refund.  We had come to the EAC, NIST, and FVAP co-hosted UOCAVA Remote Voting Systems 2-Day Workshop expecting to feast on some fine discussions about the technical details and nuances of building remote voting systems for overseas voters that could meet the demands of security and privacy.  Instead, we had witnessed an intellectual food fight of ideology.  That all changed in a big way today.

The producers and moderators of the event -- I suspect sensing the potential side effects of yesterday's outcome -- came together, somehow collectively made some adjustments (in moderation techniques, approach, and topic tweaking), and pulled off an excellent, informative day full of the kind of discourse for which I willingly laid down money (the Foundation's money, no less) to attend in the first place.

My hat is off; NIST and the EAC on the whole did a great job with a comeback performance today that nearly excused all of what we witnessed yesterday.  Today, they exhibited self-deprecating humor, and even had election officials playing up their drunk-driver characterization from the day before.

Let me share below what we covered; it was substantive.  It was detailed.  And it was tiring, but in a good way.  Here it is:

Breakout Session – Voter Authentication and Privacy

--Identified voter authentication and privacy characteristics and risks of the current UOCAVA voting process.

--Identified potential risks related to voter authentication and privacy of remote electronic absentee voting systems. For example, the group considered:

  • Ballot secrecy
  • Coercion and/or vote selling
  • Voter registration databases and voter lists
  • Strength of authentication mechanisms
  • Susceptibility to phishing/social engineering
  • Usability and accessibility of authentication mechanisms
  • Voter autonomy
  • Other potential risks

--Considered measures and/or criteria for assessing and quantifying identified risks and their potential impacts.

  • How do these compare to those of the current UOCAVA voting processes?

--Identified properties or characteristics of remote digital absentee voting systems that could provide authentication mechanisms and privacy protections comparable to those of the current UOCAVA voting process.

--Considered currently available technologies that can mitigate the identified risks. How do the properties or characteristics of these technologies compare to those of the current UOCAVA voting process?

--Started to identify and discuss emerging or future research areas that hold promise for improving voter authentication and/or privacy.  For example:

  • Biometrics (e.g., speaker voice identification)
  • Novel authentication methods

--Chatted about cryptographic voting protocols and other cryptographic technologies

Breakout Session – Network and Host Security

--Identified problems and risks associated with the transmission of blank and voted ballots through the mail in the current UOCAVA voting process.

--Identified risks associated with electronic transmission or processing of blank and voted ballots.  For example, the breakout group considered:

  • Reliability and timeliness of transmission
  • Availability of voting system data and functions
  • Client-side risks to election integrity
  • Server-side risks to election integrity
  • Threats from nation-states
  • Other potential risks

--Considered and discussed measures and/or criteria for assessing and quantifying identified risks and their potential impacts.

  • How do these compare to those of the current UOCAVA voting process?

--Identified properties or characteristics of remote digital absentee voting systems that could provide for the transmission of blank and voted ballots at least as reliably and securely as the current UOCAVA voting process.

--Discussed currently available technologies that can mitigate the identified risks and potential impact.

  • How do the properties and characteristics of these technologies compare to those of the current UOCAVA voting process?

--Identified and discussed emerging or future research areas that hold promise for improving network and host security.  For example:

  • Trusted computer and trusted platform models
  • End point security posture checking
  • Cloud computing
  • Virtualization
  • Semi-controlled platforms (e.g., tablets, smart phones, etc.)
  • Use of a trusted device (e.g., smart card, smart phone, etc.)

As you can see, there was a considerable amount of information covered in each 4-hour session, and then the general assembly reconvened to report on the outcomes of each breakout group.

Did we solve any problems today?  Not so much.  Did we make great strides in challenge identification, guiding-principles development, and framing the issues that require more research and solution formulation?  Absolutely.

Most importantly, our CTO John Sebes and I gained a great deal of knowledge we can incorporate into the work of the TrustTheVote Project, had some badly needed clarifying discussions with several participants, and feel we are moving in the right direction.

We clarified where we stand on use of the Internet in elections: it's not time for anything beyond tightly controlled experimentation, and there is a lack of understanding of the magnitude of resources required to stand up sufficiently hardened data centers to make it work, let alone to figure out the problems at the edge.

And we feel like we made some small contributions to helping the EAC and NIST figure out the kind of test pilot they wish to stand up as a guiding-principles reference model sometime over the next two years.

Easily two full days' work for the 50-60 people in attendance.

Back to the west coast (around 3am for my Pacific colleagues ;-)

It's a wrap.  GAM|out


EAC Guidelines for Overseas Voting Pilots

Last Friday was a busy day for the federal Election Assistance Commission.  They issued their Report to Congress on efforts to establish guidelines for remote voting systems.  And they closed their comment period at 4:00pm for the public to submit feedback on their draft Pilot Program Testing Requirements.  This is being driven by the MOVE Act implementation mandates, which we have covered previously here (and summarized again below).  I want to offer a comment or two on the 300+ page report to Congress and on the Pilot program guidelines, for which we submitted some brief comments, most of which reflected the comments submitted by ACCURATE, friends and advisers of the OSDV Foundation.

To be sure, the size of the Congressional Report is due to the volume of content in the Appendices including the full text of the Pilot Program Testing Requirements, the NIST System Security Guidelines, a range of example EAC processing and compliance documents, and some other useful exhibits.

Why Do We Care?

The TrustTheVote Project's open source elections and voting systems framework includes several components useful to configuring a remote ballot delivery service for overseas voters.  And the MOVE Act updates existing federal law intended to ensure that voters stationed or residing (not merely visiting) abroad can participate in elections at home.

A Quick Review of the Overseas Voting Issue

The Uniformed and Overseas Citizens Absentee Voting Act (UOCAVA) protects the absentee voting rights of U.S. citizens, including active members of the uniformed services and the merchant marine, and their spouses and dependents, who are away from their place of legal voting residence.  It also protects the voting rights of U.S. civilians living overseas.  Election administrators are charged with ensuring that each UOCAVA voter can exercise their right to cast a ballot.  In order to fulfill this responsibility, election officials must provide a variety of means for voters to obtain information about voter registration and voting procedures, and to receive and return their ballots.  (As a side note, UOCAVA also establishes requirements for reporting statistics on the effectiveness of these mechanisms to the EAC.)

What Motivated the Congressional Report?

The MOVE (Military and Overseas Voter Empowerment) Act, which became law last fall, is intended to bring UOCAVA into the digital age.  Essentially it mandates a digital means to deliver a blank ballot.

Note: the law is silent on a digital means to return prepared ballots, although several jurisdictions are already asking the obvious question:  "Why improve only half the round trip of an overseas ballot casting?"

And accordingly, some Pilot programs for MOVE Act implementation are contemplating the ability to return prepared ballots.  Regardless, there are many considerations in deploying such systems, and given that the EAC is allocating supporting funds to help States implement the mandates of the MOVE Act, the agency is charged with ensuring that those monies are allocated to programs adhering to guidelines it promulgates.  I see it as a "checks and balances" effort to ensure EAC funding is not spent on system failures that put UOCAVA voters' participation at risk of disenfranchisement.

And this is reasonable given the MOVE Act intent.  After all, in order to streamline the process of absentee voting and to ensure that UOCAVA voters are not adversely impacted by the transit delays involved due to the difficulty of mail delivery around the world, technology can be used to facilitate overseas absentee voting in many ways from managing voter registration to balloting, and notably for our purposes:

  • Distributing blank ballots;
  • Returning prepared ballots;
  • Providing for tracking ballot progress or status; and
  • Compiling statistics for UOCAVA-mandated reports.

The reality is, however, systems deployed to provide these capabilities face a variety of threats.  If technology solutions are not developed or chosen so as to be configured and managed using guidelines commensurate with the importance of the services provided and the sensitivity of the data involved, a system compromise could carry severe consequences for the integrity of the election, or the confidentiality of sensitive voter information.

The EAC was therefore compelled to prepare Guidelines, report to Congress, and establish (at least) voluntary guidelines.  And so we commented on those Guidelines, as did colleagues of ours from other organizations.

What We Said - In a Nutshell

Due to the very short comment period, we were unable to dive into the depth and breadth of the Testing Requirements.  That's a matter for another commentary.  Nevertheless, here are the highlights of the main points we offered.

Our comments were developed in consultation with ACCURATE; they consisted of (a) underlining a few of the ACCURATE comments that we believed were most important from our viewpoint, and (b) adding a few suggestions for how Pilots should be designed or conducted.  Among the ACCURATE comments, we underscored:

  • The need for a Pilot's voting method to include a robust paper record, as well as complementary data, that can be used to audit the results of the pilot.
  • Development and publication of security specifications that are testable.

In addition, we recommended:

  • Development of a semi-formal threat model, and comparison of it to threats of one or more existing voting methods.
  • Testing in a mock election, in which members of the public can gain understanding of the mechanisms of the pilot, and perform experimentation and testing (including security testing), without impacting an actual election.
  • Auditing of the technical operations of the Pilot (including data center operations), publication of audit results, and development of a means of accounting for the cost of operating the pilot.
  • Publication of ballot data, cast vote records, and the results of auditing them, without compromising the anonymity of the voter and the ballot.
  • Post-facto reporting on means and limits of scaling the size of the pilot.

You can bet this won't be the last we'll hear about MOVE Act Pilot issues; I think it's just the second inning of an interesting ball game...  GAM|out


A License to Adopt

Open Source Technology Licensing... We've been promising to respond to the chorus of concerns that we may drift from the standard GPL for our forthcoming elections and voting systems software platform and technology.  Finally, we can begin talking about it (mainly because I found a slice of time to do so, and not because of any event horizon enabling the discussion, although we're still working out issues and there will be more to say soon.)

From the beginning we’ve taken the position that open source licenses as they exist today (and there are plenty of them) are legally insufficient for our very specific purposes of putting publicly owned software into the possession of elections jurisdictions across the nation.

And of course, we’ve heard from activists, essentially up in arms over our suggestion that current licensing schemes are inadequate for our purposes.  Those rants have ranged from the sublime to the ridiculous, with some reasonable questions interspersed.  We’d like to now gently remind those concerned that we [a] benefit from a strong lineage of open source licensing experience dating back to the Mozilla code-release days of Netscape catalyzed by Eric Raymond’s Manifesto, [b] have considerable understanding of technology licensing (e.g., we have some deep experience on our Board and within our Advisory group, and I'm a recovering IP lawyer myself), and [c] are supported by some of the best licensing lawyers in the business. So, we’re confident of our position on this matter.

We’ve dared to suggest that the GPL as it stands today, or for that matter any other common open source license, will probably not work to adequately provide a license to the software sources for elections and voting systems technology under development by the Open Source Digital Voting Foundation.  So, let me start to explain why.

I condition this with “start” because we will have more to say about this – sufficient to entertain your inner lawyer, while informing your inner activist.  That will take several forms, including a probable white paper, more blog posts, and (wait for it) the actual specimen license itself, which we will publicly vet (to our Stakeholder Community first, and the general public immediately thereafter).

The Why of it…

The reasons we’re crafting a new version of a public license are not primarily centered on the grant of rights or other “meat” of the license, but on ancillary legal terms that may be of little significance to most licensees of open source software, yet turn out to be of considerable interest to -- and in many cases requirements of -- government agencies intending to deploy open source elections and voting technology in real public elections, where they’re “shooting with live ballots,” as Bob Carey of FVAP likes to say.

It is possible that an elections jurisdiction, as a municipal entity, could contract around some of the six points I make below, but the GPL (and most other “copyleft” licenses) expressly disallows placing "additional restrictions" on the terms of the license.  And most of the items I describe below could or would be considered "additional restrictions."  Therefore, such negotiation of terms would render a standard copyleft license invalid under its terms of issuance today.

It’s not like we haven’t burnt through some whiteboard markers thinking this through -- again, we’re blessed with some whip-smart licensing lawyers.  For instance, we considered a more permissive license, wrapped in some additional contract terms.  But a more permissive license would be a significant decision for us, because it would likely allow proprietary use of the software -- an aspect we’ve not settled on yet.

With that in mind, here are six of the issues we’re grappling with that give rise to the development of the “OSDV Public License.”  This list is by no means exhaustive.  And I'm waiting for some additional insight from one of our government contracting lawyers who is a specialist in government intellectual property licensing.  So we’ll have more to say beyond here.

  1. Open source licenses rarely have “law selection” clauses.  Fact: Most government procurement regulations require the application of local state law or federal contracting law to the material terms and conditions of any contract (including software “right to use” licenses).
  2. Open source licenses rarely have venue selection clauses (i.e., site and means for dispute resolution).  Fact: Many state and federal procurement regulations require that disputes be resolved in particular venues.
  3. There are rights assignment issues to grapple with.  Fact: Open source licenses do not have "government rights" provisions, which clarify that the software is "commercial software" and thus not subject to the draconian rules of federal procurement that may require an assignment of rights to the software when the government funds development.  (There may be state equivalents, we’re not certain.)  On the one hand, voting software is a State or county technology procurement and not a federal activity.  But we’ve been made aware of some potential parallelism in State procurement regulations.
  4. Another reality check is that our technology will be a complex mix of components, some of which may actually rise to the level of patentability, which we intend to pursue with a “public assignment” of resulting IP rights.  Fact: Open source licenses do not contain "march-in rights" or other similar provisions that may be required by (at least) federal procurement regulations for software development.  Since some portion of our R&D work may be subject to funding derived from federal-government grants, we’ll need to address this potential issue.
  5. There is a potential enforceability issue.  Fact: Contracting with states often requires waiver of sovereign immunity to make licenses meaningfully enforceable.
  6. In order to make our voting systems framework deployable for legal use in public elections, we will seek Federal and State(s) certifications where applicable.  Doing so will confer a certain qualification for use in public elections on which will be predicated a level of stability in the code and a rigid version control process.  It may be necessary to incorporate additional terms into “deployment” licenses (versus “development” licenses) specific to certification assurances and therefore, stipulations on “out-of-band” modifications, extensions, or enhancements.  Let’s be clear: this will not incorporate any restrictions that would otherwise be vexatious to the principles of open source licensing, but it may well require some procedural adherence.

And this is just the tip of the iceberg on the matter of providing an acceptable, comprehensive, enforceable open source license for municipalities, counties, and States that desire to adopt the public works of the Open Source Digital Voting Foundation's TrustTheVote Project for deployment in conducting public elections.

At this juncture, it's looking like we may end up crafting a license somewhat similar in nature to the Mozilla MPL.

Hopefully, we’ve started a conversation here to clarify why it’s a bit uninformed to suggest we simply need to "bolt on" the GPLv3 for our licensing needs.  Elections and voting technology -- especially that which is deployed in public elections -- is a different animal than just about any other open source project, even compared to a majority of other government IT applications.

Stay tuned; we’ll have more to show and tell.

Cheers GAM|out


A Tale of Two Ballots

To give an idea of some of the many aspects of ballot design that we're working on, I have a couple of ballot images for you, from Larry Norden's keynote presentation at the recent EVT conference. The problem illustrated is that the first contest is spread across two columns, which looks like it might be two separate contests. As a result, a voter might make a selection in the box at lower right and another in the box at middle top -- in which case neither vote would count. In this picture, the big red arrow shows how to improve this ballot: put all of the candidates in one box in one column.

[Image: BrennanBetterBallot1]

Fair enough. In our ballot design studio, this error probably won't even be an option -- contests wouldn't be spread across a column break or page break except in exceptional cases with really long candidate lists.

The second picture shows an ideal re-design of the same ballot, using some principles from the design standards that AIGA developed for the EAC. In addition to re-organizing the content, the layout has several usability improvements. The use of color and shading highlights the separation between contests and the separation of groups of contests. It's much easier to see at a glance that there are 8 contests in 3 categories. Likewise, the straight-party voting option clearly reads as a contest, rather than sitting in the same box with the instructions to the voter.

[Image: BrennanBetterBallot2]

Again, in our ballot design studio, we're developing design templates that adopt the use of color and shading in a similar way, rather than having uniform black text and thin black lines on a plain white background.

So what's the point? This is the easy stuff! In the next few postings about ballot design, we'll show how the design improvements illustrated here are just the tip of the iceberg. The phrase "the devil is in the details" does not even begin to describe the situation! Stay tuned.

-- EJS


Federally Approved Voting System - Not Tested for Security

We now have a federally certified voting system product that has completed the required testing by a federally certified independent test lab. That's a milestone in itself, as is the public disclosure of some of the results of the testing process. Thanks to that disclosure, though, we now know that the test lab did practically zero security testing of the Premier product, because of a gross mis-understanding of a prior security review. For a complete, accurate, and brief explanation of the whole situation, I urge you to read this letter to the EAC. The letter is from a group of absolutely top-notch voting technology and/or computer security experts, who were involved in California's Top to Bottom Review (TTBR) of voting systems, which included the Premier system that was recently certified.

At the risk of over-simplifying, the story goes like this.

  1. The TTBR found loads of technical security problems and concluded that
    • the security problems were so severe that its technological security mechanisms were unable to protect the system; and
    • the problems could be addressed only with strict procedural security - chain of custody, tamper-evident seals, and the like.
  2. Next, the test lab mis-interpreted these conclusions, assuming that the system's security depended only on effective procedural controls; therefore, no need to test technical security mechanisms!
  3. The test plan included no additional security tests, and hence the Premier system passed testing despite the many security flaws found in the TTBR.

That's the gist, but do read the letter to the EAC. It's a fine piece of writing in which Joe Hall and Aaron Burstein set out everything fair and square, chapter and verse. I have to say it's astonishing.

Now, maybe this seems exceptionally geeky, with cavils over test plans and test lab results, and so on. Or maybe it seems critical of the EAC/NIST testing program. But in fact that test program is incredibly important as a gate through which computing technology must pass before being used to count votes. In a very real sense, the current testing program is just getting started, so perhaps it's not surprising that there are many lessons to learn. And my thanks go to all these TTBR veterans for speaking out to remind us how much there is to learn on the road to excellence both of voting systems and of the program to test them.

-- EJS


Gold or Pyrite?

In a couple of prior posts, I explained the "gold copy" or "trusted build" concept, and the role of the EAC, NIST, and test labs. I can't seem to completely bury this tale, because it raised another question about the process of checking a voting system to see if it is legit: "Why is this checking so hard? Is the government doing its job here?" Yes, the government is doing its job, in that voting system products are tested, and theoretically the test results include fingerprint data that an election official could use effectively. That's the general idea. Now, I am not saying that the current test lab approach is great. It isn't, and it lacks transparency, among several other defects. Nevertheless, if there were a certified system that you trusted, the independent test process could in fact use some techniques to fingerprint the system tested, so that you could compare a government-certified fingerprint of your system to see if it was exactly the same as the tested system. It's just that with the current voting devices, the checking is impractical and -- if done without some very-difficult-to-achieve auditable physical security measures -- almost meaningless. But that's old news from old posts.

The remaining question for today is why current systems are impractical to check. What is it with these existing voting systems? They are not easily validated because in most cases they simply weren't designed for it. Validation was not required, and few thought it important at the time (the post-HAVA gold rush). Then, once a system is built, you really can't go back and tack on a "field validation" feature. The government regulatory groups do not want to "change the rules" on the vendors -- in part for the very good reason that the existing Gold-Rush-era rules (the Voluntary Voting System Guidelines of 2002, the year HAVA was enacted) are still the current rules, with a revision still in process. So for the foreseeable future, there is no pragmatic reason for existing systems to change.

And that brings us back to a frequent theme that shows the real benefit of openness: public benefit. Existing systems weren't built to be validated; it's expensive to re-architect and re-implement them; for-profit vendors have no incentive to do so. Hence, trust benefits can only be expected from technology that is delivered and maintained in the public trust by a public interest organization, in order to maintain public confidence.

-- EJS


Identifying the Gold, Redux

I recently commented on the specific connection, in the case of the TrustTheVote project, between open source methods and the issue of identifying a "gold build" of a certified voting system. As a reminder to more recent readers, most states have laws that require election officials to use only those specific voting system products that were previously certified for use in that state -- and not some slightly different version of the same product. But recently, I got a good follow-up question: what is the role of the Federal government in this "gold build" identification process? There is in fact an important, potentially very helpful role, and openness can magnify its benefit. Here's the scoop. The EAC has the fundamental responsibility for Federal certification, which is used in varying degrees as part of some states' certification. Testing is the main body of work leading up to certification. Testing is performed by private companies that have qualified, in a NIST-managed accreditation program, as official Voting Systems Test Labs. There are two key steps in the overall process. First, the test lab verifies that the "trusted build" process can be re-run to re-create the exact same software that was tested -- the soon-to-be "gold" version. Then, as the EAC Web site briefly states: "Manufacturer provides software identification tools to EAC, which enables election officials to confirm use of EAC-certified systems."

But here is the fly in the ointment: for your typical PC or server, this is not easy! And the same is true for current voting systems. Yes, you could crack open the chassis, remove the hard drive, examine it as the boot medium, re-derive a fingerprint, and compare the fingerprint to something on the EAC web site. But in practice this is not going to happen in real election offices, and in any case it would be fruitless -- even if you did, you would still have no assurance that the device in the precinct was still the same as the gold build, because the boot media can be written after the central office tests the device, but before it goes into use in a polling place.
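To make the "re-derive a fingerprint and compare" step concrete, here is a minimal sketch of what such a check could look like. This is purely illustrative: the choice of SHA-256, the function names, and the idea of comparing against a digest published on a web site are my assumptions, not a description of the EAC's actual software identification tools.

```python
import hashlib

def fingerprint(image_path: str, chunk_size: int = 1 << 20) -> str:
    """Derive a SHA-256 fingerprint of a boot medium or disk image,
    reading in chunks so large images don't need to fit in memory."""
    digest = hashlib.sha256()
    with open(image_path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def matches_certified(image_path: str, certified_hex: str) -> bool:
    """Compare a locally derived fingerprint to a published 'gold build'
    digest (hypothetical; case-insensitive hex comparison)."""
    return fingerprint(image_path) == certified_hex.strip().lower()
```

Note that even a successful match only attests to the medium at the moment it was checked, which is exactly the time-of-check/time-of-use gap described above: the boot media can still be rewritten between the check and election day.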

That's quite an annoying fly in the ointment, but it doesn't have to be that way. In fact, for a carefully designed dedicated system, the fingerprinting and re-checking can be quite feasible -- and that applies to carefully made voting systems too, as we've previously explained. Such carefully made voting systems would be a real improvement in trustworthiness (which is why we're building them!), but they aren't a silver bullet, since you can never 100% trust the integrity of a computing system. That's why vote tabulation audits are an important ingredient, and why I periodically bang on about auditing in election processes.

-- EJS


Open and Secret?

Scanning the news last week, I found rumors of Premier Election Solutions (the voting system vendor formerly known as Diebold) going open source, and of the Federal government pondering cases where voting system test results should be confidential. An interesting juxtaposition! The first item I call a rumor not because I disbelieve the blogger in question, but because Premier hasn't announced anything. But the blog article does contain some interesting stuff, including a paraphrase of Premier's CEO opining that releasing source code to the public would be an approach that results in several beneficial outcomes.

At the other end of the spectrum we have some news from a recent meeting of the EAC's standards committee, including discussion of the new Voluntary Voting Systems Guidelines draft, intended for use in Federal certification testing of products like those from Premier. The draft would require that the result of the testing process should include documentation of all attacks the system is designed to resist or detect, as well as any vulnerabilities known to the manufacturer. Some committee members pondered whether the vendors should mark this information as confidential.

Some observers have questioned whether it would be appropriate to certify a system that has known security vulnerabilities. Others have pointed out that every system has vulnerabilities, and that the important issue is to be clear about the definition and boundaries of security, sometimes called a "threat model." Within those defined limits, customers should be very clear about which deployment and operational procedures they need to adhere to, in order to maintain the integrity of the system as defined by the vendor; beyond those limits, caveat emptor.

We might take up that debate another day. But for now, the obvious old adage applies: you can't have it both ways. If a system is truly open, then there are no secrets -- though you can try hushing up issues that can be readily discovered by directly inspecting the open system. However, I think there is a case to be made that there really are no secrets anyway, especially for systems important enough that people really do want to know "whether it really works." Next time, a couple of specific examples from recently published voting machine security studies that have put me in mind of another saying: "Open, Sesame!"

-- EJS


Stalking the Errant Voting Machine: the Final Chapter

Some readers may sigh with relief at the news that today's post is the last (for a while at least!) in a series about the use of vote-count auditing methods to detect a situation in which an election result was garbled by the computers used to create it. Today, a little reality check on the use of the risk-limiting audit methods described earlier. As audit guru Mark Lindeman says,

Risk-limiting audits clearly have some valuable properties, yet no state has ever implemented a risk-limiting audit.

Why not? Despite the rapid development of RLA methods (take a quick glance at this paper to get a flavor), there are several obstacles, including:

  • Basic misconceptions: Nothing short of a full re-count will ever prove the absence of a machine count error. Instead, the goal of RLA is to reduce the risk that machine count errors altered the outcome of any contest in a given election. Election result correctness is the goal, not machine operations correctness -- yet the common misperception is often the reverse.
  • Requirements for election audits must be part of state election laws or the regulations that implement them. Details of audit methods are technical and difficult to write into law -- detailed enough that it is perhaps unwise to enshrine them in law rather than regulation. Hence, there is some tension and confusion about the respective roles of states' legislative and executive branches.
  • Funding is required. Local election officials have to do the work of audits of any kind, and need funding to do so. A standard flat-percent audit is easier for a state to know how to fund than a variable-effort RLA whose size depends on election margins and voter turnout.
  • The variability itself is a confusing factor, because you can't know in advance how large an audit will have to be. This fact creates confusion or resistance among policy-makers and under-funded election officials.
  • Election tabulation systems often do not provide timely (or any) access to the data needed to implement these audits efficiently. These systems simply weren't designed to help election officials do audits -- and hence are another variable cost factor.
  • Absentee and early-voting ballots sometimes pose large logistical challenges.
  • Smaller contests are harder to audit to low risk levels, so someone must decide how to allocate resources across various kinds of contests.
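As a concrete illustration of why an RLA's size can't be known in advance, here is a minimal sketch of a BRAVO-style ballot-polling audit in the spirit of Lindeman and Stark's work. The function and its two-candidate simplification are my own illustration, not code from any official audit tool:

```python
import random

def bravo_audit(reported_winner_share, risk_limit, ballots, seed=1):
    """Sequential BRAVO-style ballot-polling audit (after Lindeman & Stark)
    for a two-candidate contest.  `ballots` is the full list of cast ballots,
    each entry True if it is a vote for the reported winner.  Returns the
    number of ballots sampled before the risk limit was satisfied, or None
    if the whole list was exhausted (i.e., escalate to a full hand count)."""
    rng = random.Random(seed)            # seeded for a reproducible sample order
    order = list(ballots)
    rng.shuffle(order)                   # draw ballots in random order
    threshold = 1.0 / risk_limit         # stop when the test statistic hits 1/alpha
    s_w = reported_winner_share
    t = 1.0
    for n, vote_for_winner in enumerate(order, start=1):
        if vote_for_winner:
            t *= s_w / 0.5               # drawn ballot supports the reported outcome
        else:
            t *= (1.0 - s_w) / 0.5       # drawn ballot cuts against it
        if t >= threshold:
            return n                     # risk limit met after n ballots
    return None
```

In a 60/40 contest this sketch typically stops after a hundred or so ballots, while a 51/49 contest can require thousands -- exactly the unpredictable, margin-dependent effort described in the list above.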

As Lindeman points out, each of these problems is tractable, and real progress in RLA practice can be made without a solution to all of these problems. And in my view, one of the best ways to help would be to greatly increase transparency, including both the operations of the voting systems (not just the tabulation components!), and of the auditing process itself. Then we could at least determine which contests in an election are most at risk even after the audits that election officials are able to conduct at present. Perhaps that would also enable experts like Lindeman to conduct unofficial audits, to demonstrate effectiveness and help indicate efforts and costs for official use of RLA.

And dare I say it, we might even enable ordinary citizens to form their own judgment of an individual contest in an election, based on real published facts: the total number of ballots cast in a county, the total number of votes in the contest, the margins in the contest, the total number of precincts, the precincts officially audited, and (crank a statistics engine) the actual confidence level in the election result -- whether the official audit was too little, too much, or just right. That may sound ambitious, and maybe it is, but that's what we're aiming for with operational transparency of the voting system components of the TTV System, and in particular with the TTV Auditor -- currently a gleam in the eye, but picking up steam with efforts from NIST and OASIS on standard data formats for election audit data.
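To give a flavor of that "statistics engine" step, the sketch below computes the chance that a uniform random audit of some number of precincts examines at least one of the precincts whose corruption would be needed to flip the outcome (a number derived from the contest margin). This is the basic hypergeometric calculation behind flat-percent audit critiques; the function is my own illustration, not the TTV Auditor, and real audit methods also weight precincts by size:

```python
from math import comb

def detection_confidence(total_precincts, audited, min_corrupt):
    """Probability that auditing `audited` precincts, chosen uniformly at
    random out of `total_precincts`, catches at least one of `min_corrupt`
    corrupted precincts.  Illustrative only: real audit math also accounts
    for precinct sizes and within-precinct discrepancy bounds."""
    # Chance that the random sample misses every corrupted precinct
    miss = comb(total_precincts - min_corrupt, audited) / comb(total_precincts, audited)
    return 1.0 - miss
```

For example, a flat 10% audit of 100 precincts catches a 5-precinct corruption only about 42% of the time, while doubling the audit raises that considerably -- which is why a fixed percentage can be too little for close contests and too much for landslides.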

-- EJS


New Voting System Vendors Enter the Certification Fray

Here are a couple of interesting news tidbits to ponder today, showing the breadth and depth of openness to changes to current U.S. voting methods. First, some news from the EAC, the part of the Federal government that runs the program for Federal certification of voting systems -- certification that in many states is effectively a prerequisite for legal use of a voting system in the state. The EAC announced that two new vendors have joined the certification program -- both of them vendors of products that are often called "Internet voting systems".

It's very much worth noting that this does not mean that Scytl and EveryoneCounts have certified products! Far from it, though no doubt some imprecise language might be misleading on that point. It just means that the companies have qualified with the EAC, and may at some point choose to engage an EAC-accredited test lab to evaluate their products. But it is still interesting that this announcement gives the appearance that Internet voting systems might someday be legal for use in the U.S.

Second, some news from Wisconsin about that state's 5-year mission to figure out a better way to run voting in WI, with pretty much all options on the table, including a switch to solely paper ballots as in MN, all vote-by-mail as in OR, and even Internet voting as used by a few local election organizations in WA and HI.

It's gratifying to see how serious people are about the fact that current election approaches need serious improvement, and the improvements have to be undertaken carefully and thoughtfully.

-- EJS


Hasty Innovation: the Kind We Don't Need

Today's posting landed in my lap in the form of a note from election tech colleague and Pitt researcher Collin Lynch, as part of a discussion about the role of the Federal government (specifically the Election Assistance Commission, or EAC) in "fostering innovation" in the market for voting systems, and in ensuring a "healthy market". Well, of course, we think there is plenty of room for improvement in voting systems, but there is a big difference between (for example) improved usability or reliability, and innovative changes to voting system administration that require election officials to change how they do their jobs. But Collin hit the nail on the head:

Speaking as a software developer I think the cry for "supporting innovation" comes from two mistaken impressions.

  1. The mistaken impression that voting laws should somehow be concerned with the "health of the market" -- that is, that the EAC's responsibility includes not only the stability of our democracy (difficult enough as it is) but also maintaining a "healthy market" for the products of voting system vendors. This is a view that has caught on to some extent in defense spending and other areas, but in my view a market exists solely to serve some need, and artificially inflating that need, at best, tilts the market to no good end.
  2. The mistaken impression that the technology development process can be "stimulated," especially when the market has real needs for problems to be fixed, problems that appear simple and therefore should be fixed quickly. For voting systems in particular, this is not true.

These are fundamental misunderstandings of how good systems development works and how it can be made to work. In the extreme, I have seen purchasers of systems assume that programmers "can just work faster," without considering the costs to quality and stability this brings. This is a view encouraged by the (seemingly) breathless pace of dot-com development, which of course ignores the long lead time behind many of these overnight successes and the stability problems that result from alphas being rushed to market.

I couldn't agree more. The last thing we need in voting systems is to encourage vendors' already-existing bent for innovative bells-and-whistles features that customers (election officials) didn't ask for, or to push for updated systems to be developed quickly and rushed through the regulatory certification and accreditation process. And while we at the TrustTheVote project are hardly in favor of foot-dragging, we also recognize that quality, reliability, simplicity, integrity, and many other important qualities fundamentally start with understanding what the customers need and building to those needs -- with the attention to quality and reliability that is needed to make it through the regulatory process as well.

-- EJS