Viewing entries tagged: voting system


Comments Prepared for Tonight's Elections Technology Roundtable

This evening at 5:00pm members of the TrustTheVote Project have been invited to attend an elections technology roundtable discussion in advance of a public hearing in Sacramento, CA scheduled for tomorrow at 2:00pm PST on new regulations governing Voting System Certification to be contained in Division 7 of Title 2 of the California Code of Regulations. Due to the level of activity, only our CTO, John Sebes, is able to participate.

We were asked if John could be prepared to make some brief remarks regarding our view of the impact of SB-360 and its potential to catalyze innovation in voting systems.  These types of events are always dynamic and fluid, and so we decided to publish our remarks below just in advance of this meeting.

Roundtable Meeting Remarks from the OSDV Foundation | TrustTheVote Project

We appreciate an opportunity to participate in this important discussion.  We want to take about 2 minutes to comment on 3 considerations from our point of view at the TrustTheVote Project.

1. Certification

For SB-360 to succeed, we believe any effort to create a high-integrity certification process requires re-thinking how certification has been done to this point.  Current federal certification, for example, takes a monolithic approach; that is, a voting system is certified based on a complete all-inclusive single closed system model.  This is a very 20th century approach that makes assumptions about software, hardware, and systems that are out of touch with today’s dynamic technology environment, where the lifetime of commodity hardware is months.

We are collaborating with NIST on a way to update this outdated model with a "component-ized" approach; that is, a unit-level testing method, such that if a component needs to be changed, the only re-certification required would be of that discrete element, and not the entire system.  There are enormous potential benefits including lowering costs, speeding certification, and removing a bar to innovation.
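
To make the idea concrete, here is a minimal sketch (our own illustration, not any certification authority's actual process or data model) of how a component-level manifest could scope re-certification to only the element that changed:

```python
# Hypothetical component manifest for an already-certified system.
# Component names and version strings are invented for illustration.
certified_manifest = {
    "ballot-scanner-firmware": "2.1.0",
    "ballot-marking-device":   "3.0.1",
    "tabulator":               "1.4.2",
}

# The vendor ships an update that touches exactly one component.
updated_system = {
    "ballot-scanner-firmware": "2.1.0",
    "ballot-marking-device":   "3.0.1",
    "tabulator":               "1.4.3",
}

# Under a component-ized process, only the changed unit goes back to the test lab.
needs_recertification = [
    name for name, version in updated_system.items()
    if certified_manifest.get(name) != version
]
print(needs_recertification)  # ['tabulator'] -- re-test one unit, not the whole system
```

Under today's monolithic model, by contrast, that same one-component change would send the entire system back through certification.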

We're glad to talk more about this proposed updated certification model, as it might inform any certification processes to be implemented in California.  Regardless, elections officials should consider that in order to reap the benefits of SB-360, the non-profit TrustTheVote Project believes a new certification process, component-ized as we describe it, is essential.

2. Standards

Second, there is a prerequisite for component-level certification that until recently wasn't available: common open data format standards that enable components to communicate with one another; for example, a format for a ballot counter's output of vote tally data that also serves as input to a tabulator component.  Without common data formats, elections officials have to acquire a whole integrated product suite that communicates in a proprietary manner.  With common data formats, you can mix and match; and perhaps more importantly, incrementally replace units over time, rather than doing what we like to refer to as "forklift upgrades" or "fleet replacements."
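
As a rough sketch of why this matters, imagine a counting device that emits its tallies in a shared, documented format; any tabulator that understands that format can consume them, regardless of vendor. (The record below is entirely made up for illustration; it is not one of the actual draft standards.)

```python
import json

# A counting device (any vendor) writes its tallies in the common format.
counter_output = json.dumps({
    "device_id": "precinct-12-scanner-1",
    "contest": "County Commissioner",
    "tallies": {"Smith": 412, "Jones": 389},
    "ballots_counted": 810,
})

def load_tally(record: str) -> dict:
    """A tabulator component (from any other vendor) reads the same format."""
    data = json.loads(record)
    missing = {"device_id", "contest", "tallies", "ballots_counted"} - set(data)
    if missing:
        raise ValueError(f"tally record missing fields: {missing}")
    return data

print(load_tally(counter_output)["tallies"])  # {'Smith': 412, 'Jones': 389}
```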

The good news is the scope for ballot casting and counting is sufficiently focused to avoid distraction from the many other standards elements of the entire elections ecosystem.  And there is more goodness because standards bodies are working on this right now, with participation by several state and local election officials, as well as vendors present today, and non-profit projects like TrustTheVote.  They deserve congratulations for reaching this imperative state of data standards détente.  It's not finished, but the effort and momentum is there.

So, elections officials should bear in mind that benefits of SB-360 also rest on the existence of common open elections data standards.

3. Commercial Revitalization

Finally, this may be the opportunity to realize a vision we have that open data standards, a new certification process, and lowered bars to innovation through open sourcing, will reinvigorate a stagnant voting technology industry.  Because the passage of SB-360 can fortify these three developments, there can (and should) be renewed commercial enthusiasm for innovation.  Such should bring about new vendors, new solutions, and new empowerment of elections officials themselves to choose how they want to raise their voting systems to a higher grade of performance, reliability, fault tolerance, and integrity.

One compelling example is the potential for commodity commercial off-the-shelf hardware to fully meet the needs of voting and elections machinery.  To that point, let us offer an important clarification and dispel a misconception about rolling your own.  This does not mean that elections officials are about to be left to self-vend.  And by that we mean self-construct and support their open, standard, commodity voting system components.  A few jurisdictions may consider it, but in the vast majority of cases, the Foundation forecasts that this will simply introduce more choice rather than forcing you to become a do-it-yourself type.  Some may choose to contract with a systems integrator to deploy a new system integrating commodity hardware and open source software.  Others may choose vendors who offer out-of-the-box open source solutions in pre-packaged hardware.

Choice is good: it’s an awesome self-correcting market regulator and it ensures opportunity for innovation.  To the latter point, we believe initiatives underway like STAR-vote in Travis County, TX, and the TrustTheVote Project will catalyze that innovation in an open source manner, thereby lowering costs, improving transparency, and ultimately improving the quality of what we consider critical democracy infrastructure.

In short, we think SB-360 can help inject new vitality in voting systems technology (at least in the State of California), so long as we can realize the benefits of open standards and drive the modernization of certification.


EDITORIAL NOTES: There was chatter earlier this Fall about the extent to which SB-360 allegedly makes unverified non-certified voting systems a possibility in California.  We don't read SB-360 that way at all.  We encourage you to read the text of the legislation as passed into law for yourself, and start with this meeting notice digest.  In fact, to realize the kind of vision that leading jurisdictions imagine, we cannot, and should not, do away with certification, and we think charges that this is what will happen are misinformed.  We simply need to modernize how certification works to enable this kind of innovation.  We think our comments today bear that out.

Moreover, have a look at the Agenda for tomorrow's hearing on implementation of SB-360.  In sum and substance the agenda is to discuss:

  1. Establishing the specifications for voting machines, voting devices, vote tabulating devices, and any software used for each, including the programs and procedures for vote tabulating and testing. (The proposed regulations would implement, interpret and make specific Section 19205 of the California Elections Code.);
  2. Clarifying the requirements imposed by recently chaptered Senate Bill 360, Chapter 602, Statutes 2013, which amended California Elections Code Division 19 regarding the certification of voting systems; and
  3. Clarifying the newly defined voting system certification process, as prescribed in Senate Bill 360.

Finally, there has been an additional charge that SB-360 is intended to "empower" LA County, such that LA County (or someone on their behalf) will sell the voting systems they build to other jurisdictions.  We think this allegation is also misinformed for two reasons: [1] assuming LA County builds their system on open source, there is a question as to what specifically would/could be offered for sale; and [2] notwithstanding offering open source for sale (which technically can be done... technically), it seems to us that if such a system is built with public dollars, then it is, in fact, publicly owned.  From what we understand, a government agency cannot offer for sale their assets developed with public dollars, but they can give them away.  And indeed, this is what we've witnessed over the years in other jurisdictions.

Onward.


"Why is There a Voting Tech Logjam in Washington

"Why is There a Voting Tech Logjam?" -- that's a good question! A full answer has several aspects, but one of them is the acitivty (or in-activity) at the Federal level, that leads to very limited options in election tech. For a nice pithy explanation of that aspect, check out the current issue of the newsletter of the National Conference of State Legislators, on page 4. One really important theme addressed here is the opportunity for state lawmakers to make their decisions about what standards to use, to enable the state's local election officials make their decisions about what technology to make or adopt -- including purchase, in-house build, and (of course) adoption and adaptation of open-source election technology.

-- EJS


TrustTheVote on HuffPost

We'll be live on HuffPost online today at 8pm eastern:

  • @HuffPostLive http://huff.lv/Uhokgr or live.huffingtonpost.com

and I thought we should share our talking points for the question:

  • How do you compare old-school paper ballots vs. e-voting?

I thought the answers would be particularly relevant to today's NYT editorial on the election which concluded with this quote:

That the race came down to a relatively small number of voters in a relatively small number of states did not speak well for a national election apparatus that is so dependent on badly engineered and badly managed voting systems around the country. The delays and breakdowns in voting machines were inexcusable.

I don't disagree, and indeed would extend from flaky voting machines to election technology in general, including clunky voter record systems that lead to many of the lines and delays in polling places.

So the HuffPost question is apposite to that point, but still not quite right. It's not an either/or but rather a comparison of:

  • old-school paper ballots and 19th century election fraud;
  • old-school machine voting and 20th century lost ballots;
  • old-school combo system of paper ballots and machine counting, and botched re-counting;
  • new-fangled machine voting (e-voting) and 21st century lost ballots;
  • newer combo system of paper ballots and machine counting (not voting).

Here are the talking points:

  • Old-school paper ballots were cast by hand and counted by hand, where the counters could change the ballot, for example a candidate Smith partisan could invalidate a vote for Jones by adding a mark for Smith.
  • These and other paper ballot frauds in the 19th century drove adoption in the early 20th century of machine voting, on the big clunky "lever machines" with the satisfying ka-thunk-swish of the lever recording the votes and opening the privacy curtain.
  • However, big problem with machine voting -- no ballots! Once that lever is pulled, all that's left is a bunch of dials and counters on the backside being increased by one. In a close election that requires a re-count, there are no ballots to examine! Instead the best you could do is re-read each machine's totals and re-run the process of adding them all up in case there was an arithmetic error.
  • Also, the dials themselves, after election day but before a recount, were a tempting target for twiddling, for the types of bad actors who in the 19th century fiddled with ballot boxes.
  • Later in the 20th century, we saw a move to a combo system of paper ballots and machine counting, with the intent that the machine counts were more accurate than human counts and more resistant to human meddling, yet with the paper ballots remaining for recounts, and for audits of the accuracy of the counting machinery.
  • Problem: these were the punch ballots of the infamous hanging chad.
  • Early 21st century: run from hanging chad to electronic voting machines.
  • Problem: no ballots! Same as before, only this time, the machines are smaller and much easier to fiddle with. That's "e-voting" but without ballots.
  • Since then, a flimsy paper record was bolted on to most of these systems to support recount and audit.
  • But the trend has been to go back to the combo system, this time with durable paper ballots and optical-scanning machinery for counting.
  • Is that e-voting? Well, it is certainly computerized counting. And the next wave is computer-assisted marking of paper ballots -- particularly for voters with disabilities -- but with these machine-created ballots counted the same as hand-marked ballots.

Bottom line: whether or not you call it e-voting, so long as there are both computers and human-countable durable paper ballots involved, the combo provides the best assurance that neither humans nor computers are mis-counting or interfering with voters casting ballots.

-- EJS

PS: If you catch us on HP online, please let us know what you thought!


Voting System (De)Certification, Reloaded (Part 3 of 2)

Thanks to some excellent recent presentations by EAC folks, we have today a pleasant surprise of an update to our recent blogs Voting System Decertification: A Way Forward (in Part 1 and Part 2). As you might imagine with a government-run test and certification program, there is an enormous amount of detail (much of it publicly available on the EAC web site!) but Greg and I have boiled it down to a handful of point/counterpoints. Very short version: EAC seems to be doing a fine job, both in the test/certification/monitoring roles, and in public communication about it. At the risk of oversimplifying down to 3 points, here goes:

1. Perverse Incentive

Concern: ES&S's Unity 3.2.0.0 would be de-certified as a result of EAC's investigation into functional irregularities documented in Cuyahoga County, Ohio, by erstwhile elections direction Jane Platten (Kudos to Cuyahoga). With the more recent product 3.2.1.0 just certified, the "fix" might be for customers of 3.2.0.0 to upgrade to the latest version, with unexpected time and unbudgeted upgrade expense to customers, including license fees. If so, then the product defect, combined with de-certification, would actually benefit the vendor by acting to spur customers toward paid upgrades. Update: Diligent work at EAC and ES&S has resulted in in ES&S providing an in-place fix to its 3.2.0.0 product, so that EAC doesn't have to de-certify the product, and customers don't have to upgrade. In fact, one recent result of EAC's work with Cuyahoga County, the county was able to get money back from the vendor because of the issues identified.

Next Steps: We'll be waiting to hear whether the fix is provided at ES&S's expense (or at least at no cost to customers), as it appears may be the case. We'll also be watching with interest the process in which the version 3.2.0.0 fix goes through the test and certification process to become legal for real use in elections. As longtime readers know, we've stressed the importance of the emergence of a timely re-certification process for products that have been certified, need a field update, and need the previously used test lab to test the updated system with testing that is as rigorous as the first time, but less costly and more timely.

2. Broken Market

Concern: This situation may be an illustration of the untenable nature of a market that would require vendors to pay for expensive testing and certification efforts, and to also have to forego revenue when otherwise for-pay upgrades are required because of defects in software.

Update: By working with the vendor and their test lab on both the earlier and current versions of the product, all customers will be able to obtain a no-fee update of their existing product version, rather than being required to do a for-fee upgrade to a later product version. Therefore, the "who pays for the upgrade?" question applies only to those customers who actually want to pay for the latest version.

Next Steps: Thanks to the EAC's new process of publishing timelines for all product evaluation versions, it should be possible to compare the timeframe for the original 3.2.0.0 testing, the more recent 3.2.1.0 testing, and the testing of the bug-fixed version of 3.2.0.0. We can hope that this case demonstrates that a re-certification process can indeed be equally rigorous, less costly, and more timely.

ES&S Evaluation Timeline

3. Lengthy Testing and Certification

Concern: The whole certification testing process costs millions and takes years for these huge voting system products of several components and dozens of modules of software. How could a re-test really work at a tiny fraction of a fraction of that time and cost?

Update: Again, thanks to publishing those timelines, and with experience of recent certification tests, we can see the progress that EAC is making towards their goal that an end-to-end testing campaign of a system take less than 9 months and a million dollars, perhaps even a quarter or a third less. The key, of course, is that a system be ready for testing. As we've seen with some of the older systems that simply weren't designed to meet current standards, and weren't engineered with a rigorous and documented QA process that could be disclosed to a test lab to build on, well, it can be a lengthy process -- or even one that a vendor withdraws from in order to go back and do some re-engineering before trying again.

Next Steps: A key part of this time/cost reduction is EAC's guidance to vendors on readiness for testing. That guidance is another relatively recent improvement by EAC. We can hope for some public information in future about how the readiness assessment has worked, and how it helped a test process get started right. But even better, EAC folks have again repeated a further goal for time/cost reduction, by moving to voting system component certification, rather than certifying the whole enchilada - or perhaps I should say, a whole enchilada, rather than the whole plato gordo of enchiladas, quesadillas, and chile relleno, together with the EMS for it all with its many parts - rice, frijoles, pico de gallo, fresh guacamole ... (I detect an over-indulgence in metaphor here!)

One More Thing: As we've said before, we think that component-level testing and re-testing is the Big Step to get the whole certification scheme into a shape that really serves its real customers - election officials and the voting public. And we're proud to jump in and try it out ourselves -- work with EAC, use the readiness assessment ourselves, do a pilot test cycle, and see what can be learned about how that Big Step might actually work in the future.

-- EJS


Introducing the TrustTheVote Tabulator

Taking a break from news and commentary on election operations issues, I thought it would be an appropriate time to talk about current TrustTheVote project efforts that are very relevant to the activities of many election officials right now: tabulating election results, as part of the process of certifying results and finishing the operations for the 2 Nov. election. First of all, what is "tabulation"? The word can mean several things in the election context, but we use it to describe the final step of gathering vote data. At the end of election day, there are a number of counting devices that have counted votes on ballots, for example, optical scan counting machines that recorded votes from paper ballots, or DREs that directly recorded votes cast on a computer. Each of these devices has a blob of data that includes what some call "tallies" -- that is, for each ballot candidate or referendum choice, the total number of votes cast for that choice on that counting device. (Confusingly, some vendors call these counting machines "Tabulators" as well.)

"Tabulation" is the process of aggregating all that tally data, checking it for completeness and correctness, andcombining it into a larger tally of all votes cast in the election jurisdiction. In some cases, like a county commissioner contest, that combined tally represents the election results, the total number of votes for each candidate in the contest. In state and Federal contests and questions, the jurisdictional tally is only part of a total, e.g. one county's totals for a state or Federal Senate contest, that is then used by state level officials when they aggregate several count totals to create state-level election results.

With that in mind, the next question is: how is tabulation performed? Well, in the TTV system that we're building, tabulation is done by a dedicated system called (boringly enough) the "Tabulator" that does nothing but tabulation. However, in the proprietary voting systems in use today, there is no Tabulator per se. Tabulation is done as one of many functions of a larger body of software. Here are two examples.

  • For some of the older DRE systems, one DRE can serve as an "aggregator" for a group of DREs. Election officials remove magnetic storage media from each DRE, to get the tally data out of the DRE; then one by one they insert the media into the aggregator, which combines all the tally data and writes it out to another storage media. Not a bad idea, aggregation, but not such a good idea for the aggregator to be one function of a device whose main job in life is sensing voters' fingers on a touch screen.
  • For more recent voting systems, tabulation is one among many functions of a voting system component that is sometimes called the "election management system" or EMS. One definition of an EMS is: a software system that implements every voting system function that isn't done by a separate voting machine. Tabulation is just one of those functions, along with keeping a database of precincts, districts, splits, polling places, counting devices, candidates, contests, questions, ballot contest order, ballot candidate order, candidate rotations, ballot configurations, actual ballot documents, and of course lots of software and user interface for managing all of this election management data and generating reports from it.

Why is this a bad idea? Tabulation is among the most important functions of a voting system, and if the tabulation software doesn't work right, then the election results can be wrong. Lumping tabulation software in with a bunch of other related software means that errors elsewhere in the lump can affect tabulation; and every time the software in the lump is changed, it could affect the tabulation software even if the tabulation software hasn't changed. And worse, in today's proprietary closed system, it is just not feasible to know whether tabulation software has been affected by any other element of the total system.

And even worse, tabulation is so simple and so important that the software for it should rarely need modification at all! And further still, the larger and more complex you make a lump of software, the greater the odds that a bug somewhere in the lump (and all software has bugs) will affect the correct operation of one or more parts of the lump. In software engineering-speak, large monolithic systems, with code changes accreted over years, are much more likely to fail than a composite system of separate modules, each of which is largely insulated from changes to and errors in other modules.

Introducing the TTV Tabulator: a completely separate, hardened, single-purpose computing device for doing nothing but tabulation, and doing it in a completely transparent way where people can feasibly check on the correct operation of the software. We're at work now on getting the Tabulator ready for outside testing, so the timing of this introduction is doubly timely. In future breaks from other blog streams, I'll be saying more about this deceptively simple computing device, how it works, why it works that way, and how it delivers on the needed transparency and accountability.

-- EJS


NY Times: Hanging Chad in New York?

NYT reported on the continuing counting in some New York elections, with the control of the NY state house (and hence redistricting) hanging in the balance. The article is mostly apt, but the reference to "hanging chad" is not quite right. FL 2000's hanging chad drama was mainly about the ridiculous extreme that FL went to in trying to regularize the hand re-counting rules for paper ballots, while each time a ballot was touched, the rule could change because the chad moved. In this NY election, the problem is not a re-count, but a slow count; not problems with the paper ballots per se, but with the opscan counting system; and not a fundamental problem with the ballot counting method, but several problems arising from poll-worker and election officials' unfamiliarity with the system, being used for the first time in this election. Best quote:

Reginald A. LaFayette, the Democratic chairman of the Board of Elections in Westchester, said older poll workers had trouble reading the vote tallies printed out by the machines. “You take the average age of an inspector, it’s maybe about 65, and so you put a new product out with them, and the change has been overwhelming to some of them,” he said.

It's another example of the situation I recently described in North Carolina. These voting systems were built for time-to-market, rather than designed, engineered, and tested for usability and reliability -- much less designed for simplicity of the process of combining tallies into election results.

The recent experience in New York is nothing truly new - but rather an example of the usability issues manifested in an election organization that, unlike those elsewhere using similar voting system products, has not yet learned by experience how to use these quirky systems with greater speed and transparency than the first time around. Of course, it is a shame that this type of learning-by-doing in real elections is necessary at all, to get to a reasonably transparent election process. But that's what the vendors served up to the market, and that's what TTV is working to improve on.

-- EJS


San Francisco Voting Task Force Public Hearing

Tomorrow night starting at 4:30PM the San Francisco Voting Systems Task Force is holding a Public Hearing to take testimony and public comment on its draft of prospective recommendation topics.  [Disclosure: I am a member of this Task Force, appointed by the S.F. City & County Board of Supervisors.] We encourage everyone who can make it to attend and give us your input on these draft proposed recommendations.  This is an early stage document and does not represent any final recommendations of the VSTF.  The Agenda and description can be found here.  The location of the meeting is:

1 Dr. Carlton B. Goodlett Place, Room 34 (Lower Level), San Francisco, California

If you can't make it in person, no worries, as we're accepting written input through the 24th of February, which you can submit digitally if you wish to: voting.systems.task.force@sfgov.org or by U.S. Mail (address details on site here).

For those interested in some details: I submitted a letter to the Task Force Chair with some comments of my own on our draft "recommendations under consideration" document, and you may wish to have a look at them here.

Cheers GAM|out


To Trust or Not to Trust, That is the Question

I thought I'd share a comment and response I got about trusting software to count votes. The comment was a very sensible one, though a mis-perception: that TTV is suggesting that software should be trusted to count votes correctly. Not so! Here is the true but less simple story.

  • Many election officials want to conduct elections with paper ballots.
  • Most of those election officials want to count paper ballots using optical scanning and analysis software.
  • Most of those election officials conduct statistical audits, in order to mitigate the risk that the tabulation software malfunctioned in a way that could have affected the election result.

In other words, the latter group of election officials don't trust the software to do the vote counting right, and use selective hand-count audit in addition to software counting.

  • TTV development of scanning/tabulation software does not depend on the election officials' choices on how to conduct audits as part of an optical scan/tabulation method.

In other words, we don't make any assumptions about whether or how people trust software, and what additional non-technological steps they take to mitigate risk. To repeat what you may have heard me say before, we just make the technology; we don't tell the administrators how to deal with the risks that they manage, but we do listen to them to make sure that we're making technology that they can manage in the way that they want to. If their audit scheme can be improved by new features of the software, then we want to learn enough to provide features that are truly helpful.

-- EJS


Sequoia Announces Published Source Code

Sequoia Voting Systems announced today that they will be moving towards a disclosed-source model in which they will soon begin publishing their source code. I must say that the tone and language of the press release is gratifying, especially that they thought to say that the product is also open-data, which is critical for the goal of transparency of operation of a voting system. But perhaps the most satisfying is the about-face on security by obscurity. Sequoia's VP of R&D, Eric D. Coomer, PhD, was quoted:

Security through obfuscation and secrecy is not security. Fully disclosed source code is the path to true transparency and confidence in the voting process for all involved.

I couldn't agree more! Even though the product is still proprietary (disclosed-source not open-source), it's nice to see a vendor come around to the idea that open is not weak, and indeed to have taken the leap to do R&D to make a product that they say was intended from the beginning to be disclosed.

-- EJS


Tales From Real Life: Testing

Another in our series of real life stories ... how it actually works for real election officials to test a new voting system that they might be adopting for use in the state. The backplot is that New York State has been unwilling to give up its admittedly no-longer-legal* lever machines, until the state Board of Elections is confident that they have a replacement that not only meets Federal standards, but also is reliable and meets requirements similar to those met by the old lever machines. There have been several setbacks in the adoption process, but the latest phase is some detailed testing of the candidate systems. (For the real voting tech geeks, what's being tested is a hybrid of the Dominion ImageCast scanner/Ballot Marking Device and the ES&S DS200 scanner with the Automark BMD.)

Bo Lipari is on the Advisory Committee for this process, and has reported in detail on the testing process. You don't have to be a complete election geek to scan Bo's tales, and be impressed with the level and breadth of diligence, and the kinds of kinks and hiccups that can occur. And of course the reportage is very helpful to us, as a concrete example of what kind of real life testing is needed for any new voting system, open or closed, to be accepted for use in U.S. elections.

-- EJS

* No-longer-legal means that NY state law was changed to require replacement of lever machines. In the initial release of this note I erroneously said that the replacement requirement was driven by HAVA. Thanks again to alert readers (see comments below) for the catch, and for providing many resources on the vexed question of "HAVA compliant" generally and lever machines specifically.


The Future of Voting Systems in Los Angeles County

This past week I was privileged to be invited to an engaging and very informative event hosted by the Caltech/MIT Voting Technology Project on Caltech's Pasadena campus.  Turns out that L.A. County is in the early stages of figuring out "where to from here" for their next generation elections systems technology, and this event was the launch of "VSAP," their Voting Systems Assessment Project.  And they cleverly* asked the Caltech/MIT VTP to assist them in this process, framing their assessment and search in terms of "Technology, Diversity, and Democracy."

My Take Away: With all due respect to the innovative thinking stirring in States working with the TrustTheVote Project, such as North Dakota, New York, New Hampshire, Oregon or Washington, to name a few, Los Angeles County stands to become the benchmark for what can be done, if for no other reasons than:

  1. they are far and away the largest voting district in the nation,
  2. they have unspent HAVA funds and CA bond measure proceeds they must invest in voting and elections technology improvements (or run the risk -- however remote, but politically disastrous -- of losing these appropriations), and
  3. they are acting in a manner I see as impressively innovative.

So, let me share why I believe this, what I learned last Wednesday, and how I see this impacting the work of the TrustTheVote Project (and vice-versa).

The Tail Wagging the Dog

Los Angeles County is the largest and most diverse election jurisdiction in the nation, serving more than four million registered voters across a wide range of races, ethnicities, national origins, age groups, and socio-economic statuses. The sheer physical size of their jurisdiction is impressive, covering over 4,080 square miles, encompassing 88 cities and over 500 political subdivisions, with 4,883 precincts and 4,394 polling locations supporting the casting of over 3.3M ballots in six languages in the last general election cycle.  That's ridiculous in size and complexity.

I refer to this as the tail wagging the dog (still a funny visual), because while the State of CA is the largest state of the union and one of the most significant global economic powers in its own right, it is LA County that represents the single largest elections jurisdiction of the State and the nation.  This is, from what I could ascertain, a significant point because I believe LA County is dead-set on exercising forethought, visionary leadership, and setting the bar for not only election systems complexity, but possibly excellence in choice, implementation, and operation.  Ultimately, LA County may be the standard (and trend) setter, and I am certain they will be a significant influence on the work of the TrustTheVote Project.

Show Us the Money

And we all know how money talks.  There is a -- let's just say "non-trivial" --  amount of financial resource at their disposal.  And this isn't a matter of indiscriminate spending in harsh economic times.  No, these are previously allocated dollars courtesy of State and Federal programs directed at specifically upgrading and improving elections systems and processes.  And so one can expect the herds rushing to the trough in hopes of relieving LA County of some of that purse.  And that's where this could get interesting.

The elections & voting systems industry (if we can call it that) is a wreck; consolidation continues, there are essentially 2 vendors left and very possibly there will be only one remaining by next year.  Frankly, I would be shocked if ultimately LA County chose yet another legacy vendor's monolithic solution of yesteryear technology with draconian service agreement commitments as their "next generation."    And there is no love-loss on their current ES&S InkaVote Plus system.   Moreover, voting systems certification remains a confusing hurdle.   All indications from this event are that decision makers are finished with any notion of proprietary or black box solutions.

Yet, there is no doubt, none at all, that the complexities, scale, and integration requirements for LA County will be too difficult and rich a prospect for a start-up, some well intentioned fly-by-night project, or advocates' wishful thinking about an opportunity to bring in wholesale revolution to a solution, born of either an academic, philanthropic or entrepreneurial vision.

However, the work of the TrustTheVote Project, Caltech/MIT VTP, and other efforts will certainly have a role to play in assisting LA County in its assessment, prototyping, and ultimate selection of a new platform or (more likely) components thereof.

But the financial wherewithal means one more important thing: the opportunity to do things carefully and correctly.  That leads me to point three.

Evolve -- Immediately

LA County officials understand that innovation is the ability to see change as an opportunity and not a threat.  While Ken Bennett, L.A. County's IT Director in charge of elections systems, made a compelling case that any upgrades or improvements must be evolutionary in order to protect operational continuity, Dean Logan, L.A. County Registrar, set an imperative tone about the importance of taking this perhaps once-in-a-generation opportunity to make every effort to be as innovative as possible, and with little delay given their financial and operational mandates.

And with the remainder of this post I want to speak a little to how it appears that's going to happen in L.A. County.

The Registrar-Recorder/County Clerk for L.A. County used the event as a launch pad for an ambitious and unprecedented Voting Systems Assessment Project (VSAP) to determine the current and future needs to be addressed through the modernization of the County's voting system.

L.A. County's approach is a marked departure and new arrival in the effort to improve a jurisdiction's elections and voting systems technology.  For myself, I find it a stark contrast to the approach taken by the City and County of San Francisco, where I am seated as a member of their "San Francisco Voting Systems Task Force."

L.A. County immediately sought out the Caltech/MIT VTP to facilitate a process, which I will explain in further detail in a follow-up post, and side-stepped (for now) the incredible bureaucratic overhead of a formal Board of Supervisors-empowered Task Force.

While the City of San Francisco has good reasons and laudable goals for their far more formal approach, the downside is that the very regulations (1953 Brown Act and Sunshine Ordinances) put in place to ensure transparency were created in the Industrial Age, with IMHO arguably Agrarian Age thinking, and now are actually stifling the potential transparency, agility, and capabilities of the Digital Age.

Bottom line: the SF-VSTF has spent three months essentially organizing itself due to the highly restrictive nature of the regulations that inhibit, if not outright prohibit, any communications -- even for organizational purposes -- between the Task Force Members (including, notably, eMail) if the number of recipients of those communications constitutes what would be construed as a quorum.

And honestly, L.A. County accomplished more in a single day of 6 hours last week than we've done in 8 hours worth of meetings across 3 months at the S.F. Voting Systems Task Force.  Ouch.

The result for the SF-VSTF: a highly lethargic process that although intended to ensure transparency to the processes, is actually not as transparent as possible in this era of social media.  Although a Twitter account exists for the SF-VSTF, it has remained silent.  And talks of Wikis, Blogs, or public online repositories have been all but shut down at mention.  The City Attorney's argument is that not everyone has online access and this approach would aggravate a digital divide.  Maybe so.  Maybe so.  But I have to believe there are ways to meet the Sunshine needs of those few remaining citizens with no way to reach a web browser, while leveraging the power and capability of the Digital Age to empower San Francisco to advance their imperative agenda.

Enough.  I'm writing about the Future of L.A. County Voting Systems.

So, contrast this (S.F. County efforts) to L.A. County.  The VSAP seeks to establish a new participatory approach that initiates the process with public input to ensure the "people" element is well balanced with those of the "technology" and "regulatory" elements.  And how are they doing it?  With symposiums, like the one they held last week, for sure.  And through focus groups.  And through citizens' committees to gather and ingest this input.

And perhaps most importantly (as explained to me by one of their officials): they will use every appropriate aspect of the Internet and digital media to advance their efforts, engage the public, and ensure the widest access to their work and research of others -- globally.

And that just makes such sense -- especially if you're going to lead in the digital age.

And while L.A. County's approach (and my volunteer efforts there) invigorates both my sense of the importance of what we're trying to do on the SF-VSTF and the work of the TrustTheVote Project with several States and jurisdictions, the L.A. County effort also frustrates me in witnessing how the very ordinances designed to ensure transparency of process are likely going to stymie the best intentions of the San Francisco City & County Voting Systems Task Force.

I campaigned for and earned a seat on the SF VSTF with visions of San Francisco -- in the heart of the world's leading technology center -- leading the digital democracy and "we.gov" movement because of the opportunity to leverage the very best that social media, technology, and the Internet can provide to large-scale public collaborative efforts to invigorate the modernization of its elections and voting systems.  Well, for San Francisco, maybe not so much after all.

Perhaps at some point, someone with the wherewithal to modernize the Brown Act and related Sunshine Ordinances, will do so by realizing (as LA County has) that innovation is the ability to see change as an opportunity and not a threat.

In the mean time, here is to the real leader in California.  Hail to the vision, determination to innovate, and thought leadership of the Los Angeles County Registrar-Recorder.  They are, after all, the largest voting jurisdiction in the nation; if their challenges can be met, they will be the de-facto benchmark for all other jurisdictions.

So, somewhat unexpectedly, innovation and leadership in modernizing elections technology may not emanate from the Silicon Valley, but from Southern California instead.

That observed, I still believe there is learning to be had, that this is (bear with me) a "teachable moment" for the SF-VSTF, and we would do well in San Francisco to track L.A. County's progress.

Meanwhile, for the work of the TrustTheVote Project here, I see enormous synergies and opportunities to assist L.A. County.  They are thinking about technology transparency; they are considering how to evolve (but quickly); they are leaving nothing off the table; and they are interested in exploring, examining, and studying.  They understand the importance of prototyping, the process of design for usability, the imperative of design for accessibility, and making damn certain they make the best informed decisions possible.  And they know they need to leverage the power of the Internet, social media, and the digital age to do all of this.

In my next post I'll detail how L.A. County is proceeding with VSAP and offer some more about how the TrustTheVote Project will likely be of high value to them.

GAM|out

______________

* I wrote "cleverly" above because L.A. County might well have taken S.F. County's formal approach to creating a Task Force, but in casual conversation with LA County officials and folks at Caltech/MIT VTP, what I learned was they made a conscious decision to avoid formalization at this juncture.  And in fact they wished to avoid the very bureaucratic complexities that would be wrought by the formality of a sanctioned Task Force.  Instead, they creatively reached out to the Caltech/MIT VTP and asked for their assistance in producing the Symposium, holding it on their Campus, and enabling the Registrar-Recorder to move very quickly in an agile fashion.  This was not -- they stressed -- an effort to avoid public participation or side-step government processes to ensure transparency, but rather to jump-start a process and use it as an information-gathering vehicle.  Then, they will utilize citizen committees to advance the important efforts of public input.  I call that clever.


Voting Machine Monopoly?

It looks like the largest U.S. voting system company will acquire the second-largest, creating a potential monopolist controlling about three quarters of the market nationally, and 100% in some regions. I could explain why that might seem like a bad idea to many people, but the New York Times' The Business of Voting Machines already said it better than I. Likewise, one of our election integrity colleagues, Rob Richie of FairVote, has already explained some of the details and implications in the HuffPost's Diebold's End: Consolidation of Largest Voting Companies Shows Need to Reform Elections. What I'd like to point out instead is how the pricing of the deal shows that there is in fact no real market for voting systems in the U.S. -- not in the typical entrepreneurial sense of a "healthy market" in which players with superior value can effectively compete. This deal basically shows that Diebold's Premier Election Systems, Inc. (PESI) is essentially worthless. The nominal price tag on the deal is $3 million, but ES&S is paying that small sum only on the condition that Diebold retain some of PESI's liabilities.

So here is a story that should convince you -- if you were seriously thinking of becoming a new vendor of voting systems in the U.S. -- not to bother. Diebold/PESI was a new vendor that now, after 6 years of hard work and obtaining about a quarter market share, finds that the company's value is essentially nil, and is further bedeviled by ongoing legal wrangles with its customers. And that new vendor started with the great opportunity of states awash in Federal funds from HAVA to buy new voting systems! Today, customer budgets are tighter, certification costs are higher, there are only a handful of deals on the table for new revenue on new products, and ongoing customer contracts require continuing service and support for older products.

That story tells me the U.S. market for voting systems is essentially broken. It's still true that in that dysfunctional market, there is a basic conflict between making money and serving the public interest in elections. But now we can see that in selling voting system products in the U.S., it isn't even possible to make money! The largest remaining vendor may retain a healthy U.S. business, but I suspect it will be based on the strengthened ability to structure some profitable service and support contracts as the largest player -- and in some cases the only player.

Lastly, why do I keep saying "U.S. market"? Because Diebold is staying in the elections business in Brazil, and ES&S has plenty of business overseas as well -- mostly in countries that unlike the U.S., have elections run by the national government. Back here in the 50 states and thousands of elections boards, there is still plenty of work to do, to figure out how to effectively deliver the needed election and voting technology to the many government organizations that need it. What we do know now is that "the market" has not done so and likely will not in the future.

-- EJS


Can We Really Detect Flakey Voting Machines?

That's a catchy blog headline, I hope, or at least an important issue. But I've fooled you, because while answering the question, I am going to discuss "audit" again. I wrote earlier that one kind of audit is performed by election officials to detect errors in voting machines, or to put it another way, to ensure that election results weren't garbled by the computers used to create them. That sounds like a good thing to detect, and ensure, but how can we understand whether the detection is effective? Today's post is the beginning of an answer to that question. And it's a very relevant question, because we know from last year's experience in Humboldt County CA that malfunctions do occur. In fact, with just the right bad luck in the locales affected, perhaps only half a dozen Humboldt-sized, Humboldt-style glitches would have been required to swing MN's close Coleman-Franken race.  And recall that each county has hundreds of opportunities for such a glitch! Five or ten malfunctions per thousand, across a medium-sized state, may not sound like a lot, but it's enough to swing a major contest every few years.

To take a specific example, let's look at the voting method of paper ballots, counted by machine partly in polling places and partly in a central facility. (Similar issues apply to other voting methods including those using touch-screens or other direct-record devices.) One audit procedure is essentially a hand-count "spot check" or partial "re-do" of the machine count. Precincts are randomly selected to get a set of precincts with enough combined ballots to exceed some threshold percentage of the vote, say 1%. Then each of these precincts' ballots are re-counted, for each contest, and the hand-count results compared to the machine count. There are often small variances -- different interpretations by people and software -- and these are scrutinized and documented to ensure that they are in fact borderline interpretation cases or due to some other procedural, non-technical issue. Any substantial variation would be a sign of some potential machine malfunction, and would trigger further hand counts until the rules for the audit process are complete, or a full re-count is triggered by the audit procedure rules.
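
A back-of-envelope sketch of that selection-and-compare procedure (precinct sizes here are random made-up numbers; real audit rules are set in state law and regulation):

```python
import random

# Fabricated precinct data: 100 precincts of 300-900 ballots each.
precincts = {f"P{i}": random.randint(300, 900) for i in range(100)}
total_ballots = sum(precincts.values())
threshold = 0.01 * total_ballots          # flat 1% audit

# Randomly select precincts until their combined ballots exceed the threshold.
selected, covered = [], 0
for name in random.sample(list(precincts), len(precincts)):
    if covered >= threshold:
        break
    selected.append(name)
    covered += precincts[name]

def matches(machine_count, hand_count):
    """Any substantial variance would trigger escalation under the audit rules."""
    return machine_count == hand_count

print(f"Auditing {len(selected)} precincts covering {covered} of {total_ballots} ballots")
```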

Fair enough, but in the typical case where 1% of a county's paper ballots have been audited with no errors detected, what do we actually know? How confident can we be that the remaining unaudited ballots were correctly machine-counted? What if a race is pretty darn close, say a 2% margin, but not so close as to trigger a recount; if 1% of ballots were audited, what can we expect about the other 99% of ballots, and the chance that machine counting errors might change the election result?
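
Here is one way to put a number on those questions, with entirely hypothetical figures: suppose a county has 1,000 precincts, five of them were miscounted, and the flat-percentage audit randomly samples 10 of them (1%). The chance that the audit touches even one bad precinct is small:

```python
from math import comb

def detection_probability(total=1000, bad=5, audited=10):
    """Probability that a simple random sample of `audited` precincts
    hits at least one of the `bad` (miscounted) precincts."""
    p_miss_all = comb(total - bad, audited) / comb(total, audited)
    return 1 - p_miss_all

print(round(detection_probability(), 3))  # about 0.049 -- roughly a 1-in-20 chance
```

Numbers like these (again, made up for illustration) are why a flat percentage alone doesn't tell you much about your confidence in the outcome.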

Yes, I started with a general question, and answered it with some more specific questions. But at least I didn't bore you with too much more of the A-word. Coming soon, another post that answers the questions remaining from today, by explaining in simple terms what a "risk limiting audit" is, how it is different from the flat-percentage audit discussed today, and, finally, how you can tell for any election you want, whether the election officials were able to test whether election results were garbled by the computers used to create them.

-- EJS


Twisted Logic: How Ballots Get Counted in the Real World

Today I'm going to give a flavor of the pretzel logic that applies to the way ballots are counted in the U.S. An alternative title for this post might be "Welcome to the real world of Federal Democracy" because several states have their own different pretzel. You can have 2 marked ballots, each in a different state, but very very similar; in one case the ballot is completely kosher, while in the other place some votes won't count. The reason, of course, is variations in states' election law and regulation, and in local jurisdictions' practices in applying the law and regs.

Probably the classic case, or perhaps the most infamous, is the "straight-party vote". This is a voting method available in some states, where the ballot design contains a "convenience" (exercise for the reader: who it's convenient for, and who benefits) for filling one bubble in order to indicate a vote for several candidates -- all the candidates for a single political party. However, when a voter marks a straight-party bubble on a ballot, they are not finished! In most cases, there are non-partisan elections as well (city council, school board, water district, ...) and ballot measures or referenda. To complete the ballot, a voter must make a mark for these other items on the ballot. Now, the straight-party voting option might be convenient, but it also raises the question of interpretation of subsequent "unnecessary" marks for candidates in partisan offices. These may be construed as meaningful to the voter -- so-called "emphasis votes" or "over-ride votes" -- or as an accident, mistaking a partisan election for a non-partisan one that is not covered by the straight-party option.

Let's look at some of the cases, in a hypothetical election where:

  • the top of the ticket is the U.S. Presidential election including candidates
    • A. Beaucoup of the Peace-and-Freedom party, and
    • B. Yovon of the Conservative-Independent party;
  • there are other Federal and state offices with partisan elections, including a state assembly contest in the middle of the ballot, with candidates
    • C. Bonichose for Peace-and-Freedom, and
    • D. Yamhill for Conservative-Independent.

Now, suppose a voter marks a ballot this way:

  1. She selects the Conservative-Independent option for a straight-party vote.
  2. It seems odd to her, though, to just leave blank the ballot item for U.S. President. Just to make sure her vote counts, she also makes a selection for Yovon for President (even though she has already made a vote for Yovon, by doing the straight-party vote).
  3. The voter then turns over the page of the 2-sided ballot, and makes a mark to select Bonichose for state assembly, in an attempt to "over-ride" the straight-party vote in this one case where the voter does not favor the Conservative-Independent party candidate.

Step 2 is an example of a so-called "emphasis vote" that is not uncommon for top-of-the-ticket contests in "big elections." Step 3 is an example of an "over-ride vote" that can also be interpreted as an oversight where the voter didn't notice the party of the candidates.

But what does this ballot mean? Cases like this require rules for human and machine interpretation of these marks as valid votes, or overvotes. Both an "emphasis" and an "over-ride" vote could be construed as an over-vote, a case where the voter voted in a race once, by straight-party, and again in an individual race. This might seem odd, given that in the "emphasis" case, both votes were for the same candidate! And in the "over-ride" case, some might view the vote as quite meaningful. But the meaning is (or should be) established in election law, which is specific to each state; among states with straight-party voting, what may be valid in one state is an over-vote in another, i.e., the emphasis or over-ride vote invalidates the voter's selection for the contest, and no vote for that contest should be recorded for that ballot.
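
To see how the same marks can be recorded differently, here is a toy sketch of contest interpretation under a few simplified rule variants (the rule names and behaviors are our own hypotheticals, not any particular state's election code):

```python
def interpret_contest(party_candidate, explicit_mark, rule):
    """party_candidate: whom the straight-party mark implies for this contest.
    explicit_mark: an individual mark the voter also made, or None.
    Returns the recorded vote, or None if the contest counts as an overvote."""
    if explicit_mark is None:
        return party_candidate
    if rule == "explicit_mark_overrides":
        return explicit_mark                 # "over-ride vote" honored
    if rule == "treat_as_overvote":
        return None                          # any second mark invalidates the contest
    if rule == "same_candidate_ok":
        return explicit_mark if explicit_mark == party_candidate else None
    raise ValueError(f"unknown rule: {rule}")

# The voter's "emphasis vote" (step 2) and "over-ride vote" (step 3) above:
print(interpret_contest("Yovon", "Yovon", "same_candidate_ok"))              # Yovon
print(interpret_contest("Yamhill", "Bonichose", "explicit_mark_overrides"))  # Bonichose
print(interpret_contest("Yamhill", "Bonichose", "treat_as_overvote"))        # None (no vote recorded)
```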

So what we really have, from the point of view of voting system software requirements, is a crazy-quilt of state rules. What is a voting system developer to do? That's another twisted logic story for another day.

-- EJS


New Voting System Vendors Enter the Certification Fray

Here are a couple of interesting news tidbits to ponder today, showing the breadth and depth of openness to changes to current U.S. voting methods. First, some news from the EAC, the part of the Federal government that runs the program for Federal certification of voting systems -- certification that in many states is effectively a prerequisite for legal use of a voting system in the state. The EAC announced that there are 2 new vendors who have joined the certification program -- both of them vendors of products that are often called "Internet voting systems".

It's very much worth noting that this does not mean that Scytl and EveryoneCounts have certified products! Far from it, though no doubt some imprecise language might be misleading on that point. It just means that the companies have qualified with EAC, and may at some point choose to engage with an EAC-certified test lab to evaluate their products. But it is still interesting that this announcement gives the appearance that Internet voting systems might someday be legal for use in the U.S.

Second, some news from Wisconsin about that state's 5-year mission to figure out what would be a better way to run voting in WI, with pretty much all options on the table to investigate, including a switch solely to paper ballots like MN, all vote-by-mail like OR, and even Internet voting as in a few local election organizations in WA and HI.

It's gratifying to see how serious people are about the fact that current election approaches need serious improvement, and the improvements have to be undertaken carefully and thoughtfully.

-- EJS

Arizona: a New Definition of "Sufficiently" Mis-Counted?

There's a fascinating nugget inside of a fine legal story unfolding in Arizona. I know that not all our readers are thrilled by news of court cases related to election law and election technology, so I'll summarize the legal story in brief, and then get to the nugget. The Arizona Court of Appeals has been working on a case that considers this interesting mix:

  • The State's constitutional right of free and fair elections;
  • The recognition that voting systems can mis-count votes;
  • The idea that a miscounted election fails to be fair;
  • The certification for use in AZ of voting system products that had counting errors before;
  • The argument over whether certified systems can be de-certified on constitutional grounds.

For the latest regular press news on the case, see the Arizona Daily Star's article "Appeals court OKs group's challenge to touch-screen voting."

Now let's look at what Judge Philip Hall actually said in the decision (thanks to Mark Lindeman for ferreting this out). The judge refers to a piece of AZ law, A.R.S. § 16-446(B)(6), which says: "An electronic voting system shall . . . [w]hen properly operated, record correctly and count accurately every vote cast." That "every" is a pretty strong word! Judge Hall wrote:

We conclude that Arizona’s constitutional right to a "free and equal" election is implicated when votes are not properly counted. See A.R.S. §16-446(B)(6). We further conclude that appellants may be entitled to injunctive and/or mandamus relief if they can establish that a significant number of votes cast on the Diebold or Sequoia DRE machines will not be properly recorded or counted.

As election-ologist Joe Hall pointed out, "Of course, I'm left wondering 'what is significant?' here. Sounds like a question we'll hear a lot about in the future of this case!" Indeed we will. Of course, neither AZ law nor the legal ruling provides a prescription for "significant," but note also that "significant" may be a relative concept, depending on how close a race is. (Thanks again to Mark Lindeman for the point.) We know it's pretty easy for today's voting systems to miscount modest numbers (hundreds) of votes and escape the notice of humans; and we know that contests that close will occur. Does that mean we can't use these voting systems?
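
To see why "significant" is relative, here's a back-of-the-envelope sketch; the vote margins below are invented, not drawn from the case, and the only point is that a miscount matters when it is at least as large as the victory margin.

    def could_change_outcome(margin_votes, max_miscount):
        """A miscount can only flip an outcome if it is at least as large as the margin."""
        return max_miscount >= margin_votes

    # A few hundred miscounted votes are irrelevant in a landslide...
    print(could_change_outcome(margin_votes=25_000, max_miscount=300))   # False

    # ...but decisive in a contest decided by a few dozen votes.
    print(could_change_outcome(margin_votes=40, max_miscount=300))       # True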

I guess the argument is going to continue, both on "significant" in Hall's decision, and on "properly operated" in the AZ law. And as we saw in Humboldt County and many other places, "operator error" is often in the eye of the beholder.

-- EJS

Hasty Innovation: the Kind We Don't Need

Today's posting landed in my lap in the form of a note from election tech colleague and Pitt researcher Collin Lynch, as part of a discussion about the role of the Federal government (specifically the Election Assistance Commission, or EAC) in "fostering innovation" in the market for voting systems, and ensuring a "healthy market." Well, of course, we think that there is plenty of room for improvement in voting systems, but there is a big difference between (for example) improved usability or reliability, and innovative changes to voting system administration that require election officials to change how they do their job. But Collin hit the nail on the head:

Speaking as a software developer I think the cry for "supporting innovation" comes from two mistaken impressions.

  1. The mistaken impression that voting laws should somehow be concerned with the "health of the market"; that is, that the EAC's responsibility includes not only the stability of our democracy (difficult enough as it is) but also maintaining a "healthy market" for the products of voting system vendors. This is a view that has caught on to some extent in defense spending and other areas, but in my view a market exists solely to serve some need, and artificially inflating that need, at best, tilts the market to no good end.
  2. The mistaken impression that the technology development process can be "stimulated," especially when the market has real needs for problems to be fixed, problems that appear simple and therefore should be fixed quickly. For voting systems in particular, this is not true.

These are fundamental misunderstandings of how good systems development works and how it can be made to work. In the extreme, I have seen purchasers of systems assume that programmers "can just work faster" without considering the costs to quality and stability this brings. This is a view encouraged by the (seemingly) breathless pace of .com development, which of course ignores the long lead time behind many of these overnight successes and the stability problems that result from alphas being rushed to market.

I couldn't agree more. The last thing we need in voting systems is to encourage vendors' already-existing bent for innovative bells-and-whistles features that customers (election officials) didn't ask for, and/or to push for updated systems to be developed quickly and rushed through the regulatory certification and accreditation process. And while we at the TrustTheVote Project are hardly in favor of foot-dragging, we also recognize that quality, reliability, simplicity, integrity, and many other important qualities fundamentally start with understanding what the customers need, and making sure that we build to those needs -- with the attention to quality and reliability that is needed to make it through the regulatory process as well.

-- EJS

Voting Systems: Innovation vs. Adoption?

In several startups and projects, I've seen a curious tension between innovation and adoption -- and nowhere more than in TrustTheVote's development of open election technology. Today's specific example comes from a question I received recently from someone we met at GoGaRuCo: what are we doing about fairer vote-counting algorithms (especially approval voting), about using Web technology to help with ballot usability issues, and about several other useful innovations that would be valuable changes in the U.S. election system? These and other innovations seemed to my correspondent to be valuable improvements, and in theory I had to agree. But in fact we do not have a long list of method-innovations on our tech roadmap for voting systems, much less in the development work we're doing now. Why?

Here's the curious thing about innovation and adoption: we have talked, and continue to talk, to a lot of election officials -- the people who will be making decisions about the adoption of TrustTheVote technology -- and for voting systems, innovation is really not top of mind. We ask what they want in a new generation of election technology, and what would motivate adoption. Their answers almost uniformly diverge along the same lines: voting systems (caution on innovation) vs. election technology more broadly (enthusiasm for innovation ... a story for another day). For voting systems, the message on must-haves is very clear: deliver voting system technology that correctly implements our current election methods -- reliable, open, worthy of trust -- with smaller scope for human error, and that reduces the workload on our aging, shrinking base of volunteer poll workers. I could go on from there, but the point is that innovation in this case is not a driver of adoption, but (if delivered unwisely) a potential hindrance to it. The reverse is also true: innovation in simplicity or transparency can provide some would-be-nice nudges toward adoption, if the must-haves are clearly met.

I've seen plenty of technology start-ups begin with a story interesting to potential early adopters, but with this gotcha: for the customer to go beyond proof of concept and gain substantial value, they have to change part of their existing IT practices. And in most cases, the customers were very, very interested, but not actually motivated to adopt, regardless of the innovative technology and its potential value. That's what voting systems are like, but times 100. Any innovation is clearly in the would-be-nice category at best, because what election tech adopters are hungry for is to be consulted while we're building a new system, to ensure that it actually fits what they do today and makes their existing job easier. The current vendors have, to varying degrees, delivered voting systems that they thought their customers should like, and weren't interested in making big changes when their customers found the products hard to use and found that they required changes to the way they run elections.

And you know what? Most election technology innovation has been done the same way, to date. Many people have tried their hand at open source election technology (recognizing the trust value of openness), with interesting innovations (which would address some real defects of current election methods), and with the same attitude of "I know what should be valuable, and if these government folks could see the value, they will use my stuff."

But at TTV, we're doing it the other way, boring as it may be: optimizing for adoption, to get some improvement in voting technology actually fielded and at work openly, and to start re-building trust in our computerized election systems. And believe me, I'd like TTV to deliver all sorts of cool stuff as well, like alternative voting schemes and counting algorithms, and other innovations for election officials to see, touch, and try, and to decide for themselves whether something is promising enough to put some effort into using. An open, public technology base should really help there, too. But for the near term, we're focusing mainly on only one innovation in voting systems: a system that election officials and voters could use and actually like.

-- EJS

Mixed Bag: Voting System Vendors' Rhetoric on Open Source

The current voting system vendors recently released a paper on election technology and open source. As a pleasant surprise, it is a mixed bag: much of the report's rhetoric is as specious as previously seen, but there are also signs of the vendors taking steps toward comprehending what the voting system market would be like with open source digital voting technology.
