
Money Shot: What Does a $40M Bet on Scytl Mean?

…not much we think.

Yesterday’s news that Microsoft co-founder and billionaire Paul Allen invested $40M in the Spanish election technology company Scytl is validation that elections remain a backwater of innovation in the digital age.

But it is not validation that there is a viable commercial market for voting systems of the size that typically attracts venture capitalists; the market is small and dysfunctional, and governments continue to be without budget.

And the challenge of building a user-friendly, secure online voting system that simultaneously protects the anonymity of the ballot is an interesting problem that perhaps only an investor of Mr. Allen’s stature can tackle.

We think this illuminates a larger question:

To what extent should the core technology of the most vital aspect of our Democracy be proprietary and black box, rather than publicly owned and transparent?

To us, that is a threshold public policy question, commercial investment viability issues notwithstanding.

To be sure, it is encouraging to see Vulcan Capital and a visionary like Paul Allen invest in voting technology. The challenges facing a successful elections ecosystem are complex and evolving and we will need the collective genius of the tech industry’s brightest to deliver fundamental innovation.

We at the TrustTheVote Project believe voting is a vital component of our nation’s democracy infrastructure and that American voters expect and deserve a voting experience that’s verifiable, accurate, secure and transparent. Will Scytl be the way to deliver that?

The Main Thing

The one thing that stood out to us in the various articles on the investment was Scytl’s assertion of security, backed by international patents on cryptographic protocols. We’ve been around the INFOSEC space for a long time and know a lot of really smart people in the crypto field, so we’re curious to learn more about their IP innovations. And yet that assertion is actually a red herring to us.

Here’s the main thing: transacting ballots over the public packet-switched network is not simply about security. It’s also about privacy; that is, the secrecy of the ballot. Here is an immutable maxim about the digital world of security and privacy: the two are inversely related, so as security is increased, privacy must decrease, and vice versa. Just consider any airport security experience. If you want maximum security, then you must surrender a good deal of privacy. This is the main challenge of transacting ballots across the Internet, and why that transaction is so very different from banking online or looking at your medical record.

And then there is the entire issue of infrastructure. We continue to harp on this, and still wait for a good answer. If, by their own admissions, the Department of Defense, Google, Target, and dozens of others have challenges securing their own data centers, how exactly can we be certain that a vendor on a cloud-based service model, or the in-house data center of a county or state, has any better chance of doing so? Security is an arms race. Consider today’s news about Heartbleed alone.

Oh, and please, for the sake of credibility, can the marketing machinery stop using the phrase “military grade security”? There is no such thing. Nor does security hinge on increasing key lengths, say from standard 128-bit encryption keys to 512 or 1024 bits; 128-bit keys are fine, and there is nothing military about them (other than the military using them). Here is an interesting article from some years ago on the sufficiency of current crypto and the related marketing arms race. Saying “military grade” is meaningless hype. Besides, the security issues run far beyond the transit of data between machines.

In short, there is much the public should demand to understand from anyone’s security assertions, international patents notwithstanding.  And that goes for us too.

The Bottom Line

While we laud Mr. Allen’s investment in what surely is an interesting problem, no one should think for a moment that this signals some sort of commercial viability or tremendous growth market opportunity.  Nor should anyone assume that throwing money at a problem will necessarily fix it (or deliver us from the backwaters of Government elections I.T.).  Nor should we assume that this somehow validates Scytl’s “model” for “security.”

Perhaps more importantly, while we need lots of attention, research, development and experimentation, the bottom line to us is whether the outcome should be a commercial proprietary black-box result or an open transparent publicly owned result… where the “result” as used here refers to the core technology of casting and counting ballots, and not the viable and necessary commercial business of delivering, deploying and servicing that technology.


The “VoteStream Files” A Summary

The TrustTheVote Project Core Team has been hard at work on the Alpha version of VoteStream, our election results reporting technology. They recently wrapped up a prototype phase funded by the Knight Foundation, and then forged ahead a bit to incorporate data from additional counties, provided by participating state and local election officials after the official wrap-up.

Along the way, there have been a series of postings here that together tell a story about the VoteStream prototype project. They start with a basic description of the project in Towards Standardized Election Results Data Reporting and Election Results Reload: the Time is Right. Then there was a series of posts about the project’s assumptions about data, about software (part one and part two), and about standards and converters (part one and part two).

Of course, the information wouldn't be complete without a description of the open-source software prototype itself, provided in Not Just Election Night: VoteStream.

Actually, the project was as much about data, standards, and tools as software. On the data front, there is a general introduction to a major part of the project's work in "data wrangling" in VoteStream: Data-Wrangling of Election Results Data. After that came more posts on data wrangling, quite deep in the data-head shed -- but still important, because each one is about the work required to take real election data and real election results data from disparate counties across the country, and fit it into a common data format and common online user experience. The deep data-heads can find quite a bit of detail in three postings about data wrangling: in Ramsey County, MN; in Travis County, TX; and in Los Angeles County, CA.

Today, there is a VoteStream project web site with VoteStream itself and the latest set of multi-county election results, but also with some additional explanatory material, including the election results data for each of these counties.  Of course, you can get that from the VoteStream API or data feed, but there may be some interest in the actual source data.  For more on those developments, stay tuned!


VoteStream: Data-Wrangling of Election Results Data

If you've read some of the ongoing thread about our VoteStream effort, it's been a lot about data and standards. Today is more of the same, but first with a nod that the software development is going fine, as well. We've come up with a preliminary data model, gotten real results data from Ramsey County, Minnesota, and developed most of the key features in the VoteStream prototype, using the TrustTheVote Project's Election Results Reporting Platform. I'll have plenty to say about the data-wrangling as we move through several different counties' data. But today I want to focus on a key structuring principle that works both for data and for the work that real local election officials (LEOs) do, before an election, during election night, and thereafter.

Put simply, the basic structuring principle is that the election definition comes first, and the election results come later and refer to the election definition. This principle matches the work that LEOs do, using their election management system to define each contest in an upcoming election, define each candidate, and so on. The result of that work is a data set that both serves as an election definition, and also provides the context for the election by defining the jurisdiction in which the election will be held. The jurisdiction is typically a set of electoral districts (e.g. a congressional district, or a city council seat), and a county divided into precincts, each of which votes on a specific set of contests in the election.
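The dependency direction described above can be sketched in a few lines. To be clear, the type and field names below are purely illustrative, not the project's actual data model; the point is only that results refer back to the election definition by identifier, never the reverse:

```python
from dataclasses import dataclass

# Illustrative sketch only -- not the TrustTheVote schema. The election
# definition is created first; results reference it by identifier.

@dataclass
class Contest:
    id: str
    name: str
    candidate_ids: list

@dataclass
class ElectionDefinition:
    election_id: str
    contests: dict        # contest_id -> Contest
    precinct_ids: list    # precincts voting in this election

@dataclass
class VoteCount:
    contest_id: str       # refers back to the definition
    candidate_id: str
    precinct_id: str
    count: int

def validate(results, definition):
    """Keep only results that point at a contest and precinct
    the election definition actually knows about."""
    return [r for r in results
            if r.contest_id in definition.contests
            and r.precinct_id in definition.precinct_ids]
```

A results feed that arrives before (or without) its election definition simply cannot be interpreted, which is the practical force of the principle.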

Our shorthand term for this dataset is JEDI (jurisdiction election data interchange), which is all the data about an election that an independent system would need to know. Most current voting system products have an Election Management System (EMS) product that can produce a JEDI in a proprietary format, for use in reporting, or ballot counting devices. Several states and localities have already adopted the VIP standard for publishing a similar set of information.

We've adopted the VIP format as the standard that we'll be using on the TrustTheVote Project. And we're developing a few modest extensions to it that are needed to represent a full JEDI that meets the needs of VoteStream, or really any system that consumes and displays election results. All extensions are optional and backwards compatible, and we'll be submitting them as suggestions when we think we have a full set. So far, it's pretty basic: the inclusion of geographic data that describes a precinct's boundaries, and a use of existing metadata to note whether a district is a federal, state, or local district.
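To make the idea of an optional, backwards-compatible extension concrete, here is a rough sketch of an extended precinct record. The element names are only loose approximations of VIP's XML conventions, and the two extension elements are hypothetical stand-ins for the ones described above, not the project's actual proposals:

```python
import xml.etree.ElementTree as ET

# Sketch of a VIP-style precinct record with two invented extension
# elements. A consumer that doesn't know about <boundary> or
# <district_level> can simply ignore them -- that's what makes the
# extension backwards compatible.

def precinct_element(precinct_id, name, boundary_coords=None, district_level=None):
    p = ET.Element("precinct", attrib={"id": precinct_id})
    ET.SubElement(p, "name").text = name
    if boundary_coords:  # hypothetical extension: boundary polygon
        b = ET.SubElement(p, "boundary")
        b.text = " ".join(f"{lat},{lon}" for lat, lon in boundary_coords)
    if district_level:   # hypothetical extension: "federal", "state", "local"
        ET.SubElement(p, "district_level").text = district_level
    return p

xml = ET.tostring(
    precinct_element("p324", "Precinct 324",
                     [(44.95, -93.09), (44.96, -93.08)], "local"),
    encoding="unicode")
```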

So far, this is working well, and we expect to be able to construct a VIP-standard JEDI for each county in our VoteStream project, based on the extant source data that we have. The next step, which may be a bit more hairy, is a similar standard for election results with the detailed information that we want to present via VoteStream.

-- EJS

PS: If you want to look at a small artificial JEDI, it's right here: Arden County, a fictional county that has just 3 precincts, about a dozen districts, and Nov/2012 election. It's short enough that you can page through it and get a feel for what kinds of data are required.

 


Election Results Reporting - Assumptions About Standards and Converters (concluded)

Last time, I explained how our VoteStream work depends on the 3rd of 3 assumptions: loosely, that there might be a good way to get election results data (and other related data) out of their current hiding places, and into some useful software, connected by an election data standard that encompasses results data. But what are we actually doing about it? Answer: we are building prototypes of that connection, and the lynchpin is an election data standard that can express everything about the information that VoteStream needs. We've found that the VIP format is an existing, widely adopted standard that provides a good starting point. More details on that later, but for now the key words are "converters" and "connectors". We're developing technology that proves the concept that anyone with basic data modeling and software development skills can create a connector, or data converter, that transforms election data (including but most certainly not limited to vote counts) from one of a variety of existing formats, to the format of the election data standard.

And this is the central concept to prove -- because, as we've been saying in various ways for some time, the data exists but is locked up in a variety of legacy and/or proprietary formats. These existing formats differ from one another quite a bit, and contain varying amounts of information beyond basic vote counts. There is good reason to be skeptical, and to suppose that it is a hard problem to take these different shapes and sizes of square data pegs (and pentagonal, octahedral, and many other shaped pegs!) and put them into a single round hole.

But what we're learning -- and the jury is still out, promising as our experience is so far -- is that all these existing data sets have basically similar elements that correspond to a single standard, and that it's not hard to develop prototype software that uses those correspondences to convert to a single format. We'll get a better understanding of the tricky bits as we go along, making 3 or 4 prototype converters.
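The converter concept can be illustrated with a small sketch. The column names below are invented, not taken from any county's actual export, but the pattern -- one small mapping per county, one common output shape -- is the point:

```python
import csv
import io

# Hypothetical converter sketch: each county's legacy export uses its
# own column names, but the same underlying elements (contest,
# candidate, precinct, count) map onto one common format. All column
# names here are invented for illustration.

def make_converter(column_map):
    """Build a converter for one county's column naming.
    column_map: common field name -> that county's column name."""
    def convert(csv_text):
        rows = csv.DictReader(io.StringIO(csv_text))
        return [{common: row[local] for common, local in column_map.items()}
                for row in rows]
    return convert

# Two counties, two shapes of peg, one round hole:
ramsey = make_converter({"contest": "RaceName", "candidate": "CandName",
                         "precinct": "PctCode", "votes": "Tally"})
travis = make_converter({"contest": "contest_title", "candidate": "name",
                         "precinct": "pct", "votes": "total_votes"})
```

A real converter also has to normalize names, merge split rows, and handle each format's quirks, which is where the "tricky bits" mentioned above live; the mapping itself is the easy part.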

Much of this feasibility rests on a structuring principle that we've adopted, which runs parallel to the existing data standard that we've adopted. Much more on that principle, the standard, its evolution, and so on … yet to come. As we get more experience with data-wrangling and converter-creation, there will certainly be a lot more to say.

-- EJS


Comments Prepared for Tonight's Elections Technology Roundtable

This evening at 5:00pm, members of the TrustTheVote Project have been invited to attend an elections technology roundtable discussion in advance of a public hearing in Sacramento, CA scheduled for tomorrow at 2:00pm PST on new regulations governing Voting System Certification to be contained in Division 7 of Title 2 of the California Code of Regulations. Due to the level of activity, only our CTO, John Sebes, is able to participate.

We were asked if John could be prepared to make some brief remarks regarding our view of the impact of SB-360 and its potential to catalyze innovation in voting systems.  These types of events are always dynamic and fluid, and so we decided to publish our remarks below just in advance of this meeting.

Roundtable Meeting Remarks from the OSDV Foundation | TrustTheVote Project

We appreciate an opportunity to participate in this important discussion.  We want to take about 2 minutes to comment on 3 considerations from our point of view at the TrustTheVote Project.

1. Certification

For SB-360 to succeed, we believe any effort to create a high-integrity certification process requires re-thinking how certification has been done to this point.  Current federal certification, for example, takes a monolithic approach; that is, a voting system is certified based on a complete all-inclusive single closed system model.  This is a very 20th century approach that makes assumptions about software, hardware, and systems that are out of touch with today’s dynamic technology environment, where the lifetime of commodity hardware is months.

We are collaborating with NIST on a way to update this outdated model with a "component-ized" approach; that is, a unit-level testing method, such that if a component needs to be changed, the only re-certification required would be of that discrete element, and not the entire system.  There are enormous potential benefits including lowering costs, speeding certification, and removing a bar to innovation.

We're glad to talk more about this proposed updated certification model, as it might inform any certification processes to be implemented in California.  Regardless, elections officials should consider that in order to reap the benefits of SB-360, the non-profit TrustTheVote Project believes a new certification process, component-ized as we describe it, is essential.

2. Standards

Second, there is a prerequisite for component-level certification that until recently wasn't available: common open data format standards that enable components to communicate with one another; for example, a format for a ballot counter's output of vote tally data that also serves as input to a tabulator component.  Without common data formats, elections officials have to acquire a whole integrated product suite that communicates in a proprietary manner.  With common data formats, you can mix and match; and perhaps more importantly, incrementally replace units over time, rather than doing what we like to refer to as "forklift upgrades" or "fleet replacements."
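A toy sketch of that mix-and-match idea, with an invented JSON shape standing in for a real common data format:

```python
import json

# Sketch only: a ballot-counting component from one vendor emits
# tallies in a shared format, and a tabulator component from another
# vendor consumes that same format. The JSON shape is invented for
# illustration and is not an actual standard.

def counter_output(precinct_id, tallies):
    """What a ballot-counting component might emit for one precinct."""
    return json.dumps({"precinct": precinct_id, "tallies": tallies})

def tabulate(records):
    """What an independent tabulator component might do with those
    emissions: sum per-candidate counts across precincts."""
    totals = {}
    for rec in records:
        for candidate, count in json.loads(rec)["tallies"].items():
            totals[candidate] = totals.get(candidate, 0) + count
    return totals

feeds = [counter_output("p1", {"Smith": 120, "Jones": 80}),
         counter_output("p2", {"Smith": 45, "Jones": 95})]
# tabulate(feeds) -> {"Smith": 165, "Jones": 175}
```

Because neither function knows anything about the other beyond the shared format, either component could be swapped for a different vendor's implementation without touching the other -- which is exactly the property that makes component-level certification plausible.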

The good news is the scope for ballot casting and counting is sufficiently focused to avoid distraction from the many other standards elements of the entire elections ecosystem.  And there is more goodness, because standards bodies are working on this right now, with participation by several state and local election officials, as well as vendors present today, and non-profit projects like TrustTheVote.  They deserve congratulations for reaching this imperative state of data standards détente.  It's not finished, but the effort and momentum are there.

So, elections officials should bear in mind that benefits of SB-360 also rest on the existence of common open elections data standards.

3. Commercial Revitalization

Finally, this may be the opportunity to realize a vision we have that open data standards, a new certification process, and lowered bars to innovation through open sourcing, will reinvigorate a stagnant voting technology industry.  Because the passage of SB-360 can fortify these three developments, there can (and should) be renewed commercial enthusiasm for innovation.  Such should bring about new vendors, new solutions, and new empowerment of elections officials themselves to choose how they want to raise their voting systems to a higher grade of performance, reliability, fault tolerance, and integrity.

One compelling example is the potential for commodity, commercial off-the-shelf hardware to fully meet the needs of voting and elections machinery.  On that point, let us offer an important clarification and dispel a misconception about "rolling your own."  This does not mean that elections officials are about to be left to self-vend; that is, to self-construct and support their own open, standard, commodity voting system components.  A few jurisdictions may consider it, but in the vast majority of cases, the Foundation forecasts that this will simply introduce more choice, rather than forcing officials to become do-it-yourselfers.  Some may choose to contract with a systems integrator to deploy a new system integrating commodity hardware and open source software.  Others may choose vendors who offer out-of-the-box open source solutions in pre-packaged hardware.

Choice is good: it’s an awesome self-correcting market regulator and it ensures opportunity for innovation.  To the latter point, we believe initiatives underway like STAR-vote in Travis County, TX, and the TrustTheVote Project will catalyze that innovation in an open source manner, thereby lowering costs, improving transparency, and ultimately improving the quality of what we consider critical democracy infrastructure.

In short, we think SB-360 can help inject new vitality in voting systems technology (at least in the State of California), so long as we can realize the benefits of open standards and drive the modernization of certification.

 

EDITORIAL NOTES: There was chatter earlier this Fall about the extent to which SB-360 allegedly makes unverified, non-certified voting systems a possibility in California.  We don't read SB-360 that way at all.  We encourage you to read the text of the legislation as passed into law for yourself, and start with this meeting notice digest.  In fact, to realize the kind of vision that leading jurisdictions imagine, we cannot, and should not, do away with certification, and we think charges that this is what will happen are misinformed.  We simply need to modernize how certification works to enable this kind of innovation.  We think our comments today bear that out.

Moreover, have a look at the Agenda for tomorrow's hearing on implementation of SB-360.  In sum and substance the agenda is to discuss:

  1. Establishing the specifications for voting machines, voting devices, vote tabulating devices, and any software used for each, including the programs and procedures for vote tabulating and testing. (The proposed regulations would implement, interpret and make specific Section 19205 of the California Elections Code.);
  2. Clarifying the requirements imposed by recently chaptered Senate Bill 360, Chapter 602, Statutes 2013, which amended California Elections Code Division 19 regarding the certification of voting systems; and
  3. Clarifying the newly defined voting system certification process, as prescribed in Senate Bill 360.

Finally, there has been an additional charge that SB-360 is intended to "empower" LA County, such that LA County (or someone on its behalf) will sell whatever voting system it builds to other jurisdictions.  We think this allegation is also misinformed, for two reasons: [1] assuming LA County builds its system on open source, there is a question as to what specifically would or could be offered for sale; and [2] notwithstanding that open source can technically be offered for sale, it seems to us that if such a system is built with public dollars, then it is, in fact, publicly owned.  From what we understand, a government agency cannot sell assets developed with public dollars, but it can give them away.  And indeed, this is what we've witnessed over the years in other jurisdictions.

Onward.


Not Just Election Night: VoteStream

A rose by any other name would smell as sweet, but if you want people to understand what a software package does, it needs a good name. In our Election Night Reporting System project, we've learned that it's not just about election night, and it's not just about reporting either. Even before election night, a system can convey a great deal of information about an upcoming election and the places and people that will be voting in it. To take a simple example: we've learned that in some jurisdictions, a wealth of voter registration information is available and ready to be reported alongside election results data that will start streaming in on election night from precincts and counties all over.

It's not a "system" either. The technology that we've been building can be used to build a variety of useful systems. It's better perhaps to think of it as a platform for "Election Result Reporting" systems of various kinds. Perhaps the simplest and most useful system to build on this platform is a system that election officials can load with data in a standard format, and which then publishes the aggregated data as an "election results and participation data feed". No web pages, no API, just a data feed, like the widely used (in election land) data feed technique using the Voting Information Project and their data format.

In fact, one of the recent lessons learned is that the VIP data standard is a really good candidate for a general election data standard as well, including:

  • election definitions (it is that already),
  • election results that reference an election definition (needs a little work to get there), and
  • election participation data (a modest extension to election results).

As a result (no pun intended) we're starting work on defining requirements for how to use VIP format in our prototype of the "Election Results Reporting Platform" (ERRP).

But the prototype needs to be a lot more than the ERRP software packaged into a data feed. It also needs to provide a web services API to the data, and it needs to have a web user interface for ordinary people to use. So we've decided to give the prototype a better name, which for now is "VoteStream".

Our VoteStream prototype shows how ERRP technology can be packaged to create a system that's operated by local election officials (LEOs) to publish election results -- including but not limited to publishing unofficial results data on election night, as the precincts report in. Then, later, the LEOs can expand the data beyond vote counts that say who won or lost. That timely access on election night is important, but just as important is the additional information that can be added during the work in which the total story on election results is put together -- and even more added data after the completion of that "canvass" process.

That's VoteStream. Some other, simpler ERRP-based system might be different: perhaps VoteFeed, operated by a state elections organization to collate LEOs' data and publish it to data hounds, but not to the general public and their browsers. Who knows? We don't, not yet anyhow. We're building the platform (ERRP), and building a prototype (VoteStream) of an LEO-oriented system on the platform.

The obvious next question is: what is all that additional data beyond the winner/loser numbers on election night? We're still learning the answers to that question, and will share more as we go along.

-- EJS

 


Election Results Reporting - Assumptions About Software (concluded)

Today, I'll be concluding my description of one area of assumptions in our Election Night Reporting System project -- our assumptions about software. In my last post, I said that our assumptions about software were based on two things: our assumptions about election results data (which I described previously), and the results of the previous, design-centric phase of our ENRS work. Those results consist of two seemingly disparate parts:

  1. the UX design itself, that enables people to ask ENRS questions, and
  2. a web service interface definition, that enables software to ask ENRS questions.

In case (1), the answers are web pages delivered by a web app. In case (2), the answers are data delivered via an application programming interface (API).

Exhibit A is our ENRS design website, http://design.enrs.trustthevote.org, which shows a preliminary UX design for a map-based visualization and navigation of the election results data for the November 2010 election in Travis County, Texas. The basic idea was to present a modest but useful variety of ways to slice and dice the data that would be meaningful to ordinary voters and observers of elections. The options include slicing the data at the county level, at the individual precinct level, or in between, and filtering by one of various kinds of election results, contests, or referenda. Though preliminary, the UX design was well received, and it's the basis for current work on a more complete UX that also provides features for power users (data-heads) without impacting the view for ordinary observers.

Exhibit B is the application programming interface (API), or for now just one example of it:

http://design.enrs.trustthevote.org/resources/precincts/precinct324/contests/governor

That does not look like a very exciting web page (click it now if you don't believe me!), and a full answer to "what's an API?" can wait for another day.

(Screenshot: an example ENRS API response, as rendered in the browser.)

But the point here is that the URL is a way for software to request a very specific slice through a large set of data, and get it back in a software-centric, digestible way. The URL (which you can see above in the address bar) is the question, and the answer is the page view you get back. Now, imagine something like your favorite NBA or NFL scoreboard app for your phone periodically getting updates on how your favorite candidate is doing, and alerting you much the way you get alerts about your favorite sports team. That app asks questions of ENRS, and gets answers, in exactly the way you see above, but of course it is all "under the hood" of the app's user interface.
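A scoreboard-style client could be sketched like this. The URL pattern comes from the example above, but the JSON fields in the response are assumptions for illustration, since the actual ENRS response schema isn't shown here:

```python
import json
import urllib.request

# Sketch of how a scoreboard-style app might poll the ENRS web service.
# The URL pattern is from the post; the response fields ("candidates",
# "name", "votes") are assumed, not taken from the real API.

API = "http://design.enrs.trustthevote.org/resources"

def fetch_contest(precinct, contest):
    """Ask the ENRS question encoded by one resource URL."""
    url = f"{API}/precincts/{precinct}/contests/{contest}"
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)

def leader(result):
    """Pick the current front-runner from an assumed payload of the
    form {'candidates': [{'name': ..., 'votes': ...}, ...]}."""
    return max(result["candidates"], key=lambda c: c["votes"])["name"]

# A phone app would call fetch_contest("precinct324", "governor") on a
# timer and alert the user whenever leader(...) changes.
```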

So, finally, we can re-state the software assumption of our ENRS project:

  • if one can get sufficiently rich election data, unlocked from the source, in a standard format,
  • then one can feasibly develop a lightweight modern cloud-oriented web app, including a web service, that enables election officials to both:
    • help ordinary people understand complex election results data, and
    • help independent software navigate that data, and present it to the public in many ways, far beyond the responsibilities of election officials.

We're trying to prove that assumption, by developing the software -- in our usual open source methodology of course -- in a way that (we hope) provides a model for any tech organization to similarly leverage the same data formats and APIs.

-- EJS


Election Results Reporting - Assumptions About Software

Today I'm continuing with the second of a 3-part series about what we at the TrustTheVote Project are hoping to prove in our Election Night Reporting System project.  As I wrote earlier, we have assumptions in three areas, one of which is software. I'll try to put into a nutshell a question that we're working on an answer to:

If you were able to get the raw election results data available in a wonderful format, what types of useful Apps and services could you develop?

OK, that was not exactly the shortest question, and in order to understand what "wonderful format" means, you'd have to read my previous post on Assumptions About Data. But first, maybe you'd like to take a minute to look at some of the work from our previous phase of ENRS work, where we focused on two seemingly unrelated aspects of ENRS technology:

  1. The user experience (UX) of a Web application that local election officials could provide to help ordinary folks visualize and navigate complex election results information.
  2. A web services API that would enable other folks' systems (not elections officials') to receive and use the data in a manner that's sufficiently flexible for a variety of other services, ranging from professional data mining to handy mobile apps.

They're related because the end results embodied a set of assumptions about available data.

Now we're seeing that this type of data is available, and we're trying to prove with software prototyping that many people (not just elections organizations, and not just the TrustTheVote Project) could do cool things with that data.

There's a bit more to say -- or rather, to show and tell -- that should fit in one post, so I'll conclude next time.

-- EJS

PS: Oh, there is one more small thing: we've had a bit of an "ah-ha" here in the Core Team, prodded by our peeps on the Project Outreach team.  This data, and the apps and services that can leverage it for all kinds of purposes, has uses far beyond the night of an election.  We mentioned that once before, but the ah-ha is that what we're working on is not just about election night results... it's about all kinds of election results reporting, anytime, anywhere.  And that means ENRS is really not that good a code name or acronym.  Watch as "ENRS" morphs into "E2RP" for our internal project name -- Election Results Reporting Platform.


Election Results Reporting - Assumptions about Data

In a previous post I said that our ENRS project is basically an effort to investigate a set of assumptions about how the reporting of election results can be transformed with innovations right at the source -- in the hands of the local election officials who manage the elections that create the data. One of those assumptions is that we -- and I am talking about election technologists in a broad community, not only the TrustTheVote Project -- can make election data standards that are important in five ways:

  1. Flexible to encompass data coming from a variety of elections organizations nationwide.
  2. Structured to accommodate the raw source data from a variety of legacy and/or proprietary systems, feasibly translated or converted into a standard, common data format.
  3. Able to simply express the most basic results data: how many votes each candidate received.
  4. Able to express more than just winners and losers data, but nearly all of the relevant information that election officials currently have but don't widely publish (i.e., data on participation and performance).
  5. Flexible to express detailed breakdowns of raw data, into precinct-level data views, including all the relevant information beyond winners and losers.

Hmm. It took a bunch of words to spell that out, and for everyone but election geeks it may look daunting.  To simplify, here are three important things we're doing to prove out those assumptions to some extent.

  1. We're collecting real election results data from a single election (November, 2012) from a number of different jurisdictions across the country, together with supporting information about election jurisdictions' structure, geospatial data, registration, participation, and more.
  2. We're learning about the underlying structure of this data in its native form, by collaborating with the local elections organizations that know it best.
  3. We're normalizing the data, rendering it in a standard data format, and using software to crunch that data, in order to present it in a digestible way to regular folks who aren't "data geeks."

And all of that comprises one set of assumptions we're working on; that is, we're assuming all of these activities are feasible and can bear fruit in an exploratory project.  Steady as she goes; so far, so good.
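To make the third activity -- normalizing the data into a standard format -- a bit more concrete, here is a minimal sketch in Python. The raw field names and the common output structure are hypothetical stand-ins for illustration, not the actual standard under development:

```python
# Hypothetical sketch: normalize a county's raw results rows into one
# common structure. Field names here are illustrative, not a real standard.

def normalize_county_results(raw_rows, jurisdiction):
    """Convert a county's raw tally rows into a common, sorted structure."""
    totals = {}
    for row in raw_rows:
        # Raw exports vary: some use "cand"/"votes", others "name"/"total".
        candidate = row.get("cand") or row.get("name")
        votes = int(row.get("votes", row.get("total", 0)))
        contest = row.get("contest", "unknown")
        key = (contest, candidate)
        totals[key] = totals.get(key, 0) + votes
    return [
        {
            "jurisdiction": jurisdiction,
            "contest": contest,
            "candidate": candidate,
            "votes": votes,
        }
        for (contest, candidate), votes in sorted(totals.items())
    ]

# Two different legacy export shapes, plus a second precinct for one candidate.
raw = [
    {"cand": "Smith", "votes": "1200", "contest": "Governor"},
    {"name": "Jones", "total": "980", "contest": "Governor"},
    {"cand": "Smith", "votes": "310", "contest": "Governor"},
]
print(normalize_county_results(raw, "Example County"))
```

The real work, of course, is in handling the many legacy export formats out there; the point of a common format is that everything downstream of a converter like this can be format-agnostic.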

-- EJS


Towards Standardized Election Results Data Reporting

Now that we are a ways into our "Election Night Reporting System" project, we want to start sharing some of what we are learning.  We had talked about a dedicated Wiki or some such, but our time was better spent digging into the assignment graciously supported by the Knight Foundation Prototype Fund.  Perhaps the best place to start is a summary of what we've been saying within the ENRS team about what we're trying to accomplish. First, we're toying with this silly internal project code name, "ENRS," and we don't expect it to hang around forever. Our biggest gripe is that what we're trying to do extends way beyond the night of elections, but more about that later.

Our ENRS project is based on a few assumptions, or perhaps one could say some hypotheses that we hope to prove. "Prove" is probably too strong a word. It might be better to say that we expect our assumptions will be valid, but with practical limitations that we'll discover.

The assumptions are fundamentally about three related topics:

  1. The nature and detail of election results data;
  2. The types of software and services that one could build to leverage that data for public transparency; and
  3. Perhaps most critically, the ability for data and software to interact in a standard way that could be adopted broadly.

As we go along in the project, we hope to say more about the assumptions in each of these areas.

But it is the goal of feasible broad adoption of standards that is really the most important part. There's a huge amount of latent value (in terms of transparency and accountability) to be had from aggregating and analyzing election results data nationwide. But most of that data is effectively locked up, at present, in thousands of little lockboxes of proprietary and/or legacy data formats.

It's not as though most local election officials -- the folks who are the source of election results data, as they conduct elections and the process of tallying ballots -- want to keep the data locked up, or to impede others' efforts to aggregate results data across counties and states and analyze it. Rather, most local election officials just don't have the means to "get the data out" in a way that supports such activities.

We believe that the time is right to create the technology to do just that, and enable election officials to use the technology quickly and easily. And this prototype phase of ENRS is the beginning.

Lastly, we have many people to thank, starting with Chris Barr and the Knight Foundation for its grant to support this prototype project. Further, the current work is based on a previous design phase. Our thanks to our interactive design team led by DDO, and the Travis County, TX Elections Team who provided valuable input and feedback during that earlier phase of work, without which the current project wouldn't be possible.

-- EJS


Free at Last: We Earn Our 501(c)(3) Tax Exempt Status

I am pleased to announce to our readers that the IRS has granted our 7-year-old organization full, unbridled tax exempt status under section 501(c)(3) of the Internal Revenue Code as a public charity.  This brings to a close an application review that consumed over 6 years—one of the longest for a public benefit non-profit organization.  Our Chief Development Officer, Gregory Miller, has already offered his insight this morning, but I want to offer a couple of thoughts from my viewpoint (which I know he shares). By now, you may have seen the WIRED Magazine article that was published this morning.  Others here will surely offer additional comments of their own in separate posts.  But it does set the context for my brief remarks here.

First, to be sure, this is a milestone in our existence, because the Foundation’s fundraising efforts and corresponding work on behalf of elections officials and their jurisdictions nationwide have been largely on hold since we filed our original IRS Form 1023 application back in February 2007.

The Foundation has managed to remain active through what self-funding we could afford, and through generous grants from individuals and collaborating organizations that continued to support the “TrustTheVote™ Project” despite our "pending" status.

A heartfelt "thank you" to Mitch Kapor, Heather Smith and Rock the Vote, Alec Totic, Matt Mullenweg, Pito Salas, the Gregory Miller family, and the E. John Sebes family (to name a few of those who so believed in us early on as to offer their generous support).  The same thanks goes to those who wished to remain anonymous in their support.

In addition to our being set free to move full speed ahead on our charter, I think this is interesting news for another reason: this project, which has a clear charitable cause with a compelling public benefit, was caught up in an IRS review perhaps mostly for having the wrong words in its corporate name.

Our case became entangled in the so-called “Bolo-Gate” scandal at the IRS Exempt Division.  And we unintentionally became a poster child for be-on-the-lookout ("BOLO") reviews as applied to entities involved in open source technology.

In sum and substance, our case required 6 years and 4 months for the IRS to decide.  The Service ultimately dragged us into our final administrative remedy, the "conference-of-right" we participated in last November, following their "intent to deny" letter in March of last year.  Then it took the IRS another 220 days to finally decide the case, albeit in our favor, but not before we had a] filed close to 260 pages of interrogatory responses, of which 182 were under affidavit; b] developed nearly 1,600 pages of total content; and c] run up a total bill for legal and accounting fees over those years in excess of $100,000.

We’ve definitely learned some things about how to handle a tax exempt application process for an organization trying to provide public benefit in the form of software technology, although frankly, we have no intention of, or interest in, ever preparing another.

But there is a story yet to be told about what it took for us to achieve our 501(c)(3) standing—a status that every single attorney, CPA, or tax expert who reviewed our case over the years believed we deserved.   That noted, we are very grateful to our outside tax counsel team at Caplin Drysdale led by Marc Owen, who helped us press our case.

I am also deeply relieved that we need not raise a legal defense fund, but instead can finally start turning dollars toward the real mission: developing accurate, transparent, verifiable, and more secure elections technology for public benefit rather than commercial gain.  It's not lost on us, nor should it be on you, that the money we had to pay our lawyers and accountants could have been spent advancing the substantive cause of the TrustTheVote Project.

So, now it's time to focus ahead, get to work, and raise awareness of the TrustTheVote Project and the improvements it can bring to public elections.

We're a legitimate, legally recognized 501(c)(3) tax exempt public benefit corporation.  And with that, you will begin to see marked changes in our websites and our activities.  Stay tuned.  We're still happily reeling a bit from the result, but wrapping our heads around what we need to do now that we have the designation we fought 6 years for, in order to fund the work our beneficiaries -- elections jurisdictions nationwide -- so deserve.

Please join me in acknowledging this major step and consider supporting our work going forward.  After all, now it really can be tax deductible (see your accountant and lawyer for details).

Best Regards, Christine M. Santoro Secretary, General Counsel


The 2013 Annual Elections Verification Conference Opens Tonight

If it's Wednesday, March 13th, it must be Atlanta.  And that means the opening evening reception for the Elections Verification Network's 2013 Annual Conference.  We're high on this gathering of elections officials, experts, academics, and advocates because it represents a unique interdisciplinary collaboration of technologists, policy wonks, legal experts, and even politicians, all with a common goal: trustworthy elections. The OSDV Foundation is proud to be a major sponsor of this event.  We do so because it is precisely these kinds of forums where discussions about innovation in HOW America votes take place, and it represents a rich opportunity for collaboration, debate, education, and sharing.  We always learn much, and we share our own research and development efforts as directed by our stakeholders -- those State and local elections officials who are the beneficiaries of our charitable work to bring increased accuracy, transparency, verification, and security (i.e., the 4 pillars of trustworthiness) to elections through education, research, and development for elections technology innovation.

Below are my opening remarks, to be delivered this evening or tomorrow morning at the pleasure of the Planning Committee, depending on how they slot the major sponsors' opportunities to address the attendees.  There are 3 points we want to get across in these remarks: [1] why we support the EVN; [2] why there is a growing energy around increased election verification efforts; and [3] how the EVN can drive that movement forward...

Greetings Attendees!

On behalf of the EVN Planning Committee and the Open Source Digital Voting Foundation I want to welcome everyone to the 2013 Elections Verification Network Annual Conference.  As a major conference supporter, the Planning Committee asked if I, on behalf of the OSDV Foundation, would take 3 minutes to share 3 things with you:

  • 1st, why the Foundation decided to help underwrite this Conference;
  • 2nd, why we believe there is a growing energy and excitement around election verification; and
  • 3rd, how the EVN can bring significant value to this growing movement.

So, we decided to make a major commitment to underwriting and participating in this conference for two reasons:

  1. We want to strengthen the work of this diverse group of stakeholders and do all that we can to fortify this gathering to make it the premier event of its kind; and
  2. The work of the EVN is vital to our own mission because there are 4 pillars to trustworthy elections: Accuracy, Transparency, Verification, and Security, and the goals and objectives of these four elements require enormous input from all stakeholders.  The time to raise awareness, increase visibility, and catalyze participation is now, more than ever.  Which leads to my point about the movement.

We believe the new energy and excitement being felt around election verification is due primarily to 4 developments, which, when viewed in the aggregate, illustrate an emerging movement.  Let’s consider them quickly:

  1. First, we’re witnessing an increasing number of elections officials considering “forklift upgrades” of their elections systems, which are driving public-government partnerships to explore and ideate on real innovation – the Travis County STAR Project and LA County’s VSAP come to mind as two showcase examples, which are, in turn, catalyzing downstream activities in smaller jurisdictions;
  2. The FOCE conference in CA, backed by the James Irvine Foundation was a public coming out of sorts to convene technologists, policy experts, and advocates in a collaborative fashion;
  3. The recent NIST Conferences have also raised the profile as a convener of all stakeholders in an interdisciplinary fashion; and finally,
  4. The President’s recent SOTU speech and the resulting Bauer-Ginsberg Commission arguably will provide the highest level of visibility to date on the topic of improving access to voting.  And this plays into EVN’s goals and objectives for elections verification.  You see, while on its face the visible driver is fair access to the ballot, the underlying aspect soon to become visible is the reliability, security, and verifiability of the processes that make fair access possible.  And that leads to my final point this morning:

The EVN can bring significant value to this increased energy, excitement, and resulting movement if we can catalyze a cross-pollination of ideas and rapidly increase awareness across the country.  The fact is, we spend lots of time talking amongst ourselves.  It’s time to spread the word.  This is critical because while elections are highly decentralized, there are common principles that must be woven into the fabric of every process in every jurisdiction.  That said, we think spreading the word requires 3 objectives:

  1. Maintaining intellectual honesty when discussing the complicated cocktail of technology, policy, and politics;
  2. Sustaining a balanced approach of guarded optimism with an embracing of the potential for innovation; and
  3. Encouraging a breadth of problem awareness, possible solutions, and pragmatism in their application, because one size will never fit all.

So, welcome again, and let's make the 2013 EVN Conference a change agent for raising awareness, increasing knowledge, and catalyzing a nationwide movement to adopt the agenda of elections verification.

Thanks again, and best wishes for a productive couple of days.


Poster and Slides from OSDV at NIST Workshop on Common Data Format Standards

Many thanks to the engaged audience for OSDVer Anne O'Flaherty's presentation yesterday at National Institute of Standards and Technology (NIST), which hosted a workshop on Common Data Formats (CDFs) and standards for data interchange of election data. We had plenty to say, based on our 2012 work with Virginia State Board of Elections (SBE), because that collaboration depends critically on CDFs. Anne and colleagues did a rather surprising amount of data wrangling over many weeks to get things all hooked up right, and the lessons learned are important for continuing work in the standards body, both NIST and the IEEE group working on CDF standards.

As requested by the attendees, here are online versions of the poster and the slides for the presentation "Bringing Transparency to Voter Registration and Absentee Voting."

BringingTransparencyToVoterRegistrationAndAbsenteeVotingNISTfeb2012

OSDV_Poster_NIST_Feb_2013


The Root Cause -- Long Lines, Late Ballot Counts, and Election Dysfunction in General

I've spent a fair bit of time over the last few days digesting a broad range of media responses to last week's election operations, much of it reaction to President Obama's "we've got to fix that" comment in his acceptance speech. There's a lot of complaining about the long lines, for example, along with demands for explanations and ideas for preventing them in the future -- and similar for the difficulty that some states and counties faced in finishing the process of counting the ballots. It's a healthy discussion for the most part, but one that makes me sad, because it mostly misses the main point: the root cause of most election dysfunction. I can explain that briefly from my viewpoint, and back it up with several recent events. The plain unvarnished truth is that U.S. local election officials, taken all together as the collective group that operates U.S. federal and state elections, simply do not have the resources and infrastructure to conduct elections that

  • have large turnout and close margins, preceded by much voter registration activity;
  • are performed with transparency that supports public trust in the integrity of the election -- that it is accessible, fair, and accurate.

There are longstanding gaps in the resources needed, ranging from ongoing budget for sufficient staff, to adequate technology for election administration, voting, counting, and reporting.

Of course, in any given election there are local elections operations that proceed smoothly, with adequate resources and physical and technical infrastructure. But we've seen again and again that in every "big" election, there is a shifting cast of distressed states or localities (and a few regulars), where administrative snafus, technology glitches, resource limits, and other factors get magnified as a result of high participation and close margins. Recent remarks by Broward County, FL, election officials -- among those with the most experience in these matters -- really crystallized it for me. When asked about the cause of the long lines, their response (my paraphrase) is that when the election is important, people are very interested in the election, and show up in large numbers to vote.

That may sound like a trivial or obvious response, but consider it just a moment more. Another way of saying it is that their resources, infrastructure, and practices have been designed to be sufficient only for the majority of elections that have less than 50% turnout and few if any state or federal contests that are close. When those "normal parameters" are exceeded, the whole machinery of elections starts grinding down to a snail's pace. The result: an election that is, or appears to be, not what we expect in terms of being visibly fair, accessible, accurate, and therefore trustworthy.

In other words, we just haven't given our thousands of localities of election officials what they really need to collectively conduct a larger-than-usual, hotly contested election, with the excellence that they are required to deliver, but are not able to. Election excellence is, as much as any of several other important factors, a matter of resources and infrastructure. If we could somehow fill this gap in infrastructure, and provide sufficient funding and staff to use it, then there would be enormous public benefits: elections that are high-integrity and demonstrably trustworthy, despite being large-scale and close.

That's my opinion anyway, but let me try to back it up with some specific and recent observations about specific parts of the infrastructure gap, and then how each might be bridged.

  • One type of infrastructure is voter record systems. This year in Ohio, the state voter record system poorly served many LEOs who searched for but didn't find many registered absentee voters to whom they should have mailed absentee ballots. The result was a quarter million voters forced into provisional voting -- where, unlike casting a ballot in a polling place, there is no guarantee that the ballot will be counted -- and many long days of effort for LEOs to sort through them all. If the early, absentee, and election night presidential voting in Ohio had been closer, we would still be waiting to hear from Ohio.
  • Another type of infrastructure is pollbooks -- both paper and electronic -- and the systems that prepare them for an election. As usual in any big election, we have lots of media anecdotes about people who had been on the voter rolls, but weren't on election day (that includes me, by the way). Every one of these instances slows down the line, causes provisional voting (which also takes extra time compared to regular voting), and contributes to long lines.
  • Then there are the voting machines. For the set of places where voting depends on electronic voting machines, there are always some places where the machines don't start, take too long to get started, break, or don't work right. By now you've probably seen the viral YouTube video of the touch screen that just wouldn't record the right vote. That's just emblematic of the larger situation of unreliable, aging voting systems, used by LEOs who are stuck with what they've got, and no funding to try to get anything better. The result: late poll opening, insufficient machines, long lines.
  • And for some types of voting machines -- those that are completely paperless -- there is simply no way to do a recount, if one is required.
  • In other places, paper ballots and optical scanners are the norm, but they have problems too. This year in Florida, some ballots were huge -- six pages in many cases. The older scanning machines physically couldn't handle the increased volume. That's bad but not terrible; at least people can vote. However, there are still integrity requirements -- for example, voters need to put their unscanned ballots in an emergency ballot box, rather than entrust a marked ballot to a poll worker. But those crazy huge ballots, combined with frequent scanner malfunctions, created overstuffed emergency ballot boxes, and poll workers trying to improvise a way to store them. Result: more delays in the time each voter required, and a real threat to the secret ballot and to every ballot being counted.

Really, I could go on through more and more of the infrastructure elements that showed many examples of dysfunction in this election. But I expect that you've seen plenty already. So why, you ask, is the infrastructure so inadequate to the task of a big, complicated, close election conducted with accessibility, accuracy, security, and transparency, in a way that earns public trust? Isn't there something better?

The sad answer, for the most part, is not at present. Thought leaders among local election officials -- in Los Angeles and Austin just to name a couple -- are on record that current voting system offerings just don't meet their needs. And the vendors of these systems don't have the ability to innovate and meet those needs. The vendors are struggling to keep up a decent business, and don't see the type of large market with ample budgets that would be a business justification for new systems and the burdensome regulatory process to get them to market.

In other cases, most notably with voter records systems, there simply aren't products anymore, and many localities and states are stuck with expensive-to-maintain legacy systems that were built years ago by big system integrators, that have no flexibility to adapt to changes in election administration, law, or regulation, and that are too expensive to replace.

So much complaining! Can't we do anything about it? Yes. Every one of those and the other parts of the election infrastructure that broke down or fell short can be improved, and the improvements, taken together, could provide immense public benefit if state and local election officials were able to use them. But where can they come from? Especially when the current market hasn't provided them, despite a decade of effort and much federal funding? Longtime readers know the answer: from election technology development that is outside of the current market, breaks the mold, and leverages recent changes in information technology and the business of information technology. Our blog in the coming weeks will have several examples of what we've done to help, and what we're planning next.

But for today, let me be brief with one example, with details on it later. We've worked with the state of Virginia to build one part of a new infrastructure for voter registration, voter record lookup, and reporting, that meets existing needs and offers needed additions that the older systems don't have. The VA State Board of Elections (SBE) doesn't pay any licensing fees to use this technology -- that's part of what open source is about. They don't have to acquire the software and deploy it in their datacenter, and pay additional (and expensive) fees to their legacy datacenter operator, a government systems integrator. They don't have to go back to the vendor of the old system to pay for expensive but small and important upgrades in functionality to meet new election laws or regulations.

Instead, the SBE contracts with a cloud services provider, who can -- for a fraction of the cost of a legacy in-house government datacenter operated by a GSI -- obtain the open-source software, integrate it with the hosting provider's standard hosting systems, and test, deploy, operate, and monitor the system. And the SBE can also contract with anyone they choose to create new extensions to the system, with competition for who can provide the best service to create them. The public benefits because people can check, anywhere and anytime, whether they are registered to vote or should get an absentee ballot, rather than waiting, as in Ohio, until election day to find out that they are one of a quarter million people with a problem.
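To illustrate the kind of question such a public lookup portal answers, here is a hypothetical sketch. This is not the actual Virginia system; the record fields, statuses, and lookup keys are invented for illustration:

```python
# Hypothetical sketch of a voter-record lookup. NOT the actual Virginia
# system: the record fields and statuses here are invented for illustration.

VOTER_RECORDS = {
    ("DOE", "JANE", "1970-01-01"): {
        "status": "active",
        "absentee_ballot_mailed": True,
        "precinct": "101",
    },
    ("ROE", "RICHARD", "1985-06-15"): {
        "status": "inactive",
        "absentee_ballot_mailed": False,
        "precinct": "204",
    },
}

def lookup(last, first, dob):
    """Return a citizen-facing summary of a voter's registration status."""
    record = VOTER_RECORDS.get((last.upper(), first.upper(), dob))
    if record is None:
        return {
            "registered": False,
            "message": "No record found; contact your local election office.",
        }
    return {
        "registered": record["status"] == "active",
        "absentee_ballot_mailed": record["absentee_ballot_mailed"],
        "precinct": record["precinct"],
    }

print(lookup("Doe", "Jane", "1970-01-01"))
```

In a real deployment a lookup like this would sit behind a web service querying the official voter record database; the sketch only shows the shape of the citizen-facing answer -- the thing a quarter million Ohio voters could not get before election day.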

And then the finale, of course, is that other states can also adopt this new voter records public portal, by doing a similar engagement with that same cloud hosting provider, or any other provider of their choice that supports similar cloud technology. Virginia's investment in this new election technology is fine for Virginia, but can also be leveraged by other states and localities.

After many months of work on this and other new election technologies put into practical use, we have many more stories to tell, and more detail to provide. But I think that if you follow along and see the steps so far, you may just see a path towards these election infrastructure gaps getting bridged, and flexibly enough to stay bridged. It's not a short path, but the benefits could be great: elections where LEOs have the infrastructure to work with excellence in demanding situations, and can tangibly show the public that they can trust the election as having been accessible to all who are eligible to vote, performed with integrity, and yielding an accurate result.

-- EJS


TrustTheVote on HuffPost

We'll be live on HuffPost online today at 8pm eastern:

  • @HuffPostLive http://huff.lv/Uhokgr or live.huffingtonpost.com

and I thought we should share our talking points for the question:

  • How do you compare old-school paper ballots vs. e-voting?

I thought the answers would be particularly relevant to today's NYT editorial on the election which concluded with this quote:

That the race came down to a relatively small number of voters in a relatively small number of states did not speak well for a national election apparatus that is so dependent on badly engineered and badly managed voting systems around the country. The delays and breakdowns in voting machines were inexcusable.

I don't disagree, and indeed would extend from flaky voting machines to election technology in general, including clunky voter record systems that lead to many of the lines and delays in polling places.

So the HuffPost question is apposite to that point, but still not quite right. It's not an either/or but rather a comparison of:

  • old-school paper ballots and 19th century election fraud;
  • old-school machine voting and 20th century lost ballots;
  • old-school combo systems of paper ballots and machine counting, and botched re-counting;
  • new-fangled machine voting (e-voting) and 21st century lost ballots;
  • newer combo system of paper ballots and machine counting (not voting).

Here are the talking points:

  • Old-school paper ballots were cast by hand and counted by hand, where the counters could change the ballot; for example, a candidate Smith partisan could invalidate a vote for Jones by adding a mark for Smith.
  • These and other paper ballot frauds in the 19th century drove adoption in the early 20th century of machine voting, on the big clunky "lever machines" with the satisfying ka-thunk-swish of the lever recording the votes and opening the privacy curtain.
  • However, there's a big problem with machine voting -- no ballots! Once that lever is pulled, all that's left is a bunch of dials and counters on the backside being increased by one. In a close election that requires a re-count, there are no ballots to examine! Instead, the best you could do was re-read each machine's totals and re-run the process of adding them all up, in case there was an arithmetic error.
  • Also, the dials themselves, after election day but before a recount, were a tempting target for twiddling, for the types of bad actors who in the 19th century fiddled with ballot boxes.
  • Later in the 20th century, we saw a move to a combo system of paper ballots and machine counting, with the intent that the machine counts would be more accurate than human counts and more resistant to human meddling, with the paper ballots remaining for recounts, and for audits of the accuracy of the counting machinery.
  • Problem: these were the punch ballots of the infamous hanging chad.
  • Early 21st century: run from hanging chad to electronic voting machines.
  • Problem: no ballots! Same as before, only this time, the machines are smaller and much easier to fiddle with. That's "e-voting" but without ballots.
  • Since then, a flimsy paper record was bolted on to most of these systems to support recount and audit.
  • But the trend has been to go back to the combo system, this time with durable paper ballots and optical-scanning machinery for counting.
  • Is that e-voting? Well, it is certainly computerized counting. And the next wave is computer-assisted marking of paper ballots -- particularly for voters with disabilities -- but with these machine-created ballots counted the same as hand-marked ballots.

Bottom line: whether or not you call it e-voting, so long as there are both computers and human-countable durable paper ballots involved, the combo provides the best assurance that neither humans nor computers are mis-counting or interfering with voters casting ballots.

-- EJS

PS: If you catch us on HP online, please let us know what you thought!


How I Learned to Stop Worrying and Love the Electoral College

OK, so I don't actually love the Electoral College -- I think nobody does -- but at least this time we were spared the experience of a Strangelove-esque near-meltdown, with a one-state margin in the Electoral College and severe electoral dysfunction in key states. But, as usual this time every four years, I'm seeing a lot of stuff about how bad the electoral college is, and why it should be changed. I'm personally sympathetic to some of it -- I wish more than 54% of my California county had turned out to vote on important ballot measures on taxes, even though the presidential election was not in doubt -- but disagree with other parts. Perhaps my biggest disagreement is with the position of folks of the "I don't trust election officials" stripe.

Here's my summary of that position: State and local election officials want to use their powers to swing an election to their own political party. In swing states where a few thousand votes represent the margin of victory for the state's electoral college votes, it is quite feasible to "steal" the election by a combination of administrative actions like voter-roll purges, precinct-pollbook errors, uncounted provisional ballots, and selective rejection of absentee ballots -- to say nothing of voter ID laws and intimidation by poll workers.

As a result -- this theory goes -- the actions of a handful of election officials can deliver a key state and decide the presidential election. And hence we should ditch the electoral college in favor of the National Popular Vote. I disagree on two grounds. First of all, I don't actually believe that election officials often consciously abuse their powers for direct political gain, despite the fact that legislators certainly do -- even avowedly (at least this year in PA), with voter ID laws passed in the belief that they would have a selective disenfranchising effect.

But leaving aside beliefs about what goes on in the minds of election officials, I also don't believe that National Popular Vote (NPV) would be an improvement over the EC in terms of foiling hypothetically nefarious election officials (HNEOs). Here's why. With the EC, an HNEO can only "deliver" one state; that state's EC votes are the limit of how much of the election they can affect. Yes, HNEOs in swing states have an advantage as compared to non-swing states. But the EC acts as a firewall that limits the effects of election dysfunction (accidental or intentional) within one state.

By contrast, with NPV, HNEOs can adopt a different strategy. Instead of suppressing voters (reducing the state's effect on a national election), they could inflate votes. If you really believe in HNEOs, then their powers also include mis-reporting vote totals -- for example, with tabulation errors that double-count lopsided precincts (which we have seen happen accidentally in real life). Just as an HNEO in the EC system can suppress thousands of votes to deliver a single state in a close state election, HNEOs in the NPV system can inflate tens of thousands of votes to deliver a close national election. HNEOs in large states have the advantage there.

My point is that every system can be gamed, and switching from one system to another doesn't change the fundamental game-ability. I'm not in charge of decisions about EC vs. NPV, thankfully, but I and my colleagues in both election technology and election integrity do have the ability to do something that's helpful in either system, or any other: promote election operation transparency at a detailed level, so that all can see what election officials are doing, and decide on the basis of facts and figures whether there is any nefariousness in the management of voter rolls or the disposition of provisional or absentee ballots. That's where the technology we're developing for detailed logging and data mining can act as that proverbial disinfectant sunshine to counter the suspicions of the HNEO-worriers.
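As a purely illustrative sketch of what that kind of detailed logging could look like -- the field names and the hash-chaining scheme below are our own assumptions, not a description of any deployed system -- each administrative action on a ballot could be recorded as an append-only, tamper-evident entry:

```python
import hashlib
import json
import time

def append_entry(log, action, details):
    """Append a tamper-evident entry; each entry commits to its predecessor's hash."""
    prev_hash = log[-1]["hash"] if log else "0" * 64
    entry = {"ts": time.time(), "action": action,
             "details": details, "prev": prev_hash}
    entry["hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()).hexdigest()
    log.append(entry)
    return entry

def verify(log):
    """Re-check the whole chain; any edit to a past entry breaks verification."""
    prev = "0" * 64
    for e in log:
        body = {k: v for k, v in e.items() if k != "hash"}
        digest = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        if e["prev"] != prev or digest != e["hash"]:
            return False
        prev = e["hash"]
    return True

log = []
append_entry(log, "provisional_received", {"envelope": "P-0042"})
append_entry(log, "provisional_accepted",
             {"envelope": "P-0042", "note": "affidavit verified"})
print(verify(log))  # True
```

Because each entry commits to its predecessor's hash, any after-the-fact alteration of the record breaks the chain -- which is exactly the property that lets outside observers audit what was done, instead of taking it on faith.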

-- EJS

1 Comment

Comment

Election Tech "R" Us - and Interesting Related IP News

Good Evening-- On this election night, I can't resist pointing out the irony of the USPTO's news of the day for Election Day earlier: "Patenting Your Vote," a nice little article about patents on voting technology.  It's also a nice complement to our recent posting on the other form of intellectual property protection on election technology -- trade secrets.  In fact, there is some interesting news of the day about how intellectual property protections won't (as some feared) inhibit the use of election technology in Florida.

For recent readers, let's be clear again about what election technology is, and our mission. Election technology is any form of computing -- "software 'n' stuff" -- used by election officials to carry out their administrative duties (like voter registration databases), or by voters to cast a ballot (like an opscan machine for recording votes off of a paper ballot), or by election officials to prepare for an election (like defining ballots), or to conduct an election (like scanning absentee ballots), or to inform the public (like election results reporting). That covers a lot of ground for "election technology."

With the definition, it's reasonable to say that "Election Technology 'R' Us" is what the TrustTheVote Project is about, and why the OSDV Foundation exists to support it.  And about intellectual property protection?   I think we're clear on the pros and cons:

  • CON: trade secrets and software licenses that protect them. These create "black box" for-profit election technology that seems to decrease rather than increase public confidence.
  • PRO: open source software licenses. These enable government organizations to [A] adopt election technology with a well-defined legal framework, without which the adoption cannot happen; and [B] enjoy the fruits of the perpetual harvest made possible by virtue of open source efforts.
  • PRO: patent applications on election technology.  As in today's news, the USPTO can make clear which aspects of voting technology can or can't be protected with patents that could inhibit election officials from using the technology, or require them to pay licensing fees.
  • ZERO SUM: granted patents on techniques or business processes (used in election administration or the conduct of elections) in favor of for-profit companies.  Downside: can increase costs of election technology adoption by governments. Upside: if the companies do have something innovative, they are entitled to I.P. protection, and it may motivate investment in innovation.  Downside: we haven't actually seen much innovation by voting system product vendors, or contract software development organizations used by election administration organizations.
  • PRO: granted patents to non-profit organizations.  To the extent that there are innovations that non-profits come up with, patents can be used to protect the innovations so that for-profits can't nab the I.P., and charge license fees back to governments running open source software that embodies the innovations.

All that stated, the practical upshot as of today seems to be this: there isn't much innovation in election technology, and that may be why for-profits try to use trade secret protection rather than patents.

That underscores our practical view at the TrustTheVote Project: a lot of election technology isn't actually hard, but rather simply detailed and burdensome to get right -- a burden beyond the scope of all but a few do-it-ourselves elections offices' I.T. groups.

That's why our "Election Technology 'R' Us" role is to understand what the real election officials actually need, and then to (please pardon me) "Git 'er done."

What we're "getting done" is the derivation of blueprints and reference implementations of an elections technology framework that can be adopted, adapted, and deployed by any jurisdiction, with common open data formats, processes, and verification and accountability loops designed in from the get-go.  This derivation is based on the collective input of elections experts nationwide, from every jurisdiction and every political point of view.  And the real beauty: no single jurisdiction could ever afford (in resources, time, or money) to achieve this on its own, but through the collective effort they all can, because everyone benefits -- not just from the initial outcomes, but from the on-going improvements and innovations contributed by all.

We believe (and so do the many who support this effort) that the public benefit is obvious and enormous: from every citizen who deserves their ballots counted as cast, to every local election official who must have an elections management service layer with complete fault tolerance, in a transparent, secure, and verifiable manner.

From what we've been told, this certainly lifts a load of responsibility off the shoulders of elections officials and allows it to be more comfortably distributed.  But what's more, regardless of how our efforts may lighten their burden, the enlightenment that comes from this clearinghouse effect is of enormous benefit to everyone by itself.

So, at the end of the day, what we all benefit from is a way forward for publicly owned critical democracy infrastructure.  That is, that "thing" in our process of democracy that causes long lines and insecurities, which the President noted we need to fix during his victory speech tonight.  Sure, it's about a lot of process.  But where there will inevitably be technology involved, well, that would be the TrustTheVote Project.

GAM|out

Comment

NJ Election Officials, Displaced Voters, Email Ballots, and more

There's plenty of activity in the NY/NJ area reacting to voters' difficulties because of Super-Storm Sandy, including being displaced from their homes or otherwise unable to get to polling places. As always, the role of technology captured my attention. But first, the more important points. Some displaced people are having trouble even finding a place to shelter temporarily, so extra special kudos to those who manage to vote, whatever the method of voting they use. Likewise, extra praise for NJ and NY election officials putting in the extra hours to be available to voters in advance of the election, inform them about changed polling places, and equip them to get the word out to their neighbors. The amount of effort on both sides is a great indicator of how seriously people take this most important form of civic activity.

Next, the technology, and then the policy. On the technology front, Gov. Christie of NJ announced an emergency (and I hope temporary) form of voting for displaced voters: sending an absentee ballot via email. That's a bad idea in the best of circumstances -- for several reasons, including the vulnerability of the email data in transit and at rest, and the control of the e-ballot by people who are not election officials -- and these are not the best of circumstances. For example, I doubt that in every county elections office in NJ, somebody has a complete list of the people with access to the email server and the ability to view and modify data on it.  But while you can see that Christie's heart is in the right place, there are several issues beyond these, as described in a NJ news report here.

And this is only one of the emergency measures. In both NJ and NY people can cast a provisional ballot at any polling location -- see NJ's announcement here, and if you have the similar one for NY, please provide it as a comment!

Finally, on the policy side, it's not even clear what these ballots represent, and that's the policy problem. My legal and policy colleagues here at TTV, and on the legal side of the election integrity community, certainly know more, but I don't! Are the provisional ballots cast under these emergency rules required to be processed exactly the same as non-emergency provisional ballots? Are the e-mailed ballots provisional ballots or absentee ballots? If provisional, what serves as the affidavit? Do the email ballots have to be followed up with the paper hardcopy that the voter scanned and faxed? (The NJ Lt. Gov.'s office has issued some seemingly inconsistent statements on that.) If not, what happens in a recount? If so, why email the ballot at all, rather than just emailing a "my ballot is coming soon" message?

I could go on and on, but I think you get the idea. The general issue is that in the case of a close election (most likely a local election, but state house or congress, you never know!) there will be some of these not-exactly-your-regular ballots involved, and the potential for real disputes -- starting with concerns over dis-enfranchisement of people mis-informed about how to do a "displaced vote", and going all the way to disputes about whether counted ballots should have been counted, and whether uncounted ballots should be counted. But let's hope that it does not in fact get that messy in NY and NJ, and that every voter is able to make the extra efforts for their ballot to be cast and counted.

-- EJS

Comment

Possible Ohio Election Meltdown -- from a Clunky Database?

Tomorrow is Election Day 2012, and with many people experiencing pre-election angst, perhaps now is not the time to start telling our patient readers what the heck we've been doing in technology land for the last several months. Right now, we're in a bit of a breather, as election officials and other partners have been focusing solely on the final slog to election day, and readying for a couple of intense weeks of work post-election. So instead, I'll be focusing on sharing with you a technology spin on current election news, and get around to our own accomplishments a little later. The news of the moment I want to share is this: there is a good chance that Ohio's state and federal election results won't be available on Election Night or even the next day, and the root of the problem is technology. To continue the tree analogy, the trunk is process, the branches are people, the leaves are provisional ballots, and the possible storm blowing through the tree is litigation about the ballots. The technology root is balky; the trunk process of finding absentee voters is tricky; election officials didn't do the process correctly; thousands of absentee voters will instead vote provisionally; and the delay in counting those ballots can create the opportunity for a storm.

As a result, there is a Florida-2000-style election meltdown looming in Ohio. Due to problems with Ohio's voter records database, perhaps as many as 100,000 Ohioans will vote on provisional ballots, a huge number when you consider that every one of them requires human decisions about whether to count it. And those decisions must be monitored and recorded, because if the decision is "yes" then it is irrevocable. Once a provisional ballot is separated from the voter's affidavit (explaining why they are entitled to vote even if the poll worker didn't think so) and counted, then you can't un-count it. Likewise, the "no" decisions lead to a pile of uncounted ballots, which can be the focus of litigation.

"How does a voter records system lead to this?" you might well ask, thinking of a voter registration system that mainly handles new voter registration (creating a new voter record), and updates or re-registration (e.g. changing the address in an existing voter record). Technology glitches could disenfranchise a voter, but create an election integrity meltdown? Yes - and that's because we're not talking about a voter registration system or database, but rather a voter records system that local election officials use for many purposes. And in this case, it's the hinge for absentee voting.

Here's how it works. An Ohio voter requests absentee status via a voter registration form, either a new registration or an update of an existing record. If that request is granted, the voter record's absentee status is updated. Later on, 50-something days before the election, local election officials use the system to find all the people in their county who will be voting absentee, and send each of them their absentee ballot via U.S. Post. But what if the "find all the absentee voters" part doesn't work? Then some people don't get an absentee ballot, and many of them will try to vote in person, and hit a snag, because:

  1. The find-the-absentee-voters part is tricky to do with the voter-records system, and many county officials were not given the correct instructions for the lookup. As a result, many absentee voters didn't get an absentee ballot.
  2. What does seem to work OK is preparing the pollbooks for in-person voting, where the pollbooks indicate for each voter whether they have absentee status. As a result, you get voters with absentee status -- but no absentee ballot -- showing up on Election Day, and being told that they already have a ballot.
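To make item 1 concrete, here is a hypothetical sketch of how such a lookup can silently go wrong. The schema, table names, and queries are entirely our own invention for illustration -- we have no knowledge of Ohio's actual database design -- but the failure pattern (querying a stale flag on the master voter record instead of the per-election request table) is a typical one:

```python
import sqlite3

# Hypothetical, simplified schema -- not Ohio's actual system.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE voters (id INTEGER PRIMARY KEY, name TEXT, absentee_flag INTEGER);
CREATE TABLE absentee_requests (voter_id INTEGER, election TEXT, granted INTEGER);
""")
conn.executemany("INSERT INTO voters VALUES (?, ?, ?)", [
    (1, "Alice", 1),   # flag set on the master record
    (2, "Bob",   0),   # flag never updated on the master record...
])
conn.executemany("INSERT INTO absentee_requests VALUES (?, ?, ?)", [
    (1, "2012-general", 1),
    (2, "2012-general", 1),  # ...even though Bob's per-election request was granted
])

# Naive lookup -- the kind of incorrect instruction a county might be given:
naive = conn.execute(
    "SELECT id FROM voters WHERE absentee_flag = 1").fetchall()

# Correct lookup joins against the per-election request table:
correct = conn.execute("""
    SELECT v.id FROM voters v
    JOIN absentee_requests r ON r.voter_id = v.id
    WHERE r.election = '2012-general' AND r.granted = 1""").fetchall()

print(len(naive), len(correct))  # naive finds 1 voter; correct finds 2
```

In this toy example, the naive query silently drops Bob: he never receives an absentee ballot, yet his pollbook entry still says "absentee" -- exactly the trap described in item 2.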

Then what? Well, if a voter is persistent and/or the poll workers well-trained and not swamped, then a poll worker will help the voter understand how to vote provisionally -- mark a ballot that does not go into the ballot box, but rather into an envelope with the voter's info. After the election, all these provisional ballot envelopes go to county election HQ, where election officials have to process each envelope, to decide whether the ballot inside should be counted.

Now, the 100,000 estimate kicks in. In a small county with thousands of provisional ballots, or a large county with tens of thousands, the provisional ballot processing can easily go all night and into the next day, because it can't even begin until all absentee ballots have been centrally counted, folded into the results from precincts, and tabulated as preliminary election results. Now suppose that statewide, the margin of victory for the presidential election is only tens of thousands of votes, and statewide there are 100,000+ provisional ballots that are yet to be counted?

In that case, provisional ballot processing is going to receive a lot of scrutiny, and every one of those non-counted ballots is going to be subjected to the type of controversy we saw in Minnesota 4 years ago with the Franken-Coleman senate contest that took weeks to resolve. And this is the situation that has many Ohio election officials (and me) praying that whatever the election result is, the margin is wider than the number of provisional ballots.
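The arithmetic behind that prayer is simple enough to state as a one-line check (the numbers below are purely illustrative, not actual returns):

```python
def outcome_is_settled(margin, uncounted_ballots):
    """True when the trailing candidate cannot catch up even if every
    uncounted ballot breaks their way -- i.e., no meltdown scenario."""
    return margin > uncounted_ballots

# Illustrative numbers only -- not actual 2012 returns.
print(outcome_is_settled(margin=150_000, uncounted_ballots=100_000))  # True
print(outcome_is_settled(margin=30_000, uncounted_ballots=100_000))   # False
```

Whenever the statewide margin falls below the pile of outstanding provisional ballots, every single accept/reject decision becomes potentially outcome-determinative -- and litigation-worthy.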

This situation is rooted in a voter records system that's too complicated and clunky for harried, under-funded, under-staffed, hard-working election officials to use reliably. So if you doubted that ordinary information technology could create a possible election meltdown just as easily as flaky proprietary voting systems, well, now you know. And that's just one reason why we've been hard at work on registration-related technology -- trying to help create the public benefit of an election that is, and can be seen to have been, administered correctly, before the ballot counting even begins.

Keep those fingers crossed …

-- EJS

Comment

Do Trade Secrets Hinder Verifiable Elections? (Duh)

Slate Magazine posted an article this week which, in sum and substance, suggests that trade secret law makes it impossible to independently verify that voting machines are working correctly.  In short, we say, "Really -- and is this a recent revelation?" Of course, those who have followed the TrustTheVote Project know that we've been suggesting this in so many words for years.  I appreciate that author David Levine refers to elections technology as "critical infrastructure."  We've been suggesting the concept of "critical democracy infrastructure" for years.

To be sure, I'm gratified to see this article appear, particularly as we head to what appears to be the closest presidential election since 2000.  The article is totally worth a read, but here is an excerpt worth highlighting from Levine's essay:

The risk of the theft (known in trade secret parlance as misappropriation) of trade secrets—generally defined as information that derives economic value from not being known by competitors, like the formula for Coca-Cola—is a serious issue. But should the “special sauce” found in voting machines really be treated the same way as Coca-Cola’s recipe? Do we want the source code that tells the machine how to register, count, and tabulate votes to be a trade secret such that the public cannot verify that an election has been conducted accurately and fairly without resorting to (ironically) paper verification? Can we trust the private vendors when they assure us that the votes will be assigned to the right candidate and won’t be double-counted or simply disappear, and that the machines can’t be hacked?

Well, we all know (as he concludes) that all of the above have either been demonstrated to be a risk or have actually transpired.  The challenge is that the otherwise legitimate use of trade secret law ensures that the public has no way to independently verify that voting machinery is properly functioning, as was discussed in this Scientific American article from last January (also cited by Levine.)

Of course, what Levine is apparently not aware of (probably our bad) is that there is an alternative approach on the horizon,  regardless of whether the government ever determines a way to "change the rules" for commercial vendors of proprietary voting technology with regard to ensuring independent verifiability.

As a recovering IP lawyer, I'll add one more thing we've discussed within the TrustTheVote Project and the Foundation for years: this is a reason that patents -- including business method patents -- are arguably helpful.  Patents are about disclosure and publication; trade secrets, by definition, are not.  To be sure, a patent alone would not be sufficient, because within the intricacies of a patent prosecution there is an allowance that requires only partial disclosure of software source code.  Of course, "partial disclosure" must meet a test of sufficiency for one "reasonably skilled in the art" to "independently produce the subject matter of the invention."  And therein lie the wonderful mushy grounds on which to argue a host of issues if put to the test.  But ironically, the intention of partial code disclosure is to protect trade secrets while still facilitating a patent prosecution.

That aside, I also note that, in the face of all the nonsense floating about in the blogosphere and other mainstream media -- whether charges that Romney's ownership interest in voting machinery companies is a pathway to stealing an election, or suggestions of a Soros-backed, Spanish-based voting technology company conspiring to deliver tampered tallies -- Levine's article is a breath of fresh air, deserving the attention ridiculously lavished on these latest urban myths.

Strap in... T-12 days.  I fear a nail-biter from all viewpoints.

GAM|out

Comment