

David Plouffe’s View of the Future of Voting — We Agree and Disagree

David Plouffe, President Obama’s top political and campaign strategist and the mastermind behind the winning 2008 and 2012 campaigns, wrote a forward-looking op-ed [paywall] in the Wall Street Journal recently about the politics of the future and how they might look.

He touched on how technology will continue to change the way campaigns are conducted – more use of mobile devices, even holograms, and more micro-targeting at individuals. But he also mentioned how people might cast their votes in the future, and that is what caught our eye here at the TrustTheVote Project.  There is a considerable chasm to cross between vision and reality.



PCEA Report Finally Out: The Real Opportunity for Innovation Inside

This week the PCEA finally released its long-awaited report to the President.  It's loaded with good recommendations.  Over the next several posts we'll give you our take on some of them.  For the moment, we want to call your attention to a couple of underpinning elements now that it's done.

The Resource Behind the Resources

Early in the formation of what initially was referred to as the "Bauer-Ginsberg Commission" we were asked to visit the co-chairs in Washington D.C. to chat about technology experts and resources.  We have a Board member who knows them both and when asked we were honored to respond.

Early on we advised the Co-Chairs that their research would be incomplete without speaking with several election technology experts, and of course they agreed.  The question was how to create a means to do so and not bog down the progress governed by layers of necessary administrative regulations.

I take a paragraph here to observe that I was very impressed in our initial meeting with Bob Bauer and Ben Ginsberg.  Despite being polar political opposites they demonstrated how Washington should work: they were respectful, collegial, sought compromise to advance the common agenda and seemed to be intent on checking politics at the door in order to get work done.  It was refreshing and restored my faith that somewhere in the District there remains a potential for government to actually work for the people.  I digress.

We advised them that looking to the CalTech-MIT Voting Project would definitely be one resource they could benefit from having.

We offered our own organization, but with our tax exempt status still pending, it would be difficult politically and otherwise to rely on us much in a visible manner.

So the Chairs asked us if we could pull together a list -- not an official subcommittee mind you, but a list of the top "go to" minds in the elections technology domain.  We agreed and began a several-week process of vetting a list that needed to be winnowed down to about 20 for manageability.  These experts would be brought in individually or collectively as desired -- it was to be figured out later which would be most administratively expedient.  Several of our readers, supporters, and those who know us were aware of this confidential effort.  The challenge was lack of time to run the entire process of public recruiting and selection.  So, they asked us to help expedite that, having determined we could gather the best in short order.

And that was fine because anyone was entitled to contact the Commission, submit letters and comments and come testify or speak at the several public hearings to be held.

So we did that.  And several of that group were in fact utilized.  Not everyone though, and that was kind of disappointing, but a function of the timing constraints.

The next major resource we advised they had to include besides CalTech-MIT and a tech advisory group was Rock The Vote.  And that was because (notwithstanding their being a technology partner of ours) Rock The Vote has its ear to the rails of new and young voters starting with their registration experience and initial opportunity to cast their ballot.

Finally we noted that there were a couple of other resources they really could not afford to overlook, including the Verified Voting Foundation, L.A. County's VSAP Project, and Travis County's STAR-Vote Project.

The outcome of all of that brings me to the meat of this post about the PCEA Report and our real contribution.  Sure, we had some behind the scenes involvement as I describe above.  No big deal.  We hope it helped.

The Real Opportunity for Innovation

But the real opportunity to contribute came in the creation of the PCEA Web Site and its resource toolkit pages.

On that site, the PCEA took our advice and chose to utilize Rock The Vote's open source voter registration tools and specifically the foundational elements the TrustTheVote Project has built for a States' Voter Information Services Portal.

Together, Rock The Vote and the TrustTheVote Project are able to showcase the open source software that any State can adopt, adapt, and deploy--for free (at least the adoption part) and without having to reinvent the wheel by paying for a ground-up custom build of their own online voter registration and information services portal.

We submit that this resource on their PCEA web site represents an important ingredient to injecting innovation into a stagnant technology environment of today's elections and voting systems world.

For the first time, there is production-ready open source software available for an important part of an elections official's administrative responsibilities that can lower costs, accelerate deployment and catalyze innovation.

To be sure, it's only a start -- it's the lower-hanging fruit of an election technology platform that doesn't require any sort of certification. With our exempt status in place, and lots of things happening that we'll soon share, there is more, much more, to come.  But this is a start.

There are 112 pages of goodness in the PCEA report.  And there are some elements in there that deserve further discussion.  But we humbly assert it's the availability of some open source software on their resource web site that we think represents a quiet breakthrough in elections technology innovation.

The news has been considerable.  So, yep, we admit it.  We're oozing pride today. And we owe it to your continued support of our cause. Thank you!

GAM | out



VoteStream: Data-Wrangling of Election Results Data

If you've read some of the ongoing thread about our VoteStream effort, it's been a lot about data and standards. Today is more of the same, but first with a nod that the software development is going fine, as well. We've come up with a preliminary data model, gotten real results data from Ramsey County, Minnesota, and developed most of the key features in the VoteStream prototype, using the TrustTheVote Project's Election Results Reporting Platform. I'll have plenty to say about the data-wrangling as we move through several different counties' data. But today I want to focus on a key structuring principle that works both for data and for the work that real local election officials (LEOs) do, before an election, during election night, and thereafter.

Put simply, the basic structuring principle is that the election definition comes first, and the election results come later and refer to the election definition. This principle matches the work that LEOs do, using their election management system to define each contest in an upcoming election, define each candidate, and so on. The result of that work is a data set that both serves as an election definition, and also provides the context for the election by defining the jurisdiction in which the election will be held. The jurisdiction is typically a set of electoral districts (e.g. a congressional district, or a city council seat), and a county divided into precincts, each of which votes on a specific set of contests in the election.
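As a minimal sketch of that principle (the names below are ours, invented for illustration, not drawn from any standard), the election definition objects come first, and every result record refers back to them by ID -- so a result that points at an unknown contest or precinct can be flagged and rejected:

```python
from dataclasses import dataclass

# Election definition -- created by the LEO before the election.
@dataclass(frozen=True)
class Contest:
    contest_id: str
    name: str

@dataclass(frozen=True)
class Precinct:
    precinct_id: str
    contest_ids: tuple  # the contests this precinct votes on

# Election results -- created later, referring to the definition by ID.
@dataclass
class VoteCount:
    precinct_id: str
    contest_id: str
    candidate: str
    count: int

def validate(counts, contests, precincts):
    """Keep only results that refer to a known contest and precinct."""
    contest_ids = {c.contest_id for c in contests}
    precinct_ids = {p.precinct_id for p in precincts}
    return [v for v in counts
            if v.contest_id in contest_ids and v.precinct_id in precinct_ids]
```

The point of the ordering is exactly this referential check: results data is meaningless, and unverifiable, without the definition that came first.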

Our shorthand term for this dataset is JEDI (jurisdiction election data interchange), which is all the data about an election that an independent system would need to know. Most current voting system products have an Election Management System (EMS) product that can produce a JEDI in a proprietary format, for use in reporting, or ballot counting devices. Several states and localities have already adopted the VIP standard for publishing a similar set of information.

We've adopted the VIP format as the standard that we'll be using on the TrustTheVote Project. And we're developing a few modest extensions to it that are needed to represent a full JEDI that meets the needs of VoteStream, or really any system that consumes and displays election results. All extensions are optional and backwards compatible, and we'll be submitting them as suggestions when we think we have a full set. So far, it's pretty basic: the inclusion of geographic data that describes a precinct's boundaries, and a use of existing meta-data to note whether a district is a federal, state, or local district.
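To make the extension idea concrete, here's a hedged sketch of what such a record might look like; the element names are invented for illustration and are not the actual VIP schema. The key point is that a consumer that doesn't know about the optional boundary element can simply ignore it, which is what "optional and backwards compatible" means in practice:

```python
import xml.etree.ElementTree as ET

# A hypothetical VIP-style precinct record with one optional extension:
# geographic boundary data. Element names are illustrative only.
precinct = ET.Element("precinct", id="p-101")
ET.SubElement(precinct, "name").text = "Ward 1, Precinct 1"

# Optional extension: a polygon boundary, here as GeoJSON-style text.
boundary = ET.SubElement(precinct, "boundary", format="geojson")
boundary.text = '{"type": "Polygon", "coordinates": [[[-93.1,44.9],[-93.0,44.9],[-93.0,45.0],[-93.1,44.9]]]}'

xml_bytes = ET.tostring(precinct)

# A consumer unaware of the extension just reads the elements it knows.
parsed = ET.fromstring(xml_bytes)
name = parsed.findtext("name")
```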

So far, this is working well, and we expect to be able to construct a VIP-standard JEDI for each county in our VoteStream project, based on the extant source data that we have. The next step, which may be a bit more hairy, is a similar standard for election results with the detailed information that we want to present via VoteStream.

-- EJS

PS: If you want to look at a small artificial JEDI, it's right here: Arden County, a fictional county that has just 3 precincts, about a dozen districts, and a Nov/2012 election. It's short enough that you can page through it and get a feel for what kinds of data are required.




Election Results Reporting - Assumptions About Standards and Converters (concluded)

Last time, I explained how our VoteStream work depends on the 3rd of 3 assumptions: loosely, that there might be a good way to get election results data (and other related data) out of their current hiding places, and into some useful software, connected by an election data standard that encompasses results data. But what are we actually doing about it? Answer: we are building prototypes of that connection, and the lynchpin is an election data standard that can express everything about the information that VoteStream needs. We've found that the VIP format is an existing, widely adopted standard that provides a good starting point. More details on that later, but for now the key words are "converters" and "connectors". We're developing technology that proves the concept that anyone with basic data modeling and software development skills can create a connector, or data converter, that transforms election data (including but most certainly not limited to vote counts) from one of a variety of existing formats, to the format of the election data standard.

And this is the central concept to prove -- because as we've been saying in various ways for some time, the data exists but is locked up in a variety of legacy and/or proprietary formats. These existing formats differ from one another quite a bit, and contain varying amounts of information beyond basic vote counts. There is good reason to be skeptical -- to suppose that it is a hard problem to take these different shapes and sizes of square data pegs (and pentagonal, octahedral, and many other shaped pegs!) and put them in a single round hole.

But what we're learning -- and the jury is still out, promising as our experience is so far -- is that all these existing data sets have basically similar elements that correspond to a single standard, and that it's not hard to develop prototype software that uses those correspondences to convert to a single format. We'll get a better understanding of the tricky bits as we go along making 3 or 4 prototype converters.
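A converter of the kind described above can be sketched in a few lines. The legacy column names below are invented for illustration (each real converter would target a specific vendor's actual export format), but the shape of the job is just this: map one format's fields onto a common model:

```python
import csv
import io

# A hypothetical legacy export format -- column names invented for
# illustration; real vendor exports differ, which is why each needs
# its own small converter.
LEGACY_CSV = """PrecinctCode,RaceTitle,CandName,Votes
0101,GOVERNOR,Jane Smith,412
0101,GOVERNOR,John Doe,388
"""

def convert(legacy_text):
    """Map legacy column names onto the fields of a common results model."""
    rows = csv.DictReader(io.StringIO(legacy_text))
    return [
        {
            "precinct_id": r["PrecinctCode"],
            "contest": r["RaceTitle"].title(),
            "candidate": r["CandName"],
            "count": int(r["Votes"]),
        }
        for r in rows
    ]
```

Each differently-shaped peg gets its own `convert()`, but they all emit the same round hole.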

Much of this feasibility rests on a structuring principle that we've adopted, which runs parallel to the existing data standard that we've adopted. Much more on that principle, the standard, its evolution, and so on … yet to come. As we get more experience with data-wrangling and converter-creation, there will certainly be a lot more to say.

-- EJS



Election Results Reporting - Assumptions About Standards and Converters

It's time to finish -- in two parts -- the long-ish explanation of the assumptions behind our current "VoteStream" prototype stage of the TrustTheVote Project's Election Night Reporting System (ENRS) project. As I said before, it is an exercise in validating some key assumptions, and discovering their limits. Previously, I've described our assumptions about election results data, and the software that can present it. Today, I'll explain the 3rd of three basic assumptions, which in a nutshell is this:

  • If the data has the characteristics that we assumed, and
  • if the software (to present that data) is as feasible and useful as we assumed;
  • then there is a method for getting the data from its source to the reporting software, and
  • that method is practical for real-world elections organization, scalable, and feasible to be adopted widely.

So, where are we today? Well, as previous postings have described, we made a good start on validating the first 2 assumptions during the previous design phase. And since starting this prototype phase, we've improved the designs and put them into action. So far so good: the data is richer than we assumed; the software is actually significantly more flexible than before, and effectively presents the data. We're pretty confident that our assumptions were valid on those two points.

But where did the 2012 election results data come from, and how did it get into the ENRS prototype? Invented elections, or small transcribed subsets of real results, were fine for design; but in this phase it needs to be real data, complete data, from real election officials, used in a regular and repeated way. That's the kind of connection between data source and ENRS software that we've been assuming.

Having stated this third of three assumptions, the next point is about what we're doing to prove that assumption, and assess its limits. That will be part two of two, of this last segment of my account of our assumptions and progress to date.

-- EJS




Election Results Reporting - Assumptions About Software (concluded)

Today, I'll be concluding my description of one area of assumptions in our Election Night Reporting System project -- our assumptions about software. In my last post, I said that our assumptions about software were based on two things: our assumptions about election results data (which I described previously), and the results of the previous, design-centric phase of our ENRS work. Those results consist of two seemingly disparate parts:

  1. the UX design itself, that enables people to ask ENRS questions, and
  2. a web service interface definition, that enables software to ask ENRS questions.

In case (1), the answer is web pages delivered by a web app. In case (2) the answers are data delivered via an application programming interface (API). 

Exhibit A is our ENRS design website which shows a preliminary UX design for a map-based visualization and navigation of the election results data for the November 2010 election in Travis County, Texas. The basic idea was to present a modest but useful variety of ways to slice and dice the data that would be meaningful to ordinary voters and observers of elections. The options include slicing the data at the county level, or the individual precinct level, or in-between, and filtering by one of various different kinds of election results or contests or referenda. Though preliminary, the UX design was well received, and it's the basis for current work to do a more complete UX that also provides features for power users (data-heads) without impacting the view of ordinary observers.

Exhibit B is the application programming interface (API), or for now just one example of it:

That does not look like a very exciting web page (click it now if you don't believe me!), and a full answer of "what's an API" can wait for another day.


But the point here is that the URL is a way for software to request a very specific slice through a large set of data, and get it in a software-digestible way. The URL (which you can see above in the address bar) is the question, and the answer is what you see above as the page view. Now, imagine something like your favorite NBA or NFL scoreboard app for your phone, periodically getting updates on how your favorite candidate is doing, and alerting you in a similar way that you get alerts about your favorite sports team. That app asks questions of ENRS, and gets answers, in exactly the way you see above, but of course it is all "under the hood" of the app's user interface.
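As a sketch of that question-and-answer pattern (the URL layout and the JSON shape here are invented for illustration; the real ENRS API may differ), a scoreboard-style app would build a URL naming the slice it wants, and parse the structured answer it gets back:

```python
import json
from urllib.parse import urlencode

# The "question": a URL naming a specific slice of the results data.
# Path pattern and hostname are hypothetical, for illustration only.
def results_url(base, county, contest, precinct=None):
    path = f"{base}/elections/2010-11/counties/{county}/contests/{contest}"
    return path + ("?" + urlencode({"precinct": precinct}) if precinct else "")

url = results_url("https://enrs.example.org/api", "travis", "governor",
                  precinct="0101")

# The "answer": machine-readable data an app could poll for updates,
# instead of a human-readable page view.
sample_answer = json.loads("""{
  "contest": "governor",
  "precinct": "0101",
  "reporting": "100%",
  "counts": [{"candidate": "Jane Smith", "votes": 412},
             {"candidate": "John Doe", "votes": 388}]
}""")
```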

So, finally, we can re-state the software assumption of our ENRS project:

  • if one can get sufficiently rich election data, unlocked from the source, in a standard format,
  • then one can feasibly develop a lightweight modern cloud-oriented web app, including a web service, that enables election officials to both:
    • help ordinary people understand complex election results data, and
    • help independent software navigate that data, and present it to the public in many ways, far beyond the responsibilities of election officials.

We're trying to prove that assumption, by developing the software -- in our usual open source methodology of course -- in a way that (we hope) provides a model for any tech organization to similarly leverage the same data formats and APIs.

-- EJS



If it Walks Like a Duck, and Quacks Like a Duck...

So in the midst of participating in an amazing conference at MIT's Media Lab produced in conjunction with the Knight Foundation (thanks Chris Barr you rock!) today, we learned of additional conduct by the IRS with regard to their Exempt Division's handling of 1023 filings (for 501.c.3 determination).  In particular, this revelation appeared in an NY Times article today:

But groups with no political inclinations were also examined. “‘Open source software’ organizations seeking nonprofit status are usually for-profit business or for-profit support technicians of the software,” a lookout list warns. “If you see a case, elevate it to your manager.”

Please let us go on record once again, here and now:

  1. The OSDV Foundation's 1023 application (filed in February 2007) states with sworn affidavits under penalties of perjury, that our charter Article II(A), defines the OSDV Foundation as an organization organized as a nonprofit public benefit corporation, not for the private gain of any person, and it is organized under California Nonprofit Public Benefit Corporation Law for public and charitable purposes, with by-laws consistent with that charter.
  2. We are not a "for-profit business"; rather, we are a non-profit project, conducting our activities as such, and we never intend to be a commercial operation, or to convert into a commercial entity.
  3. We are genuinely seeking, through (continued) philanthropic support of larger grantor organizations, as well as through the generous support of individual citizens, to provide education, research, and reference development of blueprints for trustworthy elections technology, on an open source basis, for the benefit of elections jurisdictions nationwide, because the current commercial marketplace is woefully falling short of that capability.
  4. We are willing to ensure our reference designs and implementations, some reduced to software source code, remain up-to-date with as-then-published Federal and/or State-specific criteria or specifications, but we will not do so as a commercial venture.
  5. We reiterate here that we are not "for-profit support technicians" for any software that results from the efforts of this California public benefits non-profit corporation.

As you might imagine, these statements have been backed by over 6 years of considerable supporting documentation, sworn to be accurate and correct to the best of our knowledge, under penalties of perjury and backed by sworn, signed affidavits.

And to be sure, we take our sworn signed statements and veracity very seriously.

We remain hopeful that the IRS will ultimately acknowledge these points and grant us our exempt determination.



TrustTheVote Project Earns Backing from Knight Foundation Prototype Fund

Greetings All- Apologies for the extended radio silence.  I promise to one day be able to explain in some detail why that occasionally occurs, but for now I have to remain, um, silent on that point.  However, I am very happy to share with you that one of the additional reasons for being distracted from this forum has been work that resulted in today's announcement.

Indeed, the OSDV Foundation's TrustTheVote Project has earned a substantial grant from the Knight Foundation's Prototype Fund.  Such was a favorable consequence of being a near-bridesmaid in the Knight Foundation News Challenge, which we competed in earlier this Spring.  While we did not make the final cut for the major grant program, the Knight Foundation was sufficiently excited about our proposal for open data standards based election night reporting services that they awarded us a Prototype Grant.

You can learn more about our project here.  In a sentence, let me state the metes and bounds of this project.  We will share a little about what, how, why, and when in subsequent posts.

In a sentence our project is: Building an open source election night results reporting service tying directly into local and State elections data feeds (for which the TrustTheVote Project has already helped establish the required standards), with a public-facing web app, and a robust API to enable anyone to access reporting data for further analysis and presentation.

Some Details

So, essentially the Knight Foundation's Prototype Fund is designed to provide a "seed grant" to enable a prototype or "early Alpha" of an app, service, or system that advances the causes of civic media and citizen engagement with news and information.  Our Election Night Reporting Service is a perfect fit.  And this 6-month project is intended to finish the development and deployment stages for an evaluation/test run of the system.  I need to point out that it will definitely be a prototype and will not include some necessary components to put the system into production, but enough framework and scaffolding to conduct a robust "alpha test" for which 3 or 4 elections jurisdictions have agreed to participate.

We will announce those jurisdictions soon.  The test will utilize an early release of the Results Scoreboard -- a web-based app/service to display elections results.  The alpha will also deliver an API and data feed service.

In our next post, we will discuss some details about the project in terms of the what, how, and why. But let me say quickly that the name has some legacy meaning, because it's not just about election night -- it's about election reporting any time.  So, stay tuned!

I'd like to thank the OSDVF Board and TTV Project Advisers for their tremendous support and close work with us on the Knight News Challenge application, and the Core team and our CTO John Sebes for hammering out sufficient details originating in our work with Travis County, TX a couple of years ago.  Without their contributions -- many in the 11th hour and into the pre-dawn hours last March -- this would not have been possible.




For (Digital) Poll Books -- Custody Matters!

Today, I am presenting at the annual Elections Verification Conference in Atlanta, GA and my panel is discussing the good, the bad, and the ugly about the digital poll book (often referred to as the “e-pollbook”).  For our casual readers, the digital poll book or “DPB” is—as you might assume—a digital relative of the paper poll book… that pile of print-outs containing the names of registered voters for a given precinct wherein they are registered to vote. For our domain-savvy readers, the issues to be discussed today are on the application, sometimes overloaded application, of DPBs and their related issues of reliability, security, and verifiability.  So as I head into this, I wanted to echo some thoughts here about DPBs as we are addressing them at the TrustTheVote Project.

We've been hearing much lately about State and local election officials' appetite (or infatuation) for digital poll books.  We've been discussing various models and requirements (or objectives), while developing the core of the TrustTheVote Digital Poll Book.  But in several of these discussions, we’ve noticed that only two out of three basic purposes of poll books of any type (paper or digital, online or offline) seem to be well understood.  And we think the gap shows why physical custody is so important—especially so for digital poll books.

The first two obvious purposes of a poll book are to [1] check in a voter as a prerequisite to obtaining a ballot, and [2] to prevent a voter from having a second go at checking-in and obtaining a ballot.  That's fine for meeting the "Eligibility" and "Non-duplication" requirements for in-person voting.

But then there is the increasingly popular absentee voting, where the role of poll books seems less well understood.  In our humble opinion, those in-person polling-place poll books are also critical for absentee and provisional voting.  Bear in mind, those "delayed-cast" ballots can't be evaluated until after the post-election poll-book-intake process is complete.

To explain why, let's consider one fairly typical approach to absentee evaluation.  The poll book intake process results in an update to the voter record of every voter who voted in person.  Then, the voter record system is used as one part of absentee and provisional ballot processing.  Before each ballot may be separated from its affidavit, the reviewer must check the voter identity on the affidavit, and then find the corresponding voter record.  If the voter record indicates that the voter cast their ballot in person, then the absentee or provisional ballot must not be counted.
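That evaluation rule can be sketched in a few lines (our own minimal model, not any county's actual system): the poll book intake marks each voter who voted in person, and an absentee or provisional ballot is counted only if its voter's record carries no such mark:

```python
# Poll book intake: update the voter record of everyone who voted in person.
def intake_poll_books(voter_records, voted_in_person_ids):
    for vid in voted_in_person_ids:
        voter_records[vid]["voted_in_person"] = True

# Absentee/provisional evaluation: count the ballot only if the voter
# exists and did NOT already cast a ballot in person.
def should_count_absentee(voter_records, voter_id):
    rec = voter_records.get(voter_id)
    return rec is not None and not rec.get("voted_in_person", False)
```

Which is exactly why the intake data must be accurate: flip one bit in it and either a double vote slips through or a valid absentee ballot is rejected.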

So far, that's a story about poll books that should be fairly well understood, but there is an interesting twist when it comes to digital poll books (DPB).

The general principle for DPB operation is that it should follow the process used with paper poll books (though other useful features may be added).  With paper poll books, both the medium (paper) and the message (who voted) are inseparable, and remain in the custody of election staff (LEOs and volunteers) throughout the entire life cycle of the poll book.

With the DPB, however, things are trickier. The medium (e.g., a tablet computer) and the message (the data that's managed by the tablet, and that represents who voted) can be separated, although they should not be.

Why not? Well, we can hope that the medium remains in the appropriate physical custody, just as paper poll books do. But if the message (the data) leaves the tablet, and/or becomes accessible to others, then we have potential problems with accuracy of the message.  It's essential that the DPB data remain under the control of election staff, and that the data gathered during the DPB intake process is exactly the data that election staff recorded in the polling place.  Otherwise, double voting may be possible, or some valid absentee or provisional ballots may be erroneously rejected.  Similarly, the poll book data used in the polling place must be exactly as previously prepared, or legitimate voters might be barred.

That's why digital poll books must be carefully designed for use by election staff in a way that doesn't endanger the integrity of the data.  And this is an example of the devil in the details that's so common for innovative election technology.

Those devilish details derail some nifty ideas, like one we heard of recently: a simple and inexpensive iPad app that provides the digital poll book UI based on poll book data downloaded (via 4G wireless network) from “cloud storage” where an election official previously put it in a simple CSV file; and where the end-of-day poll book data was put back into the cloud storage for later download by election officials.

Marvelous simplicity, right?  Oh heck, I'm sure some grant-funded project could build that right away.  But it turns out that is wholly unacceptable in terms of chain of custody of the data that accurate vote counts depend on.  You wouldn't put the actual vote data in the cloud that way, and poll book data is no less critical to election integrity.

A Side Note:  This is also an example of the challenge we often face from well-intentioned innovators of the digital democracy movement who insist that we’re making a mountain out of a molehill in our efforts.  They argue that this stuff is way easier and ripe for all of the “kewl” digital innovations at our fingertips today.  Sure, there are plenty of very well designed innovations and combinations of ubiquitous technology that have driven the social web and now the emerging utility web.  And we’re leveraging and designing around elements that make sense here—for instance the powerful new touch interfaces driving today’s mobile digital devices.  But there is far more to it, than a sexy interface with a 4G connection.  Oops, I digress to a tangential gripe.

This nifty example of well-intentioned innovation illustrates why the majority of technology work in a digital poll book solution is actually in [1] the data integration (to and from the voter record system); [2] the data management (to and from each individual digital poll book), and [3] the data integrity (maintaining the same control present in paper poll books).
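As one hedged illustration of point [3], data integrity (a sketch of the general technique, not the actual TTV design): election staff could hold a secret key, compute a keyed MAC over the poll book data when it is recorded in the polling place, and verify it at intake, so any modification of the message between those two points of custody is detectable:

```python
import hashlib
import hmac
import json

# Placeholder key for illustration -- in practice, key management is
# itself a custody problem for election staff to solve.
SECRET_KEY = b"held-by-election-officials-only"

def seal(poll_book_data):
    """Serialize the poll book data and attach a keyed MAC."""
    payload = json.dumps(poll_book_data, sort_keys=True).encode()
    tag = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return payload, tag

def verify(payload, tag):
    """At intake: accept the data only if the MAC still checks out."""
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)
```

Note that this only detects tampering; preventing it, and keeping the data confidential, are additional requirements a real DPB design must meet.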

Without a doubt, the voter's user experience, as well as the election poll worker or official’s user experience, is very important—and we're gathering plenty of requirements and feedback based on our current work.  But before the TTV Digital Poll Book is fully baked, we need to do equal justice to those devilish details, in ways that meet the varying requirements of various States and localities.

Thoughts? Your ball (er, ballot?) GAM | out



Arizona Capitol Times Editorial on Election Data - Right?

I can't recall a newspaper editorial on election stuff that I've agreed with more than the Arizona Capitol Times' "Improved election data would mean a better informed electorate" ...though with a title like that you can imagine I'd be a fan.  But before I pick at just one part they got wrong, let me pick 3 of the 24 sentences worth quoting:

Sean Greene, the election initiatives research manager at Pew Charitable Trusts, argues that structuring election data in a uniform manner should be just as important as other government data, such as statistics in public health. ... The same attitude should be applied to election data. Ultimately, the ways that counties handle their election data affects the ability of journalists, researchers, lawmakers — or anyone else for that matter — to efficiently monitor our state’s election process.


Now let me paraphrase the part that I think is wrong, where there is a much brighter future.

Arizona Secretary of State Ken Bennett seems to be in favor of changes to (as we say here) "get the data out" for transparency and open-gov goodness (agreed) but pointed out that this is a county matter (true).  Yet, based on Bennett's remarks, the authors expressed concerns over the costs of the changes: "Counties are entitled to a level of autonomy and costs associated with changing any system can become prohibitive, especially if improvements are only being encouraged." (Our emphasis in underlining added.)

Here is the very good news: those costs are small, at least measured in dollars or in hours of staff time.  What is not required is changes to any existing software or election management system -- which would be costly.  What is required is some really boring basic scripting and querying to make a funky little tool that crunches over the existing database of election records (every county has one) and spits out the data in a public data format.  That's just conversion.  What is really not required is for any election office to create new systems for mining that data.  Rather, just get the data out in some format that others can grab and mine.
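For illustration, here is roughly what such a "funky little tool" amounts to; the table and column names are invented, since a real script would target the county's actual schema, but the whole job is one query and one dump:

```python
import json
import sqlite3

# Hypothetical export tool: query the county's existing database of
# election records and emit the data in a public format (JSON here).
# Table/column names are invented for illustration.
def export_results(db_path="county_elections.db"):
    conn = sqlite3.connect(db_path)
    rows = conn.execute(
        "SELECT precinct, contest, candidate, votes "
        "FROM results ORDER BY precinct"
    ).fetchall()
    conn.close()
    return json.dumps(
        [{"precinct": p, "contest": c, "candidate": cand, "votes": v}
         for (p, c, cand, v) in rows],
        indent=2)
```

That's the whole cost picture: no changes to the existing election management system, just a read-only query and a format conversion.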

But don't take it from me. I checked with a couple of very senior election technologists, from two ends of the country, whom I reached out to after reading the article.  This sort of thing is really not a lot of work, and indeed election IT teams get called on to make various sorts of data extracts frequently -- and sometimes have the time to make them.

The constraint is not the dollar cost. It is the opportunity cost. For a large county, this is a non-issue.  For a small county with maybe a couple of technical staff, the capability is there, but the time just isn't; they barely have enough time for their many existing responsibilities.  For a two-person shop, 10% of one person's work week is 5% of the whole team's week - and probably a higher proportion of the week's actual output.

This is where standards come into play.

If there were an agreed-to nationwide standard for what this data should look like, some of the larger counties would use it, and get a couple people to spend the hours to build such an export tool that works right.  And if the scripts and queries of that tool were published to share with other elections organizations -- especially those that use the same voting system product, as many of the counties in Arizona do -- others could benefit from the prior work with lots less effort.

But a standard is needed.

Even Better News: NIST (the National Institute of Standards and Technology) is working on a standard for exactly that!  And people are waiting for it to be finalized.  Our TrustTheVote Project team is waiting. I'm sure that some of the vendors have teams that are waiting.  A couple of large election offices' technical teams are waiting.  And I am sure that public interest organizations -- those that share the sentiments of the editorial's authors -- have been waiting for a long time for the election technology people to get the data out!

As I said, we are waiting too. When there is a standard, and the early adopters are getting the data out, then we will be collaborating with them on what to do with the data once it is out.  For starters, it will be integration with the existing TTV Analytics (big picture here), but even more interesting -- the TTV Election Night Reporting System, which takes that data and visualizes it for members of the public.
We've got work to do!  But I'm getting a bit sore from crouching in the starting blocks :-) More soon!

-- EJS



Jeffrey Valjean Cook, 1956 - 2013

We had some very sad news this week in the TrustTheVote Project -- the sudden and unexpected death of a key contributor: Jeffrey Valjean Cook. We all feel that among the ways to mark his passing, it's important to recall the many contributions he made. But for myself, I have to do so in a personal way, in the context of the 25 years that I've known and worked with him. Jeff was an example of many things to me – cook, artist, craftsman, surfer, father, writer, techie, election geek, colleague, friend – so this remembrance is not short. One of the many things that I learned from Jeff, in some ways the most important, and certainly what I'll miss the most … food and hospitality. Jeff was a great cook, and a great host. I'll miss working with him, but I'll really miss working with him in his Venice Beach house, fueled by his creations. When I met Jeff, he and Leo Marcus at Aerospace Corp. were extremely helpful as colleagues and mentors in some work that I was starting then, involving formal methods and information security. Good work stuff came out of that first meeting in El Segundo, but also the trading of some good recipes. We established a collegial relationship in the context of collaboration and hospitality outside work. I think that that beginning really was the foundation that enabled Jeff and me (and Jeff and many others too) to enjoy working together off and on for many years, with the ups and downs of the tech sector, the hassles, the hurry-up-and-waits, the standards committee grind, Federal procurement fun, and the dot-com boom-bust that took our company from a specialty R&D company to a publicly held product company to part of the then-third-largest software company in the world. Fast forward 25 years, and I'm looking at photos of a Jeff Cook feast on the smartphone of a colleague attending a conference after just having been in LA visiting with Jeff, talking about his latest creations.

Another time I learned from Jeff was way back in the day when Privacy Enhanced Mail (PEM, which was open before there was such a thing as open source) was pretty much it for any form of application security service (no SSL or IPsec or PGP then) and Public Key Infrastructure was just being invented. Oh, and no Internet then BTW. When I joined Trusted Information Systems (TIS), I had a steep learning curve to catch up with Jeff and with Dave Balenson and eventually a whole group of application security inventors at TIS. I learned from Jeff's insistence on the proper use of formalisms, careful syntax for data exchange, carefully defined semantics, and extreme clarity about where the semantics was actually absent or wobbly and we'd just have to forge ahead anyhow. From such a laid-back surfer dude, the formalism was ironic -- as was his use of the Hitchhiker's Guide philosophers' catchphrase ("demarcation! we demand rigidly defined areas of doubt and uncertainty!") for carefully noted semantic fuzziness. It was all the more gratifying when years later at OSDV, Jeff insisted on using BNF to clearly define our specifications, and put up with me when we ran up to lines of demarcation where we didn't really know yet what our real election officials needed – but were going to learn by making some stuff, showing it to them, and finding out what else was needed.

Another thing I learned from Jeff, though I know I haven't put it into practice as well and as broadly as he did, is hard work. Jeff was really careful not to commit to a batch of work and the resulting work product, until we got really clear on what was required, up to but not past some good lines of demarcation. But once we got there, he was a demon worker, coding like crazy to get it done, making an over-the-top, beyond-complete test case generator, picking up new programming languages and development tools, pattern-matching from examples until the example blew up, and leveraging the experience of everyone around him to put the pieces back together. That meant that things proceeded in fits and starts, but that was OK in TIS days -- we were always multi-tasking on multiple funded projects or internal R&D -- and it was OK in OSDV days because Jeff also did a lot of work as an artist and craftsman; he made great stuff from atoms even better than he made stuff from bits.

Jeff and I worked closely on some important projects in the last few years. It started about three and a bit years ago, when I was in LAX, heading home after some OSDV meetings about election tech work, pondering who I could ask to join the TrustTheVote team for some additional work that had come up. Light bulb moment, the answer was: Jeff Cook. I hadn't been in touch for a while, but I called him right then. I was pleased that the public interest part of the work grabbed him immediately, and that he even considered adding some new commitments to his plate. Not too long later, though, I was back at LAX with Jeff picking me up to go to some meetings with election officials -- and a fine lunch at a wacky little Peruvian place and a chance to see some of his art that happened to be in his car. That was a good start for Jeff to start the process of spinning up on election tech stuff, and being assimilated into the cozy crazy world of election geeks. Not too many months later, we were back-and-forthing on which kinds of data interchange could provide most technology-enabled transparency on the process of tabulating votes, taking into account the vagaries of precinct splits and residual votes.

Like all of us Jeff worked on a volunteer basis at first. But when one of our projects popped up an unexpected deliverable, Jeff was there immediately when I needed someone who was very strong on applied crypto, an experienced developer, and savvy on current open source platforms. He worked with colleagues at Red Hat to develop a crypto workstation, a system dedicated to helping election officials carry out some mission-critical duties around crypto key management. This turned out to be the technical basis for the later TrustTheVote Browser Appliance, the first (or maybe tied for first) technology for creating a system that would do all and only a specific set of web application client functions, thereby sidestepping an enormous amount of client security and Internet security and privacy issues faced by election officials (or anyone!) using regular Internet-connected general-purpose PCs as workstations. I get requests for help in this area every couple of months from election officials. And sad to say, we were gearing up for some DHS funding to take a next step in Jeff's work to make this and other technology packaged up and distributed for practical use by election IT groups. I'm not sure how we'll fill that one of the many holes left by Jeff's departure.

On another occasion, we had a sudden need for an appliance-based demo system for another important function in elections -- tabulating votes. A tabulator is (or should be) a dedicated system component of a voting system, whose sole purpose is to aggregate vote counts, and rack them all up. It's tempting to think of it as a fancy adding machine, but in reality, 99% of the work is consistency and completeness checking on the data coming in, to make sure that the election result data coming out is actually valid. We were embarking on the first ever trial of a voting system component that was standalone (not wired into a monolithic voting system product), and open-data based, and able to be trial-evaluated in a process that we can define in collaboration with the EAC. Jeff stepped right up, defined the open-data formats, cranked the application code in addition to the appliance platform technology, created test data, documentation, and more. A real quality job. Again, one of our hopes for 2013 is to get the funding to support a trial evaluation, and do so using the new data interchanges being defined by NIST and IEEE. If we can do that, it will be bittersweet to do it without Jeff, but I am confident that new folks can pick up his work, because he was so meticulous.

Jeff also led the software development work on a new component of the TrustTheVote suite, which we created in response to new demand from election officials. The TTV Analytics component, though basic in its first version, is actually the first part of TTV to go from zero to deployment and production use by election officials, in less than a year. The work was similar to the TTV Tabulator, in terms of aggregating a bunch of data, but in this case it was not about ballots, but about voter registration and absentee voter administration. The idea is that we should be able to take logs from disparate systems, whack them together, and have anonymized records of every administrative action on every voter record for a whole election cycle, and run stats and reports that would show voter outcomes sliced every which way, including demographics to show whether in some part of the state, voters of a particular demographic had skewed outcomes. It's a great idea, with lots of legs, and a cool plan for version 2 this year. We'll be doing that, and standardizing the common data formats, and setting up a public access demo system (so you can see how it works for election officials), and so on -- all without the guy who stood up and said "I don't know anything about voter record management and transactions and reporting, but I'll figure it out, challenge you on the specs and data formats, work it all out, learn another new language and platform, write the core code and the UI, package it for open-source distribution, and work with our Virginia SBE adopter's cloud hosting service to get it deployed." He never said it in so many words like that, more like "OK, let's get started!" but that's what it meant, and that's what he did.

So facing the planning of our work in 2013 and into the 2014 election cycle, we're all very much in Jeff's debt, and we'll feel his absence keenly as we continue the work he was engaged in.

But of course it is much more than a gap in the TTV team. And it is much more than OSDV and TrustTheVote. For me, it's a loss of a friend and colleague that I've known for half my life, and the majority of my career. No one knows better than I do what we could have achieved together in the next growth phase of TTV, and what these achievements could have done for the election officials who need the new technology. We'll do the work without him, and we'll really miss him, sure. But my own labor and experience will be the poorer for the lack of the continuing example of energy and commitment that Jeff provided; and just as much for not being able to ask him what's he cooking for dinner tonight, how he got the idea or the recipe to tweak, and what I can do in my kitchen tonight to add a new twist, or what his latest art piece is like, and to hear his enthusiasm and gusto for everything that he did and made, with bits or with atoms, every day, morning, noon, and night.

I won’t say rest in peace, because it’s hard to imagine a guy with so much energy for so many different things, actually resting – but I have to say: thank you, and good-bye.

-- John Sebes



Sequel to A.G. Holder "Fix That" -- Can Do!

In my last post, I said that we might be onto something, an idea for many of the benefits of universal automatic permanent voter registration, without the need for Federal-plus-50-states overhaul of policy, election law, and election technology that would be required for actual UAP VR. Here is a sketch of what that might be. I think it's interesting not because of being complex or clever -- which it is not -- but because it is sufficiently simple and simple-minded that it might feasibly be used by real election officials who don't have the luxury to spend money to make significant changes to their election administration systems. (By the way, if you're not into tales of information processing systems, feel free to skip to the punchline in the last paragraph.) Furthermore -- and this is critical -- this idea is simple enough that a proof of concept system could be put into place quite quickly and cheaply. And in election tech today, that's critical. To paraphrase the "show me" that we hear often: don't just tell me ideas for election tech improvements; show me something I can see, touch, and try, that shows that it would work in my current circumstances. With input from some election officials about what they'd need, and what that "show me" would be, here is the basic idea ...

The co-ordination of existing databases that A.G. Holder called for would actually be a new system, a "federated database" that does not try to coordinate every VR status change of every person, but instead enables a best-efforts distribution of advisory information from various government organizations, to participating election officials who work on those two important principles that I explained in my last post. This is not a clearing-house, not a records matching system, but just something that distributes info about events.

Before I explain what the events could be and how the sharing happens, let me bracket the issue of privacy. Of course all of this should be done in a privacy-protecting way with anonymized data, and of course that's possible. But whenever I say "a person with a DOB of X" or something like that, remember that I am really talking about some DOB that is one-way-hashed for privacy. Secondly, for the sake of simple explanation, I'm assuming that SSN and DOB can be used as a good-enough nearly-unique identifier for these purposes, but the scheme works pretty much the same with other choices of identifying information. (By the way, I say nearly-unique because it is not uncommon for a VR database to have two people with the same SSN because of data-entry typos, hand-writing issues, and so forth.)
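A minimal sketch of that one-way hashing might look like the following. The function name, the choice of SHA-256, and the salt value are all illustrative assumptions, not part of any actual "Holder" design:

```python
import hashlib

def voter_key(ssn: str, dob: str, salt: str = "shared-deployment-secret") -> str:
    """One-way hash of SSN plus date of birth, so records can be matched
    across systems without ever exchanging the raw identifiers.  The salt
    (a stand-in value here) would be a secret shared by the participating
    agencies, so that outsiders can't test guesses against the hashes."""
    return hashlib.sha256(f"{salt}|{ssn}|{dob}".encode("utf-8")).hexdigest()
```

The same inputs always produce the same key, so two systems can detect that they are talking about the same person, but neither the SSN nor the DOB can be read back out of the key.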

To explain this system, I'll call it "Holder" both because of the A.G. and because I like the idea that everything in this system is a placeholder for possible VR changes, rather than anything authoritative. And because this is a Federal policy goal, I'll tell a story that involves Federal activity to share information with states -- and also because right now that's one of the sources of info that states don't actually have today!

Now, suppose that every time a Federal agency -- say the IRS or HHS -- did a transaction with a person, and that involved the person's address, that agency posts a notification into "Holder" that says that on date D, a person with SSN and DOB of X and Y claimed a current address of Z. This is just a statement of what the agency said the person said, and isn't trying to be a change-of-address. And it might, but needn't always, include an indication of what type of transaction occurred. The non-authoritative part is important. Suppose there's a record where the X and Y match a registered voter Claire Cornucopia of 1000 Chapel St., New Haven CT, but the address is not in CT. The notification might indicate a change of address, but it might be a mistake too. Just today I got mail from a government organization that had initially sent it to a friend of mine in another state. Stuff happens.
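The shape of such an advisory notification could be as simple as the sketch below. The field names are illustrative only (nothing here is a proposed standard), and the "person" field carries the one-way-hashed identifier described earlier, never the raw SSN or DOB:

```python
import json
from datetime import date

def make_notification(agency: str, hashed_id: str, address: dict,
                      txn_type: str = None) -> str:
    """Build an advisory 'Holder' notification: on this date, the named
    agency reports that a person with the given hashed identifier claimed
    the given address.  All field names are illustrative stand-ins."""
    note = {
        "agency": agency,
        "date": date.today().isoformat(),
        "person": hashed_id,      # one-way hash of SSN + DOB, never the raw values
        "claimed_address": address,
        "authoritative": False,   # advisory only; never a change-of-address order
    }
    if txn_type is not None:      # the transaction type is optional
        note["transaction_type"] = txn_type
    return json.dumps(note)
```

Note the explicit "authoritative": False marker: nothing in the feed ever directly changes a voter record.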

State VR operators could access "Holder" to examine this stream of notifications to find cases where it seems to be about a voter that is currently registered in that state, or isn't but possibly should be. If there is a notification that looks like a new address for an existing voter, then they can reach out to the voter -- for example, email, postal mail to the current address on file, postal mail to the possibly new address. In keeping with current U.S. practice:

  • it is up to the voter to maintain their voter record;
  • election officials must update a record when a voter sends a change;
  • without info from a voter, election officials can change a record only in specific ways authorized by state election law.

The point here is to make it easier for election officials to find out that a person might need to take some action, and to help that person do so. The helping part is a separate matter, including online voter services, but conceivably this type of system would work (albeit with a lower participation rate) even in a system limited to postal mail asking voters to fill out a paper form and mail it back.
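The scanning side could be equally simple. Here is a sketch of how one state's VR operator might triage the feed, assuming notifications are plain records with a hashed person identifier and a claimed address (illustrative field names, as before):

```python
def scan_for_state(notifications, voter_index, state):
    """Triage the Holder feed from one state VR operator's point of view.
    voter_index maps hashed identifiers to on-file voter records; all
    names and shapes here are illustrative.  Returns (reason, notification)
    pairs for a human to follow up on with outreach to the voter -- the
    system itself never changes a voter record."""
    hits = []
    for note in notifications:
        record = voter_index.get(note["person"])
        if record is not None:
            # Registered here, but claiming an address in another state?
            if note["claimed_address"]["state"] != record["state"]:
                hits.append(("possible_move", note))
        elif note["claimed_address"]["state"] == state:
            # Not registered here, but apparently living here.
            hits.append(("possible_new_voter", note))
    return hits
```

Everything this produces is a prompt for outreach, in keeping with the three principles above: the voter, not the system, decides what actually changes.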

Next, let's imagine the scenarios that this system might enable, in terms of the kinds of outreach that a voter could receive, not limited to change of address as I described above.

  1. "Hey, it looks like you changed your mailing address - does that mean that you changed your residence too? If so, here is how you should update your voter record …"
  2. "Hey, it looks like you now live in the state of XX but aren't registered to vote - if so, here is what you should do to find out if you're eligible to vote … …"
  3. "Hey, it looks like you just signed up for selective service - so you are probably eligible to vote too, and here is what you should do …"

Number 3 -- and other variations I am sure you can think of -- is especially important as a way to approximate the "automatic" part of A.G. Holder's policy recommendation, while number 1 is the "permanent" part, and number 2 is part of both.

With just a little trial-ballooning to date, I'm fairly confident that this "Holder" idea would complement existing VR database maintenance work, and has the potential to connect election officials with a larger number of people than they currently connect with. And I know for sure that this does not require election officials to change the existing way that they manage voter records. But how about technical feasibility, cost, and so on? Could it pass the "show me" test?

Absolutely, yes. We've done some preliminary work on this, and it is the work of a few weeks to set up the federated database, and the demo systems that show how Federal and state organizations would interact with it. But I don't mean that it would be a sketchy demo. In fact, because the basic concept is so simple, it would be a nearly complete software implementation of the federated database and all the interactions with it. Hypothetically, if there were a Federal organization that would operate "Holder", and enough states that agreed that its interface met their needs for getting started, a real "Holder" system could be set up as quickly as that organization could amend a services agreement with one of its existing I.T. service provider organizations, and set up MOUs with other Federal agencies.

Which is, of course, not exactly "quick" -- but the point is that the show-me demonstrates that the enabling technology exists in an immediately usable (and budgetable) form, to justify embarking on the other 99% of the work that is not technology work. Indeed, you almost have to have the tech part finished before you can even consider the rest of it; an idea by itself will not do.

Lastly, is this reasonable or are we dreaming again? Well, let's charitably say that we are dreaming the same voting rights dream that A.G. Holder has, and we're here to say from the standpoint of election technology, that we could do the tech part nearly overnight, in a way that enables adoption that requires much administrative activity, but not legal or legislative activity. For techies, that's not much of a punchline, but for policy folks who want to "fix that" quickly, it may be a very pleasant surprise.

-- EJS



Election Tech "R" Us - and Interesting Related IP News

Good Evening-- On this election night, I can't resist pointing out the irony of the USPTO's news of the day for Election Day earlier: "Patenting Your Vote," a nice little article about patents on voting technology.  It's also a nice complement to our recent posting on the other form of intellectual property protection on election technology -- trade secrets.  In fact, there is some interesting news of the day about how intellectual property protections won't (as some feared) inhibit the use of election technology in Florida.

For recent readers, let's be clear again about what election technology is, and our mission. Election technology is any form of computing -- "software 'n' stuff" -- used by election officials to carry out their administrative duties (like voter registration databases), or by voters to cast a ballot (like an opscan machine for recording votes off of a paper ballot), or by election officials to prepare for an election (like defining ballots), or to conduct an election (like scanning absentee ballots), or to inform the public (like election results reporting). That covers a lot of ground for "election technology."

With the definition, it's reasonable to say that "Election Technology 'R' Us" is what the TrustTheVote Project is about, and why the OSDV Foundation exists to support it.  And about intellectual property protection?   I think we're clear on the pros and cons:

  • CON: trade secrets and software licenses that protect them. These create "black box" for-profit election technology that seems to decrease rather than increase public confidence.
  • PRO: open source software licenses. These enable government organizations to [A] adopt election technology with a well-defined legal framework, without which the adoption cannot happen; and [B] enjoy the fruits of the perpetual harvest made possible by virtue of open source efforts.
  • PRO: patent applications on election technology.  As in today's news, the USPTO can make clear which aspects of voting technology can or can't be protected with patents that could inhibit election officials from using the technology, or require them to pay licensing fees.
  • ZERO SUM: granted patents on techniques or business processes (used in election administration or the conduct of elections) in favor of for-profit companies.  Downside: can increase costs of election technology adoption by governments. Upside: if the companies do have something innovative, they are entitled to I.P. protection, and it may motivate investment in innovation.  Downside: we haven't actually seen much innovation by voting system product vendors, or contract software development organizations used by election administration organizations.
  • PRO: granted patents to non-profit organizations.  To the extent that there are innovations that non-profits come up with, patents can be used to protect the innovations so that for-profits can't nab the I.P., and charge license fees back to governments running open source software that embodies the innovations.

All that stated, the practical upshot as of today seems to be this: there isn't much innovation in election technology, and that may be why for-profits try to use trade secret protection rather than patents.

That underscores our practical view at the TrustTheVote Project: a lot of election technology isn't actually hard, but rather simply detailed and burdensome to get right -- a burden beyond the scope of all but a few do-it-ourselves elections offices' I.T. groups.

That's why our "Election Technology 'R' Us" role is to understand what the real election officials actually need, and then to (please pardon me) "Git 'er done."

What we're "getting done" is the derivation of blueprints and reference implementations of an elections technology framework that can be adopted, adapted, and deployed by any jurisdiction with common open data formats, processes, and verification and accountability loops designed-in from the get-go.  This derivation is based on the collective input of elections experts nationwide, from every jurisdiction and every political process point of view.  And the real beauty: whereas no single jurisdiction could possibly ever afford (in terms of resources, time or money) to achieve this on their own, by virtue of the collective effort, they can, because everyone benefits -- not just from the initial outcomes, but from the on-going improvements and innovations contributed by all.

We believe (and so do the many who support this effort) that the public benefit is obvious and enormous: from every citizen who deserves their ballots counted as cast, to every local election official who must have an elections management service layer with complete fault tolerance in a transparent, secure, and verifiable manner.

From what we've been told, this certainly lifts a load of responsibility off the shoulders of elections officials and allows it to be more comfortably distributed.  But what's more, regardless of how our efforts may lighten their burden, the enlightenment that comes from this clearinghouse effect is of enormous benefit to everyone by itself.

So, at the end of the day, what we all benefit from is a way forward for publicly owned critical democracy infrastructure.  That is, that "thing" in our process of democracy that causes long lines and insecurities, which the President noted we need to fix during his victory speech tonight.  Sure, it's about a lot of process.  But where there will inevitably be technology involved, well, that would be the TrustTheVote Project.




Bedrock 2: Adventures at the Board of Elections

Today, we'll continue our illustrative story of elections -- and as in the first installment of the story, we'll keep it simple with the setting in the Town of Bedrock. As we tune in, we find Fred Flintstone in downtown Bedrock at the offices of Cobblestone County's Bedrock Board of Elections (BBoE). He's checking up on the rumor that Mayor Flint Eastrock has resigned, and that there is a Special Mayoral Election scheduled. When Fred asks BBoE staffer Rocky Stonerman, Rocky replies, "Of course Fred! Just check the BBoE's public slab-site." Going back outside the BBoE offices, he checks the public slab-site, and sure enough there is a newly posted slab announcing the election. Fred tells Rocky he'd like to run, and Rocky explains how Fred needs to apply as a candidate, and what the eligibility rules are. "Fred, I'll tell you straight up, don't bother to fill out the application, you're not eligible because you're a Quarry Commissioner. If you want to run for Mayor, you'll need to resign first, and then apply as a mayoral candidate."

"Yabba dabba doo - that's what I'm here to do!" Forms and formalities of resignation then taken care of, Rocky gives Fred an application slab and chisel, and then grabs a chisel and runs out to update the Upcoming Elections slab page to include information about the contest for Quarry Commission, Seat #2. In the meantime, Fred has finished his application, and hands it in when Rocky returns. "Did you put me on the candidate list?"

"Of course not, Fred. We have to process your application! Best bet is to come down tomorrow -- I'm going to have to pull your voter record from our voter record tablet-base system. And I've got to tell you, it'll take a while -- the VRTB has thousands of records. We're still running on an unsupported old ScryBase system! Wish we had funding to upgrade, but not so far. Petro tells me that we should look at an open-stone MyScryql system, and …"

Not so interested in Rocky and Petro's slabs and tablets, Fred interrupts, "And what about that referendum?" Rocky replies, "Oh yes! The Quarry Courier brought over an application yesterday, but it didn't have all the commissioners' signatures on the application. You probably want to get the commission to fix that -- unless you want to go the petition route, though you'd need 300 signatures and frankly I don't know if our TBMS has room, because …"

Having heard more than enough about stone-age election technology for one day, Fred beats a hasty retreat. Tune in for the next installment, to find out if Fred actually gets to run for mayor.

-- EJS



Open-Source Election Software Hosting -- What Works

Putting an open source application into service - or "deployment" - can be different from deploying proprietary software. What works, and what doesn't? That's a question that's come up several times in the last few weeks, as the TTV team has been working hard on several proposals for new projects in 2011. Based on our experiences in 2009-10, here is what we've been saying about deployment of election technology that is not a core part of certified ballot casting/counting systems, but is part of the great range of other types of election technology: data management solutions for managing election definitions, candidates, voter registration, voter records, pollbooks and e-pollbooks, election results, and more - and reporting and publishing the data. For proprietary solutions - off the shelf, or with customization and professional services, or even purely custom applications like many voter record systems in use today - deployment is most often the responsibility of the vendor. The vendor puts the software into the environment chosen by the customer -- state or local election officials - ranging from the customer's IT plant to outsourced hosting to the vendor's offering of a managed service in an application-service-provider approach. All have distinct benefits, but share the drawback of "vendor lock-in."

What about open-source election software? There are several approaches that can work, depending on the nature of the data being managed, and the level of complexity in the IT shop of the election officials. For today, here is one approach that has worked for us.

What works: outsourced hosting, where a system integrator (SI) manages outsourced hosting. For our 2010 project for VA's FVAP solution, the project was led by an SI that managed the solution development and deployment, providing outsourced application hosting and support. The open-source software included a custom Web front-end to existing open-source election data management software that was customized to VA's existing data formats for voters and ballots. This arrangement worked well because the people who developed the custom front-end software also performed the deployment on a system completely under their control. VA's UOCAVA voters benefited from the voter service blank-ballot distribution, while the VA state board of elections was involved mainly by consuming reports and statistics about the system's operation.

That model works, but not in every situation. In the VA case, this model also constrained the way that the blank ballot distribution system worked. In this case, the system did not contain personal private information -- the VA-provided voter records were "scrubbed". As a result, it was OK for the system's limited database to reside in a commercial hosting center outside of the direct control of election officials. The deployment approach was chosen first, and it constrained the nature of the Web application.

The constraint arose because the FVAP solution allowed voters to mark ballots digitally (before printing and returning them by post or express mail). Therefore it was essential that the ballot-marking be performed solely on the voter's PC, with absolutely no visibility by the server software running in the commercial datacenter. Otherwise, each voter's choices would be visible to a commercial enterprise -- clearly violating ballot secrecy. The VA approach was a contrast to some other approaches, in which a voter's choices were sent over the Internet to a server that prepared a ballot document for the voter. To put it another way …
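
To make the distinction concrete, here is a toy sketch of client-side-only ballot marking -- all names and data formats are hypothetical, and this is not the actual FVAP software. The point is the data flow: the server supplies only the blank ballot definition, and the voter's choices are combined with it entirely on the voter's own machine.

```python
# Hypothetical sketch of client-side ballot marking.
# The server only ever sends a *blank* ballot definition;
# the voter's choices never leave the voter's PC.

BLANK_BALLOT = {  # what the server sends -- contains no voter data
    "contest": "County Clerk",
    "candidates": ["Ann Alvarez", "Bob Baker"],
}

def mark_ballot_locally(blank_ballot, choice):
    """Runs on the voter's PC only: produce a printable marked ballot."""
    assert choice in blank_ballot["candidates"]
    lines = [blank_ballot["contest"]]
    for name in blank_ballot["candidates"]:
        mark = "[X]" if name == choice else "[ ]"
        lines.append(f"{mark} {name}")
    return "\n".join(lines)  # the voter prints this and returns it by mail

marked = mark_ballot_locally(BLANK_BALLOT, "Ann Alvarez")
print(marked)
```

In the contrasting server-side approach, the `choice` argument would be sent over the network to be merged into the ballot in the datacenter -- which is exactly the ballot-secrecy violation the VA design avoided.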

What doesn't work: hosting of government-privileged data. In the case of the FVAP solution, this would have been outsourced hosting of a system with visibility into the most sensitive election-related data of all: voters' ballot choices.

What works: engaged IT group. A final ingredient in this successful recipe was the engagement of a robust IT organization at the state board of elections. The VA system was very data-intensive during setup, with large amounts of data from legacy systems. The involvement of VA SBE IT staff was essential to getting the job done: dumping the data, scrubbing and re-organizing it, checking it, and loading it into the FVAP solution -- and doing this several times as the project progressed to the point where voter and ballot data were fixed.
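
That scrub step is worth a concrete illustration. Here's a minimal sketch of the idea, with purely hypothetical field names -- this is not VA's actual schema or the project's code: personally identifying fields are stripped from each legacy record before the data ever leaves the officials' direct control.

```python
# Hypothetical sketch of the "scrub" step: strip personally identifying
# fields from legacy voter records before loading them into an
# outsourced system. Field names are illustrative only.

SENSITIVE_FIELDS = {"ssn", "dob", "phone"}

def scrub_record(record):
    """Return a copy of a voter record with sensitive fields removed."""
    return {k: v for k, v in record.items() if k not in SENSITIVE_FIELDS}

legacy = [
    {"voter_id": "V001", "precinct": "12-A", "ssn": "xxx-xx-1234", "dob": "1970-01-01"},
    {"voter_id": "V002", "precinct": "12-B", "ssn": "xxx-xx-5678", "dob": "1980-02-02"},
]

scrubbed = [scrub_record(r) for r in legacy]
assert all(SENSITIVE_FIELDS.isdisjoint(r) for r in scrubbed)
```

The repeated dump/scrub/check/load cycle the SBE staff performed is essentially this pipeline run again and again as the legacy data and ballot definitions settled.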

To sum up what worked:

  • data that was OK to be outside direct control of government officials;
  • government IT staff engaged in the project so that it was not a "transom toss" of legacy data;
  • development and deployment managed by a government-oriented SI;
  • deployment into a hosted environment that met the SI's exact specifications for hosting the data management system.

That recipe worked well in this case, and I think would apply quite well for other situations with the same characteristics. In other situations, other models can work. What are those other models, or recipes? Another day, another blog on another recipe.

-- EJS



King's Mighty Stream, Re-Visited

As I often do, I had a thoughtful Martin Luther King Day -- as you can see from my still pondering a couple of days later. But I think I now have something to share. Last time I wrote on MLK, I likened two unlikely things:

  • King's demand for social justice and peace, using Amos's prophetic words that "Justice shall roll down like water, and righteousness like a mighty stream."
  • My vision of really meaningful election transparency, stemming from a mighty torrent of data that details everything that happened in a county's conduct of an election, published in a form anyone can see, and can use to check whether the election outcomes are actually supported by the data.

Still a bit of a stretch, no doubt, because since my little moment by the waterfalls of the MLK memorial in San Francisco, I've had rather mixed success in explaining why this kind of transparency is so difficult. Among the reasons are the complexity of the data, and the very inconvenient way it is locked up inside voting system products and proprietary data formats.

But perhaps more important, it is just a vexingly detailed and complicated process to administer elections and conduct voting and counting -- paradoxically made even more complex by the addition of new technology. (Just ask a New York state election admin person about 2010.) In some cases, I am sure that local election officials would not take umbrage at the phrase "Rube Goldberg Machine" to describe the whole passel of people, process, and tools.

So, among my new year's resolutions, I am going to try to communicate, by example, a large part of the scope of data and transparency that is needed in U.S. elections. It will take some time to do in small digestible blogs, but I hope the example will serve to illustrate several things:

  • What election administration is really like;
  • What kinds of information and operations are used;
  • How a regular process of capturing and exposing the information can prevent some of the mishaps, doubts, and litigation you've often read about here; and
  • Last but not least, how the resulting transparency connects directly to the nuts-and-bolts election technology work that we are doing on vote tabulation and on digital pollbooks.

One challenge will be keeping the example at an artificially small scale, for comprehensibility, while still providing meaningful examples of the data and the election officials' work to use it. On that point especially, feedback will be particularly welcome!

-- EJS



Dude, What's My Ballot?

(Part 1 of 2: What's My Ballot?) Having recently written about my CA primary voting experience, now is a good time to compare and contrast with some of the overseas-voter Internet voting pilots. The previous question "Where's My Ballot?" applies just as well, but in some cases, we also have the question "What is my ballot?"

Starting with a recap of my polling place experience, let's compare based on what we have at the end of the day. After a grueling 16-plus-hour day of community service at the fire station up on Middlefield Road, poll workers packed up ballots in two different forms. First, there is a funky little computer called the Hart JBC. Most of the ballots were bits stored on disk in the JBC, also represented as paper trails on paper rolls that the poll workers detached from the DRE voting machines. Second, some of the ballots were hand-marked paper ballots in a ballot box. (I would prefer that the paper ballots have an electronic backup representation, via polling place optical scan, so that every ballot is represented both digitally and physically.)

Now let's compare with the end-of-day result for the more common of two current Internet-based voting schemes for overseas voters -- fax or email return of marked vote-by-mail ballots. For those who love to hate these schemes, say what you like about the shortcomings, but it's clear what we have at the end of the day. For some weeks prior to election day, fax machines have been spitting out arriving faxed ballots (along with absentee voter attestation documents), and printers have been spitting out arriving email attachments (ballot and attestation). The attestations were reviewed, and for each acceptable attestation, the corresponding ballot was separated and set aside for counting. On election day, those ballots were counted, in the same way -- or even exactly the same way -- as the absentee ballots that arrived via USPS or FedEx.

In fact, that set-aside pile of admissible ballots isn't all that different than the ballot box full of hand-marked paper ballots from the Middlefield Road fire station. It's quite clear what the ballot is, and where it is.

Now, to be fair, let me mention that fax and email ballot return schemes have notable problems that we don't have at the fire station: secret ballot and voter anonymity are kaput, and there are loads of ways that ballots can be tampered with. There's more to say about that, and we already have, but let's stay focused on the comparison: the what and where of the ballots are as clear as in my in-person voting experience, and perhaps even clearer for this election, where I voted on a DRE. That clarity and ease of understanding -- both the what and the where -- is one reason why I believe email and fax return are as popular as they are: familiarity and comprehension breed trust more strongly than trust is diluted by security experts pointing out risks.

Next up: In Part 2, I'll provide a similar comparison to another form of Internet-based voting: home-based client-server Internet voting, which I like to call "voting a la surveymonkey." Stay tuned …

-- EJS



NYT on E-mail Voting

The New York Times' Ian Urbina recently wrote an interesting article on various states' activities around allowing email as a means for return of marked ballots from overseas and military voters. I recommend reading it for a number of reasons, but especially because of the large number of rather nuanced issues that Urbina touches on in the space of a relatively brief article. My top picks:

  • "Internet voting" versus "email voting" - the term "Internet voting" covers a lot of ground, including variants where there is no absentee ballot per se, but instead some surveymonkey-style screen clicks and browser action to translate the clicks into bits in an HTTP POST operation.
  • "email voting" versus "casting ballot" - email voting includes cases where there is a physical absentee ballot that the voter scans in order to email it; when they email the image, is the ballot "cast"? Could be, but is "casting" the point of the email?
  • "casting ballot" versus "transporting ballot" - email is fundamentally a means for transporting blobs of text that can be augmented with attached files. When voters use email, are they transporting their ballot back to the BOE, or casting their ballot?

Angels could dance on heads of pins indefinitely here, but this dance-ability does show how the conversation can get quite murky quite easily. Another point of subtlety is that parts of what Urbina reports on are pilot efforts on email return, while others are email return as a bona fide standing practice; and some efforts are not about email voting at all, but rather kiosk-based remote voting.

Once you read the article, you'll have a much better idea why discussion of this topic is likely to get more confused, and confusing, before it gets any clearer.

-- EJS



EAC Guidelines for Overseas Voting Pilots

Last Friday was a busy day for the federal Election Assistance Commission.  They issued their Report to Congress on efforts to establish guidelines for remote voting systems.  And they closed their comment period at 4:00pm for the public to submit feedback on their draft Pilot Program Testing Requirements.  This is being driven by the MOVE Act implementation mandates, which we have covered previously here (and summarized again below).  I want to offer a comment or two on the 300+ page report to Congress and on the Pilot program guidelines, for which we submitted some brief comments, most of which reflected the comments submitted by ACCURATE, friends and advisers of the OSDV Foundation.

To be sure, the size of the Congressional Report is due to the volume of content in the Appendices including the full text of the Pilot Program Testing Requirements, the NIST System Security Guidelines, a range of example EAC processing and compliance documents, and some other useful exhibits.

Why Do We Care? The TrustTheVote Project’s open source elections and voting systems framework includes several components useful for configuring a remote ballot delivery service for overseas voters.  And the MOVE Act updates existing federal regulations intended to ensure that voters stationed or residing (not visiting) abroad can participate in elections at home.

A Quick Review of the Overseas Voting Issue The Uniformed and Overseas Citizens Absentee Voting Act (UOCAVA) protects the absentee voting rights of U.S. Citizens, including active members of the uniformed services and the merchant marine, and their spouses and dependents who are away from their place of legal voting residence.  It also protects the voting rights of U.S. civilians living overseas.  Election administrators are charged with ensuring that each UOCAVA voter can exercise their right to cast a ballot.  In order to fulfill this responsibility, election officials must provide a variety of means for these voters to obtain information about voter registration and voting procedures, and to receive and return their ballots.  (As a side note, UOCAVA also establishes requirements for reporting statistics on the effectiveness of these mechanisms to the EAC.)

What Motivated the Congressional Report? The MOVE (Military and Overseas Voter Empowerment) Act, which became law last fall, is intended to bring UOCAVA into the digital age.  Essentially, it mandates a digital means to deliver a blank ballot.

Note: the law is silent on a digital means to return prepared ballots, although several jurisdictions are already asking the obvious question:  "Why improve only half the round trip of an overseas ballot casting?"

And accordingly, some Pilot programs for MOVE Act implementation are contemplating the ability to return prepared ballots.  Regardless, there are many considerations in deploying such systems, and given that the EAC is allocating supporting funds to help States implement the mandates of the MOVE Act, the agency is charged with ensuring that those monies are allocated to programs adhering to guidelines it promulgates.  I see it as a "checks and balances" effort to ensure EAC funding is not spent on system failures that put UOCAVA voters at risk of disenfranchisement.

And this is reasonable given the MOVE Act intent.  After all, in order to streamline the process of absentee voting and to ensure that UOCAVA voters are not adversely impacted by the transit delays involved due to the difficulty of mail delivery around the world, technology can be used to facilitate overseas absentee voting in many ways from managing voter registration to balloting, and notably for our purposes:

  • Distributing blank ballots;
  • Returning prepared ballots;
  • Providing for tracking ballot progress or status; and
  • Compiling statistics for UOCAVA-mandated reports.

The reality is, however, that systems deployed to provide these capabilities face a variety of threats.  If technology solutions are not developed or chosen so as to be configured and managed using guidelines commensurate with the importance of the services provided and the sensitivity of the data involved, a system compromise could carry severe consequences for the integrity of the election or the confidentiality of sensitive voter information.

The EAC was therefore compelled to report to Congress and to establish (at least) voluntary guidelines.  And so we commented on those guidelines, as did colleagues of ours from other organizations.

What We Said - In a Nutshell Due to the very short comment period, we were unable to dive into the depth and breadth of the Testing Requirements.  And that’s a matter for another commentary.  Nevertheless, here are the highlights of the main points we offered.

Our comments were developed in consultation with ACCURATE; they consisted of (a) underlining a few of the ACCURATE comments that we believed were most important from our viewpoint, and (b) adding a few suggestions for how Pilots should be designed or conducted.  Among the ACCURATE comments, we underscored:

  • The need for a Pilot's voting method to include a robust paper record, as well as complementary data, that can be used to audit the results of the pilot.
  • Development of, and publication of security specifications that are testable.

In addition, we recommended:

  • Development of a semi-formal threat model, and comparison of it to threats of one or more existing voting methods.
  • Testing in a mock election, in which members of the public can gain understanding of the mechanisms of the pilot, and perform experimentation and testing (including security testing), without impacting an actual election.
  • Auditing of the technical operations of the Pilot (including data center operations), publication of audit results, and development of a means of cost accounting for the cost of operating the pilot.
  • Publication of ballots data, cast vote records, and results of auditing them, but without compromising the anonymity of the voter and the ballot.
  • Post-facto reporting on means and limits of scaling the size of the pilot.
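
On the last recommendation, here is a toy sketch of the basic idea, with entirely hypothetical record formats: strip voter identifiers and shuffle cast vote records before publication, so published records preserve the tallies but cannot be linked back to voters by identity or by position. (A real pilot would need far more care -- small precincts, write-ins, and timestamps can all leak identity.)

```python
import random

# Toy sketch (not a pilot specification): publish cast vote records
# with voter identifiers stripped and the order shuffled, so published
# records cannot be linked back to individual voters by position.

def publishable_cvrs(cast_records, rng=random.Random(0)):
    """Return anonymized, shuffled cast vote records suitable for publication."""
    anonymized = [{"contest": r["contest"], "choice": r["choice"]}
                  for r in cast_records]
    rng.shuffle(anonymized)  # break the link between record order and voter
    return anonymized

cast = [
    {"voter_id": "V001", "contest": "Mayor", "choice": "Alvarez"},
    {"voter_id": "V002", "contest": "Mayor", "choice": "Baker"},
    {"voter_id": "V003", "contest": "Mayor", "choice": "Alvarez"},
]

published = publishable_cvrs(cast)
assert all("voter_id" not in r for r in published)

# Anyone can recompute the tally from the published records:
tally = {}
for r in published:
    tally[r["choice"]] = tally.get(r["choice"], 0) + 1
```

That last loop is the transparency payoff: the published data is sufficient for anyone to check the reported results, even though voter identity is gone.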

You can bet this won't be the last we'll hear about MOVE Act Pilot issues; I think it's just the 2nd inning of an interesting ball game... GAM|out



Internet Voting - Is it Ever Possible? Should it be?

In the aftermath of the Internet Voting debate, notwithstanding my goof in forgetting to toss a coin to determine which team went last with their closing argument, the noise has settled down a bit.  I'm not sure if this is really the prevailing opinion, but if I ventured a guess, I would say most concluded we actually did do our best to run a fair debate.  However, our measure of real success should not be made on the yardstick of fairness and balance.  Rather, my hope is that this first sponsored, official (some even stretch to suggest "historic") debate will catalyze an intellectually honest ongoing conversation, and several more official debates as headway is made on this topic.  So may our success be measured in how many more such events we catalyze.  And that may be more important than we realize, because...

...this issue is not going away.

That much came clear during the Overseas Vote Summit this past week.  Whether it's the Department of Defense, which is (and should be) determined to provide a digital means to empower their service men and women to participate in the democracy they fight for, or whether it's forward-thinking States realizing the Internet revolution is encroaching on and threatening to re-invent another legacy space -- elections -- this topic of using the Internet in public elections will not disappear.

All who are against this (including the OSDV Foundation as to certain aspects) can wring our hands, gnash our teeth, and fret and fight.  Or we can advance the discussion and seek out ways to determine whether this digital means of casting ballots can ever be a reality, by pushing research, running experiments, and conducting pilots.  I think the latter approach is imperative, because it's doubtful we can fully stop this determination unless hundreds of thousands of dollars are poured into lobbying and hundreds of thousands of people are called to activism.  And I believe there is a better way to spend that precious money.  For one thing, even if all the activism in the world raised so much noise that Internet Voting became the 3rd rail of politics, it wouldn't stop certain groups from moving ahead (think: Defense Department).  If that's the case, then let's spend our resources on figuring out how to make it (use of the Internet in public elections) as safe as possible.

Unfortunately, one of the realities that emerged this past week is that the challenges are not primarily technical in nature.  To be sure, there are some tough technical problems, but they might seem a bit more surmountable in light of perhaps the real primary issue:  cultural philosophy.

I had the pleasure of speaking at length with most of the panelists after the Debate over the remainder of the Conference.  And here is what came of those discussions.  There seems to be some consensus around the fact that America is nowhere near where Europe (at least) is on 2-3 critical attitudes or process issues.  Plus, there is a unique fourth concern, although probably addressable.  And here they are:

  1. Voter Identification/Digital Authentication.  First, as was pointed out by our European panelists, until America and its citizens reach a point where they are willing to provide for a national identification means -- one that can be used digitally as an authentication device (e.g., see the bar code on almost every state's driver's license these days, or the mag-stripe on the back of your credit/debit cards, increasingly with a photo of the card holder) -- there can be no digital method that would enable a citizen to approach a semi-attended or unattended device.  We say "semi" because the Europeans suggest CCTV devices (closed circuit TV) to monitor remote kiosks, for example.  (Which is another point: the EU is far and away the most "monitored" continent on the planet.)  However, a national ID remains a political nightmare in America, strangely, even as an increasing number of people are beginning to believe one is needed.
  2. Inherent Trust in Government.  Second, Europeans tend to have more trust in their governments.  Americans inherently distrust their own government, as a matter of history.  And without that trust quotient in place, there are additional out-of-band checks and balances required to satisfy the public's suspicion of the integrity of elections administrators and other government officials.  (Never mind that in the trenches these are not just elections or government officials, but include citizen volunteers.)  Given the additional requirement of general government trust verification, more impediments to Internet-based ballot casting are present.
  3. Anonymity and Ballot Secrecy.  Third, and this is not a universal attitude in Europe, but it is a significant minority of opinion, Europeans culturally are less inclined to be as vigilant about the anonymity or privacy of their votes.  I heard mixed opinions on this third cultural challenge -- in some countries it is more important than others.  But there is more willingness to trade that element for the sake of ease of access, support for a more mobile society, and the conveniences afforded by digital means to serve ballots, provide for clear marking void of uncertainties that give rise to recount or rejection, and the reduction of paper handling, travel, or postage where mails are allowed.  In the U.S., of course, the anonymity of the voter and secrecy of the ballot (as to who cast it) is a hallmark imperative of our electoral process.  And here too, we're beginning to hear a change in tone amongst younger voters about the importance of that aspect.
  4. Constitutionally Mandated Orderly Transfer of Power.  The final aspect, which is not a cultural artifact but a structural aspect of our democracy cast into the Constitution, concerns the orderly transfer of power and the untenable risk of an election fault that halts the electoral process and jeopardizes that orderly transfer (for the office of U.S. President) by January 20 following a general national election.  The scenario is one where an Internet catastrophe of some (presumably nefarious) sort seriously disrupts an election in at least one jurisdiction.  As the Hon. Secretary Bowen (CA) asked in her closing address, "So, suppose we successfully do detect an attack at scale during an election being run across the Internet, then what?"  And indeed, this is an academically engaging question.  However, the answer seems to be (and this is why this fourth aspect is only a potential show stopper) that what happens next is very likely the same thing that would happen if a Katrina-class hurricane struck New Orleans on that particular first Tuesday in November (note: hurricane season runs into November down there), or a major earthquake struck Los Angeles County around 10:00AM that Tuesday morning.  Whatever plan is in place for a catastrophe, whether natural or man-made (and in the latter case, negligent or malicious), I presume (it is worth verifying) would equally apply in this scenario.  In other words, there must be some crisis plan (defined by law or regulation, or handed to the Judiciary) to address the fact that potentially tens of thousands of voters have been prevented from casting ballots -- regardless of the cause, but through no fault of the voters.  I have to believe we have contingencies to avoid such a constitutional crisis, regardless of the nature of the disruption on election day, and that they would cover an Internet-born crisis as well.

So, perhaps the 4th issue is addressable and can be set aside for the moment.  As to the remaining 2-3 cultural-class issues, we of the older generation -- some of us far closer to historical corruption or the witnessing of coercion in eras gone by -- are more likely to bristle at the thought that anyone dare suggest we evolve our electoral philosophies.  But the fact remains: a new generation is rising up.  And how many of this new era will one day run the state houses of this nation, with a completely different attitude about the priorities of elections run in a digital age?

Let's be clear, there are several very difficult technical challenges to making an Internet-based ballot casting system work, but they may be over-shadowed by even thornier non-technical problems.

As I reflect on last week's Internet Voting debate, it is increasingly clear that addressing the technical challenges may, after all, be simpler than addressing the socio-political challenges of voter authentication, secrecy of the ballot, and inherent trust in government officials in running elections.

And there's no app for that.