
A Hacked Case For Election Technology

A credible election technology company makes an incredible assertion, and the result is our CTO in hot pursuit of some intellectual honesty.  The good news: the conversation is growing on the emerging issue of America's crumbling election technology infrastructure.  The bad news: articles like the one reviewed by our CTO, particularly when published by a respectable online scientific journal, create a "reality distortion field" resulting in sound bites that can mislead policy makers, politicians, and less informed pundits.  The result: degradation of the signal-to-noise ratio and a hacked case for election technology.  Read on for a dose of intellectual honesty from our chief election technologist...


NCSL Convenes Policy & Election Technology Summit

The NCSL Conference on Policy and Elections Technology is in full swing in Santa Fe, New Mexico.  Our Chief Development Officer is set to participate in an interesting panel on the future of elections technology in a post-HAVA-funding world.  We have a position document, responding to several questions posed to us in advance of the conference, available for download...


The Moose Lurking in the Room

To heck with the elephant (regardless of who you think will control Congress after election day), the real beast in the room may be a moose -- Alaska style.  Our CTO notes an article from yesterday that points out how Alaska's close U.S. Senate race, combined with the state's allowing ballots to be digitally returned across the Internet, may pose the greatest risk of a derailed election we've seen yet.

But the real point John makes is that, sadly, Alaskan voters may not even be aware of the risks, or of who in this case is watching over their ballots -- at least those returned in the inherently insecure manner of the Internet, no matter how "secure" the "experts" claim the process to be.  If the ballot return system in Alaska were truly as secure as its vendor claims, then banks would be using the same methods, and this year's massive breaches of customer personal information at major brands might have been avoided.  Have a look and give us your take.


“Digital Voting”—Don’t believe everything you think

In a recent blog post we examined David Plouffe’s forward-looking Wall Street Journal op-ed [paywall] and rebalanced his vision with some practical reality.

Now, let’s turn to Plouffe’s notion of “digital voting.”  Honestly, that phrase is confusing and vague.  We should know: it catalyzed our name change last year from Open Source Digital Voting Foundation (OSDV) to Open Source Election Technology Foundation (OSET).


David Plouffe’s View of the Future of Voting — We Agree and Disagree

David Plouffe, President Obama’s top political and campaign strategist and the mastermind behind the winning 2008 and 2012 campaigns, wrote a forward-looking op-ed [paywall] in the Wall Street Journal recently about the politics of the future and how they might look.

He touched on how technology will continue to change the way campaigns are conducted – more use of mobile devices, even holograms, and more micro-targeting of individuals. But he also mentioned how people might cast their votes in the future, and that is what caught our eye here at the TrustTheVote Project.  There is a considerable chasm to cross between vision and reality.


Voting Heartburn over “Heartbleed”

Heartbleed is the latest high-profile consumer Internet security issue, coming only a few weeks after the “Goto Fail” incident. Both are recently discovered weaknesses in the way that browsers and Web sites interact. In both cases and others, I’ve seen several comments that connect these security issues with Internet voting. But because Heartbleed is pretty darn wicked, I can’t not share my thoughts on how it connects to the work we do in the TrustTheVote Project – despite the fact that i-voting is not part of it. (In fact, we have our hands full fixing the many technology gaps in the types of elections that we already have today and will continue to have for the foreseeable future.)

First off, my thanks to security colleague Matt Bishop, who offered an excellent rant (his term, not mine!) on Heartbleed, what we can learn from it, and the connection to open source. The net-net is familiar: computers, software, and networks are fundamentally fallible; there will always be bugs and vulnerabilities, and that’s about as non-negotiable as the law of gravity.

Here is my take on how that observation affects elections, and specifically the choice that many, many U.S. election officials have made (and which we support): that elections should be based on durable paper ballots that can be routinely audited as a cross-check on potential errors in automated ballot counting. It goes like this:

  • Dang it, too many paper ballots with too many contests, to count manually.
  • We’ll have to use computers to count the paper ballots.
  • Dang it, computers and software are inherently untrustworthy.
  • Soooo …. we'll use sound statistical auditing methods to manually check the paper ballots, in order to check the work of the machines and detect their malfunctions (a minimal sketch of the idea follows this list).
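
To make that last bullet concrete, here is a minimal sketch of the idea in Python. All the names are hypothetical, and a real risk-limiting audit chooses the sample size statistically and escalates on discrepancies; this just shows the shape of the cross-check:

```python
import random

def audit_sample(ballot_ids, machine_interpretation, hand_count, sample_size, seed):
    """Hand-count a random sample of paper ballots and compare each one to
    the machine's interpretation of that same ballot. The seed is fixed
    publicly (e.g., by dice roll) so the sample is reproducible."""
    rng = random.Random(seed)
    sample = rng.sample(sorted(ballot_ids), sample_size)
    return [b for b in sample if machine_interpretation[b] != hand_count(b)]

# An empty result supports the machine tally; any mismatch would trigger
# an expanded sample or a full hand recount.
```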

That sequence follows the lessons of the post-hanging-chads era:

  • Dang it, too many paper ballots with too many contests, to count manually.
  • We’ll have to use computers to directly record votes, and ditch the paper ballots.
  • Dang it, computers and software are inherently untrustworthy.
  • Oops, I guess we need the paper ballots after all.

I think that these sequences are very familiar to most readers here, but it's worth a reminder now and then from experts on the 3rd point – particularly when the perennial topic of i-voting comes up – because there, the sequence is so similar yet so different:

  • Dang it, voters too far away for us to get their paper ballots in time to count them.
  • We’ll have to use computers and networks to receive digital ballots.
  • Dang it, computers and software and networks are inherently untrustworthy.
  • Soooo …. Oops.

– EJS


Money Shot: What Does a $40M Bet on Scytl Mean?

…not much we think.

Yesterday’s news of Microsoft co-founder and billionaire Paul Allen’s $40M investment in the Spanish election technology company Scytl is validation that elections remain a backwater of innovation in the digital age.

But it is not validation that there is a viable commercial market for voting systems of the size that typically attracts venture capitalists; the market is dysfunctional and small, and governments continue to be without budget.

And the challenge of building a user-friendly, secure online voting system that simultaneously protects the anonymity of the ballot is an interesting problem, one that perhaps only an investor of Mr. Allen’s stature can tackle.

We think this illuminates a larger question:

To what extent should the core technology of the most vital aspect of our Democracy be proprietary and black box, rather than publicly owned and transparent?

To us, that is a threshold public policy question, commercial investment viability issues notwithstanding.

To be sure, it is encouraging to see Vulcan Capital and a visionary like Paul Allen invest in voting technology. The challenges facing a successful elections ecosystem are complex and evolving and we will need the collective genius of the tech industry’s brightest to deliver fundamental innovation.

We at the TrustTheVote Project believe voting is a vital component of our nation’s democracy infrastructure and that American voters expect and deserve a voting experience that’s verifiable, accurate, secure, and transparent.  Will Scytl be the way to deliver that?

The Main Thing

The one thing that stood out to us in the various articles on the investment was Scytl’s assertion of security, backed by international patents on cryptographic protocols.  We’ve been around the INFOSEC space for a long time and know a lot of really smart people in the crypto field.  So we’re curious to learn more about their IP innovations.  And yet that assertion is actually a red herring to us.

Here’s the main thing: transacting ballots over the public packet-switched network is not simply about security.  It’s also about privacy; that is, the secrecy of the ballot.  Here is an immutable maxim about the digital world of security and privacy: there is an inverse relationship, which holds that as security is increased, privacy must be decreased, and vice versa.  Just consider any airport security experience.  If you want maximum security, then you must surrender a bunch of privacy.  This is the main challenge of transacting ballots across the Internet, and why that transaction is so very different from banking online or looking at your medical record.

And then there is the entire issue of infrastructure.  We continue to harp on this, and still wait for a good answer.  If, by their own admissions, the Department of Defense, Google, Target, and dozens of others have challenges securing their own data centers, how exactly can we be certain that a vendor’s cloud-based service, or the in-house data center of a county or State, has any better chance of doing so? Security is an arms race.  Consider today’s news about Heartbleed alone.

Oh, and please, for the sake of credibility, can the marketing machinery stop using the phrase “military grade security”?  There is no such thing.  Nor does it have anything to do with increasing key sizes from the 128-bit standard to, say, 512 or 1024 bits: 128-bit keys are fine, and there is nothing military about them (other than that the military uses them).  Here is an interesting article from some years ago on the sufficiency of current crypto and the related marketing arms race.  Saying “military grade” is meaningless hype.  Besides, the security issues run far beyond the transit of data between machines.

In short, there is much the public should demand to understand from anyone’s security assertions, international patents notwithstanding.  And that goes for us too.

The Bottom Line

While we laud Mr. Allen’s investment in what surely is an interesting problem, no one should think for a moment that this signals some sort of commercial viability or tremendous growth market opportunity.  Nor should anyone assume that throwing money at a problem will necessarily fix it (or deliver us from the backwaters of Government elections I.T.).  Nor should we assume that this somehow validates Scytl’s “model” for “security.”

Perhaps more importantly, while we need lots of attention, research, development, and experimentation, the bottom line to us is whether the outcome should be a commercial, proprietary, black-box result or an open, transparent, publicly owned result… where the “result” as used here refers to the core technology of casting and counting ballots, and not the viable and necessary commercial business of delivering, deploying, and servicing that technology.


Comments Prepared for Tonight's Elections Technology Roundtable

This evening at 5:00pm, members of the TrustTheVote Project have been invited to attend an elections technology roundtable discussion in advance of a public hearing in Sacramento, CA, scheduled for tomorrow at 2:00pm PST, on new regulations governing Voting System Certification to be contained in Division 7 of Title 2 of the California Code of Regulations. Due to the level of activity, only our CTO, John Sebes, is able to participate.

We were asked if John could be prepared to make some brief remarks regarding our view of the impact of SB-360 and its potential to catalyze innovation in voting systems.  These types of events are always dynamic and fluid, and so we decided to publish our remarks below just in advance of this meeting.

Roundtable Meeting Remarks from the OSDV Foundation | TrustTheVote Project

We appreciate an opportunity to participate in this important discussion.  We want to take about 2 minutes to comment on 3 considerations from our point of view at the TrustTheVote Project.

1. Certification

For SB-360 to succeed, we believe any effort to create a high-integrity certification process requires re-thinking how certification has been done to this point.  Current federal certification, for example, takes a monolithic approach; that is, a voting system is certified based on a complete all-inclusive single closed system model.  This is a very 20th century approach that makes assumptions about software, hardware, and systems that are out of touch with today’s dynamic technology environment, where the lifetime of commodity hardware is months.

We are collaborating with NIST on a way to update this outdated model with a "component-ized" approach; that is, a unit-level testing method, such that if a component needs to be changed, the only re-certification required would be of that discrete element, and not the entire system.  There are enormous potential benefits, including lowering costs, speeding certification, and removing a bar to innovation.

We're glad to talk more about this proposed updated certification model, as it might inform any certification processes to be implemented in California.  Regardless, elections officials should consider that in order to reap the benefits of SB-360, the non-profit TrustTheVote Project believes a new certification process, component-ized as we describe it, is essential.

2. Standards

2nd, there is a prerequisite for component-level certification that until recently wasn't available: common open data format standards that enable components to communicate with one another; for example, a format for a ballot counter's output of vote tally data that also serves as input to a tabulator component.  Without common data formats, elections officials have to acquire a whole integrated product suite that communicates in a proprietary manner.  With common data formats, you can mix and match; and perhaps more importantly, incrementally replace units over time, rather than doing what we like to refer to as "forklift upgrades" or "fleet replacements."
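
To illustrate the point (a purely hypothetical sketch; the field names below are ours, not any actual standard's schema), a ballot counter could emit its tally in a common, documented format that a tabulator component from a different vendor consumes directly:

```python
import json

# Hypothetical common tally record for one precinct and contest.
tally = {
    "precinct": "324",
    "contest": "Governor",
    "counts": {"Candidate A": 512, "Candidate B": 489},
    "ballots_cast": 1021,
}

# The ballot counter writes the record in the common format...
with open("precinct324_governor.json", "w") as f:
    json.dump(tally, f, indent=2)

# ...and any conforming tabulator can read it back and aggregate it,
# with no proprietary product suite in between.
with open("precinct324_governor.json") as f:
    record = json.load(f)
print(record["contest"], sum(record["counts"].values()), "votes")
```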

The good news is the scope for ballot casting and counting is sufficiently focused to avoid distraction from the many other standards elements of the entire elections ecosystem.  And there is more goodness because standards bodies are working on this right now, with participation by several state and local election officials, as well as vendors present today, and non-profit projects like TrustTheVote.  They deserve congratulations for reaching this imperative state of data standards détente.  It's not finished, but the effort and momentum are there.

So, elections officials should bear in mind that benefits of SB-360 also rest on the existence of common open elections data standards.

3. Commercial Revitalization

Finally, this may be the opportunity to realize a vision we have that open data standards, a new certification process, and lowered bars to innovation through open sourcing, will reinvigorate a stagnant voting technology industry.  Because the passage of SB-360 can fortify these three developments, there can (and should) be renewed commercial enthusiasm for innovation.  Such should bring about new vendors, new solutions, and new empowerment of elections officials themselves to choose how they want to raise their voting systems to a higher grade of performance, reliability, fault tolerance, and integrity.

One compelling example is the potential for commodity commercial off-the-shelf hardware to fully meet the needs of voting and elections machinery.  To that point, let us offer an important clarification and dispel a misconception about "rolling your own."  This does not mean that elections officials are about to be left to self-vend; that is, to self-construct and support their own open, standard, commodity voting system components.  A few jurisdictions may consider it, but in the vast majority of cases, the Foundation forecasts that this will simply introduce more choice rather than forcing anyone to become a do-it-yourselfer.  Some may choose to contract with a systems integrator to deploy a new system integrating commodity hardware and open source software.  Others may choose vendors who offer out-of-the-box open source solutions in pre-packaged hardware.

Choice is good: it’s an awesome self-correcting market regulator and it ensures opportunity for innovation.  To the latter point, we believe initiatives underway like STAR-vote in Travis County, TX, and the TrustTheVote Project will catalyze that innovation in an open source manner, thereby lowering costs, improving transparency, and ultimately improving the quality of what we consider critical democracy infrastructure.

In short, we think SB-360 can help inject new vitality in voting systems technology (at least in the State of California), so long as we can realize the benefits of open standards and drive the modernization of certification.

 

EDITORIAL NOTES: There was chatter earlier this fall about the extent to which SB-360 allegedly makes unverified, non-certified voting systems a possibility in California.  We don't read SB-360 that way at all.  We encourage you to read the text of the legislation as passed into law for yourself, starting with this meeting notice digest.  In fact, to realize the kind of vision that leading jurisdictions imagine, we cannot, and should not, do away with certification, and we think charges that this is what will happen are misinformed.  We simply need to modernize how certification works to enable this kind of innovation.  We think our comments today bear that out.

Moreover, have a look at the Agenda for tomorrow's hearing on implementation of SB-360.  In sum and substance the agenda is to discuss:

  1. Establishing the specifications for voting machines, voting devices, vote tabulating devices, and any software used for each, including the programs and procedures for vote tabulating and testing. (The proposed regulations would implement, interpret and make specific Section 19205 of the California Elections Code.);
  2. Clarifying the requirements imposed by recently chaptered Senate Bill 360, Chapter 602, Statutes 2013, which amended California Elections Code Division 19 regarding the certification of voting systems; and
  3. Clarifying the newly defined voting system certification process, as prescribed in Senate Bill 360.

Finally, there has been an additional charge that SB-360 is intended to "empower" LA County, such that LA County (or someone on its behalf) would sell the voting systems it builds to other jurisdictions.  We think this allegation is also misinformed, for two reasons: [1] assuming LA County builds their system on open source, there is a question as to what specifically would or could be offered for sale; and [2] notwithstanding that open source can be offered for sale (which technically can be done... technically), it seems to us that if such a system is built with public dollars, then it is, in fact, publicly owned.  From what we understand, a government agency cannot sell assets developed with public dollars, but it can give them away.  And indeed, this is what we've witnessed over the years in other jurisdictions.

Onward.


Not Just Election Night: VoteStream

A rose by any other name would smell as sweet, but if you want people to understand what a software package does, it needs a good name. In our Election Night Reporting System project, we've learned that it's not just about election night, and it's not just about reporting either. Even before election night, a system can convey a great deal of information about an upcoming election and the places and people that will be voting in it. To take a simple example: we've learned that in some jurisdictions, a wealth of voter registration information is available and ready to be reported alongside election results data that will start streaming in on election night from precincts and counties all over.

It's not a "system" either. The technology that we've been building can be used to build a variety of useful systems. It's better perhaps to think of it as a platform for "Election Result Reporting" systems of various kinds. Perhaps the simplest and most useful system to build on this platform is one that election officials can load with data in a standard format, and which then publishes the aggregated data as an "election results and participation data feed". No web pages, no API, just a data feed, like the widely used (in election land) data feeds of the Voting Information Project and its data format.
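
For a flavor of what "just a data feed" means, here is a minimal sketch that builds one feed entry. The element names are placeholders of our own; the actual VIP format is richer and formally specified:

```python
import xml.etree.ElementTree as ET

# Build one illustrative results-feed entry (element names are invented).
feed = ET.Element("results_feed", updated="2013-11-05T21:15:00-06:00")
contest = ET.SubElement(feed, "contest", id="governor")
for name, votes in [("Candidate A", 512), ("Candidate B", 489)]:
    result = ET.SubElement(contest, "result")
    ET.SubElement(result, "candidate").text = name
    ET.SubElement(result, "votes").text = str(votes)
ET.SubElement(contest, "precincts_reporting").text = "12 of 228"

# Consumers poll this document; there are no web pages and no API.
print(ET.tostring(feed, encoding="unicode"))
```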

In fact, one of the recent lessons learned is that the VIP data standard is a really good candidate for an election data standard as well, including:

  • election definitions (it is that already),
  • election results that reference an election definition (needs a little work to get there), and
  • election participation data (a modest extension to election results).

As a result (no pun intended) we're starting work on defining requirements for how to use VIP format in our prototype of the "Election Results Reporting Platform" (ERRP).

But the prototype needs to be a lot more than the ERRP software packaged into a data feed. It needs to also provide a web services API to the data, and it needs to have a web user interface for ordinary people to use. So we've decided to give the prototype a better name, which for now is "VoteStream".

Our VoteStream prototype shows how ERRP technology can be packaged to create a system that's operated by local election officials (LEOs) to publish election results -- including but not limited to publishing unofficial results data on election night, as the precincts report in. Then, later, the LEOs can expand the data beyond vote counts that say who won or lost. That timely access on election night is important, but just as important is the additional information that can be added during the work of putting together the total story on election results -- and even more data that can be added after that "canvass" process is complete.

That's VoteStream. Some other, simpler ERRP-based system might be different: perhaps VoteFeed, operated by a state elections organization to collate LEOs' data and publish to data hounds, but not to the general public and their browsers. Who knows? We don't, not yet anyhow. We're building the platform (ERRP), and building a prototype (VoteStream) of an LEO-oriented system on the platform.

The obvious next question is: what is all that additional data beyond the winner/loser numbers on election night? We're still learning the answers to that question, and will share more as we go along.

-- EJS

 


Election Results Reporting - Assumptions About Software (concluded)

Today, I'll be concluding my description of one area of assumptions in our Election Night Reporting System project -- our assumptions about software. In my last post, I said that our assumptions about software were based on two things: our assumptions about election results data (which I described previously), and the results of the previous, design-centric phase of our ENRS work. Those results consist of two seemingly disparate parts:

  1. the UX design itself, which enables people to ask ENRS questions, and
  2. a web service interface definition, which enables software to ask ENRS questions.

In case (1), the answers are web pages delivered by a web app. In case (2), the answers are data delivered via an application programming interface (API).

Exhibit A is our ENRS design website http://design.enrs.trustthevote.org which shows a preliminary UX design for a map-based visualization and navigation of the election results data for the November 2010 election in Travis County, Texas. The basic idea was to present a modest but useful variety of ways to slice and dice the data that would be meaningful to ordinary voters and observers of elections. The options include slicing the data at the county level, at the individual precinct level, or in between, and filtering by various kinds of election results, contests, or referenda. Though preliminary, the UX design was well received, and it's the basis for current work on a more complete UX that also provides features for power users (data-heads) without impacting the view for ordinary observers.

Exhibit B is the application programming interface (API), or for now just one example of it:

http://design.enrs.trustthevote.org/resources/precincts/precinct324/contests/governor

That does not look like a very exciting web page (click it now if you don't believe me!), and a full answer of "what's an API" can wait for another day.

[Screenshot: ENRS-API-example]

But the point here is that the URL is a way for software to request a very specific slice through a large set of data, and get it in a software-centric, digestible way. The URL (which you can see above in the address bar) is the question, and the answer is what you see above as the page view. Now, imagine something like your favorite NBA or NFL scoreboard app for your phone, periodically getting updates on how your favorite candidate is doing and alerting you much the way you get alerts about your favorite sports team. That app asks questions of ENRS, and gets answers, in exactly the way you see above, but of course it is all "under the hood" of the app's user interface.
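
To make that scoreboard-app idea concrete, here is a hedged sketch of a client polling the interface. The URL pattern is the one shown above; the JSON field names are assumptions for illustration, since the interface definition was preliminary:

```python
import json
import time
from urllib.request import urlopen

URL = "http://design.enrs.trustthevote.org/resources/precincts/precinct324/contests/governor"

def watch(url, interval=60):
    """Poll one slice of the results data and report any change,
    the way a sports-score app polls for game updates."""
    last = None
    while True:
        with urlopen(url) as resp:
            data = json.load(resp)
        if data != last:  # something changed since the last poll
            for row in data.get("results", []):
                print(row.get("candidate"), row.get("votes"))
            last = data
        time.sleep(interval)
```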

So, finally, we can re-state the software assumption of our ENRS project:

  • if one can get sufficiently rich election data, unlocked from the source, in a standard format,
  • then one can feasibly develop a lightweight modern cloud-oriented web app, including a web service, that enables election officials to both:
    • help ordinary people understand complex election results data, and
    • help independent software navigate that data, and present it to the public in many ways, far beyond the responsibilities of election officials.

We're trying to prove that assumption, by developing the software -- in our usual open source methodology of course -- in a way that (we hope) provides a model for any tech organization to similarly leverage the same data formats and APIs.

-- EJS


Election Results Reporting - Assumptions About Software

Today I'm continuing with the second of a 3-part series about what we at the TrustTheVote Project are hoping to prove in our Election Night Reporting System project.  As I wrote earlier, we have assumptions in three areas, one of which is software. I'll try to put into a nutshell a question that we're working on an answer to:

If you were able to get the raw election results data in a wonderful format, what types of useful apps and services could you develop?

OK, that was not exactly the shortest question, and in order to understand what "wonderful format" means, you'd have to read my previous post on Assumptions About Data. But instead, maybe you'd like to take a minute to look at some of the work from the previous phase of ENRS, where we focused on two seemingly unrelated aspects of ENRS technology:

  1. The user experience (UX) of a Web application that local election officials could provide to help ordinary folks visualize and navigate complex election results information.
  2. A web services API that would enable other folks' systems (not election officials') to receive and use the data in a manner that's sufficiently flexible for a variety of other services, ranging from professional data mining to handy mobile apps.

They're related because the end results embodied a set of assumptions about available data.

Now we're seeing that this type of data is available, and we're trying to prove with software prototyping that many people (not just elections organizations, and not just the TrustTheVote Project) could do cool things with that data.

There's a bit more to say -- or rather, to show and tell -- that should fit in one post, so I'll conclude next time.

-- EJS

PS: Oh, there is one more small thing: we've had a bit of an "ah-ha" here in the Core Team, prodded by our peeps on the Project Outreach team.  This data, and the apps and services that can leverage it for all kinds of purposes, has uses far beyond the night of an election.  We mentioned that once before, but the ah-ha is that what we're working on is not just about election night results... it's about all kinds of election results reporting, anytime, anywhere.  And that means ENRS is really not that good a code name or acronym.  Watch as "ENRS" morphs into "E2RP" for our internal project name -- Election Results Reporting Platform.


Election Results Reporting - Assumptions about Data

In a previous post I said that our ENRS project is basically an effort to investigate a set of assumptions about how the reporting of election results can be transformed with innovations right at the source -- in the hands of the local election officials who manage the elections that create the data. One of those assumptions is that we -- and I am talking about election technologists in a broad community, not only the TrustTheVote Project -- can make election data standards that are important in five ways:

  1. Flexible to encompass data coming from a variety of elections organizations nationwide.
  2. Structured to accommodate the raw source data from a variety of legacy and/or proprietary systems, feasibly translated or converted into a standard, common data format.
  3. Able to simply express the most basic results data: how many votes each candidate received.
  4. Able to express more than just winners and losers data, but nearly all of the relevant information that election officials currently have but don't widely publish (i.e., data on participation and performance).
  5. Flexible to express detailed breakdowns of raw data, into precinct-level data views, including all the relevant information beyond winners and losers.

Hmm. It took a bunch of words to spell that out, and for everyone but election geeks it may look daunting.  To simplify, here are three important things we're doing to prove out those assumptions to some extent.

  1. We're collecting real election results data from a single election (November 2012) from a number of different jurisdictions across the country, together with supporting information about election jurisdictions' structure, geospatial data, registration, participation, and more.
  2. We're learning about the underlying structure of this data in its native form, by collaborating with the local elections organizations that know it best.
  3. We're normalizing the data, rendering it in a standard data format, and using software to crunch that data, in order to present it in a digestible way to regular folks who aren't "data geeks" (a small sketch of this normalization step follows the list).
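
To give a flavor of that third activity, here is a minimal sketch of one normalization step. All of the column and field names are invented for illustration; real source layouts vary far more than this:

```python
import csv

def normalize_row(row):
    """Map one county's native CSV columns onto a common record shape."""
    return {
        "precinct_id": row["PRECINCT"].strip(),
        "contest": row["RACE"].strip(),
        "candidate": row["CHOICE"].strip(),
        "votes": int(row["TOTAL_VOTES"]),
    }

def normalize_file(path):
    """Read one county's raw results CSV into the common record shape."""
    with open(path, newline="") as f:
        return [normalize_row(r) for r in csv.DictReader(f)]

# Each jurisdiction gets its own small normalize_row mapping; everything
# downstream (aggregation, analysis, display) sees only the common shape.
```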

And all of that comprises one set of assumptions we're working on; that is, we're assuming all of these activities are feasible and can bear fruit in an exploratory project.  Steady as she goes; so far, so good.

-- EJS


Election Results Reload: the Time is Right

In my last post, I said that the time is right for breaking the logjam in election results reporting, enabling a big reload on technology for reporting and a big increase in public transparency. Now let me explain why, starting with the biggest of several reasons: elections data standards are needed to define common data formats into which a variety of results data can be converted.

Those standards are emerging now, and previously the lack of them was a real problem.

  • We can't reasonably expect a local elections office to make the additional effort to publish the data, or otherwise serve the public with election results services, if the result will be just one voice in a Babel of dozens of different data languages and dialects.
  • We can't reasonably expect a 3rd party organization to make use of the data from many sources, unless it's available in a single standard format, or they have the wherewithal to do huge amounts of work on data conversion, repeatedly.

The good news is that election data standards have come a long way in the last couple of years, due to:

  • Significant support from the U.S. Government's standards body -- the National Institute of Standards and Technology (NIST);
  • Sustained effort from the volunteers working in standards committees in the international standards body -- the IEEE 1622 Working Group; and
  • Practical experience with evolving de facto standards, particularly with the data formats and services of the Pew Voting Information Project (VIP), and the several elections organizations that participate in providing VIP data.

There are other reasons why the time is right, but they are more widely understood:

  • We now have technologies that perennially understaffed and underfunded elections organizations can feasibly adopt quickly and cheaply, including powerful web application frameworks supported by cloud hosting operations, within a growing ecosystem of web services that enable many organizations to access a variety of data and apps.
  • "Open government," "open data," and even "big data" are buzz phrases now commonly understood, which describe a powerful and maturing set of technologies and IT practices.  This new language of government IT innovation facilitates actionable conversations about the opportunity to provide the public with far more robust information on elections and their participation and performance.

It's a "promised land" of government IT and the so-called Gov 2.0 movement (arguably more like Gov 3.0, when you consider that 2.0 was all about collaboration while 3.0 is becoming all about the "utility web" -- real apps available on demand -- a direction some of these services will inevitably take).  However, for election technology in the near term, we first have to cross the river by learning how to "get the data out" (and that is more like Gov 2.0). More next time on our assumptions about how that river can be crossed, and our experiences to date in making that crossing.

-- EJS


Towards Standardized Election Results Data Reporting

Now that we are a ways into our "Election Night Reporting System" project, we want to start sharing some of what we are learning.  We had talked about a dedicated wiki or some such, but our time was better spent digging into the assignment graciously supported by the Knight Foundation Prototype Fund.  Perhaps the best place to start is a summary of what we've been saying within the ENRS team about what we're trying to accomplish. First, we're toying with this silly internal project code name, "ENRS," and we don't expect it to hang around forever. Our biggest gripe is that what we're trying to do extends way beyond the night of elections, but more about that later.

Our ENRS project is based on a few assumptions, or perhaps one could say some hypotheses that we hope to prove. "Prove" is probably a strong word. It might be better to say that we expect that our assumptions will be valid, but with practical limitations that we'll discover.

The assumptions are fundamentally about three related topics:

  1. The nature and detail of election results data;
  2. The types of software and services that one could build to leverage that data for public transparency; and
  3. Perhaps most critically, the ability for data and software to interact in a standard way that could be adopted broadly.

As we go along in the project, we hope to say more about the assumptions in each of these areas.

But it is the goal of feasible broad adoption of standards that is really the most important part. There's a huge amount of latent value (in terms of transparency and accountability) to be had from aggregating and analyzing large amounts of election results data. But most of that data is effectively locked up, at present, in thousands of little lockboxes of proprietary and/or legacy data formats.

It's not as though most local election officials -- the folks who are the source of election results data, as they conduct elections and the process of tallying ballots -- want to keep the data locked up, or to impede others' activities in aggregating results data across counties and states and analyzing it. Rather, most local election officials just don't have the means to "get the data out" in a way that supports such activities.

We believe that the time is right to create the technology to do just that, and enable election officials to use the technology quickly and easily. And this prototype phase of ENRS is the beginning.

Lastly, we have many people to thank, starting with Chris Barr and the Knight Foundation for its grant to support this prototype project. Further, the current work is based on a previous design phase. Our thanks to our interactive design team led by DDO, and the Travis County, TX Elections Team who provided valuable input and feedback during that earlier phase of work, without which the current project wouldn't be possible.

-- EJS


A New Opening for "Open"

I'm still feeling a bit stunned by recent events: the IRS has finally put us at the starting point that we had reasonably hoped to be at about 5 years ago. Since then, election tech dysfunction hasn't gone away; U.S. election officials have less funding than ever to run elections; there are more requirements than ever for the use of technology in election-land; there are more public expectations than ever of the operational transparency of "open government," certainly including elections; and the for-profit tech sector does not offer election officials what they need. So there's more to do than we ever expected, and less time to do it in. For today, I want to re-state a focus on "open data" as the part of "open source" that's used by "open gov" to provide "big data" for public transparency. Actually, I don't have anything new to say, having re-read previous posts.

It's still the same. Information wants to be free, and in election land, there is lots of it that we need to see, in order to "trust but verify" that our elections are all that we hope them to be. I'm very happy that we now have a larger scope to work in, to deliver the open tech that's needed.

-- EJS


For (Digital) Poll Books -- Custody Matters!

Today, I am presenting at the annual Elections Verification Conference in Atlanta, GA, and my panel is discussing the good, the bad, and the ugly about the digital poll book (often referred to as the "e-pollbook").  For our casual readers, the digital poll book or "DPB" is, as you might assume, a digital relative of the paper poll book: that pile of printouts containing the names of registered voters for the precinct in which they are registered to vote. For our domain-savvy readers, the issues to be discussed today concern the application, sometimes overloaded application, of DPBs and their related issues of reliability, security, and verifiability.  So as I head into this, I wanted to echo some thoughts here about DPBs as we are addressing them at the TrustTheVote Project.

[Image: OSDV_pollbook_100709-1]

We've been hearing much lately about State and local election officials' appetite (or infatuation) for digital poll books.  We've been discussing various models and requirements (or objectives) while developing the core of the TrustTheVote Digital Poll Book.  But in several of these discussions, we've noticed that only two out of three basic purposes of poll books of any type (paper or digital, online or offline) seem to be well understood.  And we think the gap shows why physical custody is so important, especially so for digital poll books.

The first two obvious purposes of a poll book are [1] to check in a voter as a prerequisite to obtaining a ballot, and [2] to prevent a voter from having a second go at checking in and obtaining a ballot.  That's fine for meeting the "Eligibility" and "Non-duplication" requirements for in-person voting.

But then there is the increasingly popular absentee voting, where the role of poll books seems less well understood.  In our humble opinion, those in-person polling-place poll books are also critical for absentee and provisional voting.  Bear in mind, those "delayed-cast" ballots can't be evaluated until after the post-election poll-book-intake process is complete.

To explain why, let's consider one fairly typical approach to absentee evaluation.  The poll book intake process results in an update to the voter record of every voter who voted in person.  Then, the voter record system is used as one part of absentee and provisional ballot processing.  Before each ballot may be separated from its affidavit, the reviewer must check the voter identity on the affidavit, and then find the corresponding voter record.  If the voter record indicates that the voter cast their ballot in person, then the absentee or provisional ballot must not be counted.
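
In code-sketch form, that check looks roughly like the following. This is a simplification with hypothetical names, offered to show the logic rather than any jurisdiction's actual system:

```python
from dataclasses import dataclass

@dataclass
class VoterRecord:
    voter_id: str
    voted_in_person: bool = False   # set during poll book intake
    ballot_accepted: bool = False   # set when a delayed-cast ballot counts

def evaluate_delayed_ballot(voter_id, voter_records):
    """Decide whether an absentee or provisional ballot may be separated
    from its affidavit and counted, after poll book intake is complete."""
    record = voter_records.get(voter_id)
    if record is None:
        return "reject: no matching voter record"
    if record.voted_in_person or record.ballot_accepted:
        return "reject: a ballot was already counted for this voter"
    record.ballot_accepted = True  # blocks any later duplicate
    return "accept: separate from affidavit and count"
```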

So far, that's a story about poll books that should be fairly well understood, but there is an interesting twist when it comes to digital poll books (DPBs).

The general principle for DPB operation is that it should follow the process used with paper poll books (though other useful features may be added).  With paper poll books, both the medium (paper) and the message (who voted) are inseparable, and remain in the custody of election staff (LEOs and volunteers) throughout the entire life cycle of the poll book.

With the DPB, however, things are trickier. The medium (e.g., a tablet computer) and the message (the data that's managed by the tablet, and that represents who voted) can be separated, although they should not be.

Why not? Well, we can hope that the medium remains in the appropriate physical custody, just as paper poll books do. But if the message (the data) leaves the tablet, and/or becomes accessible to others, then we have potential problems with the accuracy of the message.  It's essential that the DPB data remain under the control of election staff, and that the data gathered during the DPB intake process be exactly the data that election staff recorded in the polling place.  Otherwise, double voting may be possible, or some valid absentee or provisional ballots may be erroneously rejected.  Similarly, the poll book data used in the polling place must be exactly as previously prepared, or legitimate voters might be barred.

That's why digital poll books must be carefully designed for use by election staff in a way that doesn't endanger the integrity of the data.  And this is an example of the devil in the details that's so common for innovative election technology.

Those devilish details derail some nifty ideas, like one we heard of recently: a simple and inexpensive iPad app that provides the digital poll book UI based on poll book data downloaded (via 4G wireless network) from “cloud storage” where an election official previously put it in a simple CSV file; and where the end-of-day poll book data was put back into the cloud storage for later download by election officials.

Marvelous simplicity, right?  Oh heck, I'm sure some grant-funded project could build that right away.  But it turns out that this is wholly unacceptable in terms of the chain of custody of data that accurate vote counts depend on.  You wouldn't put the actual vote data in the cloud that way, and poll book data is no less critical to election integrity.

A Side Note:  This is also an example of the challenge we often face from well-intentioned innovators of the digital democracy movement who insist that we're making a mountain out of a molehill in our efforts.  They argue that this stuff is way easier than we claim and ripe for all of the "kewl" digital innovations at our fingertips today.  Sure, there are plenty of very well designed innovations and combinations of ubiquitous technology that have driven the social web and now the emerging utility web.  And we're leveraging and designing around elements that make sense here -- for instance, the powerful new touch interfaces driving today's mobile digital devices.  But there is far more to it than a sexy interface with a 4G connection.  Oops, I digress to a tangential gripe.

This nifty example of well-intentioned innovation illustrates why the majority of technology work in a digital poll book solution is actually in [1] the data integration (to and from the voter record system); [2] the data management (to and from each individual digital poll book); and [3] the data integrity (maintaining the same control present in paper poll books).
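
On point [3], one standard building block (offered as a sketch of the general technique, not a description of the TTV design) is to authenticate the poll book data at every hand-off, so that intake can verify it is byte-for-byte what the polling place produced:

```python
import hashlib
import hmac

def seal(pollbook_bytes, key):
    """Compute an authentication tag over the poll book data before it
    leaves official custody; the key stays with election officials."""
    return hmac.new(key, pollbook_bytes, hashlib.sha256).hexdigest()

def verify_at_intake(pollbook_bytes, key, tag):
    """Accept the data at intake only if it matches what was sealed."""
    return hmac.compare_digest(seal(pollbook_bytes, key), tag)

# The tag travels with the data; any modification in transit (or in
# someone else's "cloud storage") makes verification fail.
key = b"per-election secret held by election officials"
data = b"...serialized poll book records..."
tag = seal(data, key)
assert verify_at_intake(data, key, tag)
```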

Without a doubt, the voter's user experience, as well as the poll worker's or election official's user experience, is very important (note pic above), and we're gathering plenty of requirements and feedback based on our current work.  But before the TTV Digital Poll Book is fully baked, we need to do equal justice to those devilish details, in ways that meet the varying requirements of various States and localities.

Thoughts? Your ball (er, ballot?) GAM | out


The 2013 Annual Elections Verification Conference Opens Tonight

If it's Wednesday, 13 March, it must be Atlanta.  And that means the opening evening reception for the Elections Verification Network's 2013 Annual Conference.  We're high on this gathering of elections officials, experts, academics, and advocates because it represents a unique interdisciplinary collaboration of technologists, policy wonks, legal experts, and even politicians, all with a common goal: trustworthy elections. The OSDV Foundation is proud to be a major sponsor of this event, because it is precisely in forums like this that discussions about innovation in HOW America votes take place; they are a rich opportunity for collaboration, debate, education, and sharing.  We always learn much, and we share our own research and development efforts as directed by our stakeholders -- those State and local elections officials who are the beneficiaries of our charitable work to bring increased accuracy, transparency, verification, and security (the four pillars of trustworthiness) to elections technology through education, research, and development.

Below are my opening remarks, to be delivered this evening or tomorrow morning at the pleasure of the Planning Committee, depending on how they slot the major sponsors' opportunities to address the attendees.  There are 3 points we want to get across in the opening remarks: [1] why we support the EVN; [2] why there is growing energy around election verification efforts; and [3] how the EVN can drive that movement forward...

Greetings Attendees!

On behalf of the EVN Planning Committee and the Open Source Digital Voting Foundation, I want to welcome everyone to the 2013 Elections Verification Network Annual Conference.  Because we are a major conference supporter, the Planning Committee asked if I, on behalf of the OSDV Foundation, would take 3 minutes to share 3 things with you:

  • 1st, why the Foundation decided to help underwrite this Conference;
  • 2nd, why we believe there is a growing energy and excitement around election verification; and
  • 3rd, how the EVN can bring significant value to this growing movement

So, we decided to make a major commitment to underwriting and participating in this conference for two reasons:

  1. We want to strengthen the work of this diverse group of stakeholders and do all that we can to fortify this gathering to make it the premier event of its kind; and
  2. The work of the EVN is vital to our own mission because there are 4 pillars to trustworthy elections: Accuracy, Transparency, Verification, and Security, and the goals and objectives of these four elements require enormous input from all stakeholders.  The time to raise awareness, increase visibility, and catalyze participation is now, more than ever.  Which leads to my point about the movement.

We believe the new energy and excitement being felt around election verification is due primarily to 4 developments, which, when viewed in the aggregate, illustrate an emerging movement.  Let's consider them quickly:

  1. First, we're witnessing an increasing number of elections officials considering "forklift upgrades" of their elections systems, which are driving public-government partnerships to explore and ideate on real innovation – the Travis County STAR project and LA County's VSAP come to mind as two showcase examples, which are, in turn, catalyzing downstream activities in smaller jurisdictions;
  2. The FOCE conference in CA, backed by the James Irvine Foundation was a public coming out of sorts to convene technologists, policy experts, and advocates in a collaborative fashion;
  3. The recent NIST Conferences have also raised the profile as a convener of all stakeholders in an interdisciplinary fashion; and finally,
  4. The President’s recent SOTU speech and the resulting Bauer-Ginsberg Commission arguably will provide the highest level of visibility to date on the topic of improving access to voting.  And this plays into EVN’s goals and objectives for elections verification.  You see, while on its face the visible driver is fair access to the ballot, the underlying aspect soon to become visible is the reliability, security, and verifiability of the processes that make fair access possible.  And that leads to my final point this morning:

The EVN can bring significant value to this increased energy, excitement, and resulting movement if we can catalyze a cross-pollination of ideas and rapidly increase awareness across the country.  As it stands, we spend lots of time talking amongst ourselves.  It's time to spread the word.  This is critical because, while elections are highly decentralized, there are common principles that must be woven into the fabric of every process in every jurisdiction.  That said, we think spreading the word requires 3 objectives:

  1. Maintaining intellectual honesty when discussing the complicated cocktail of technology, policy, and politics;
  2. Sustaining a balanced approach of guarded optimism with an embracing of the potential for innovation; and
  3. Encouraging a breadth of problem awareness, possible solutions, and pragmatism in their application, because one size will never fit all.

So, welcome again, and let's make the 2013 EVN Conference a change agent for raising awareness, increasing knowledge, and catalyzing a nationwide movement to adopt the agenda of elections verification.

Thanks again, and best wishes for a productive couple of days.
