Elections data standards are essential to delivering real innovation. The annual Election Data Standards meeting opened today in Los Angeles, CA. We thought we'd give you an overview of just what in the heck this is about and why it's essential to creating a voting experience that's easy, convenient, and dare we say delightful. Dry? Kinda. But it's a peek at the real in-the-trenches work we're doing. Yep.
The second of two blog posts exploring how the TrustTheVote Project fits in the "civic tech" landscape.
So where does the TrustTheVote Project fit in the broader “civic tech” movement that so many people in the technology world write and talk about? This is the first of two posts on this thought.
Ms. Voting Matters would really like to wave her magic wand and allow everyone on the planet to cast their votes, securely, with their smartphones, tablets, or laptops. Really and truly, I would do it if I could. But I can’t. The Internet of Voting is just not safe and secure enough now, no matter how much we all would wish it so. Let me share why.
BusyBooth, an app being developed by the TrustTheVote Project, is the public-service, polling-place app voters have been waiting for.
Many of you are learning the news, and it’s true: our Foundation’s name is changing, but the mission remains the same. Here’s the story.
I’d like to officially introduce you all to our new name: the Open Source Election Technology Foundation, or as we’re referring to it, the “OSET Foundation” (“Oh-Set”).
I can tell you we’ve selected WordPress as our platform for all of our web sites going forward, thanks to the generous support of Matt Mullenweg, who has backed the Foundation before and is stepping up again, this time with WordPress development resources to help us publish a world-class set of sites and resources for our stakeholders (elections officials), supporters, and you. We deeply appreciate Matt’s support. But I digress. Let’s get back to the naming thing.
What’s in a Name?
When we got our start back in late 2006 we chose a name, somewhat intentionally provocative, to reflect what we then believed our mission should be: addressing the pressing need for innovation in the machinery used to administer an election. To us, and many we spoke with in that first year, “digital voting” meant the use of computers in the act(s) of voting. The cries to rethink DREs (“direct recording electronic” voting machines) were reaching a crescendo, and we were tired of writing about their woes and decided we should form a team to rethink the machinery… but in a way that would bring more transparency at least, and more accuracy, verification, and security in the process. So…
“Open Source,” from our experiences in Silicon Valley (notably the Mozilla Project, as some of us were by then Netscape alumni), was potentially the “jam cracker” to inject innovation into a stagnant industry where there was no business incentive to perform the R&D necessary to address the mandates of verification, accuracy, security, and transparency. Thus we branded ourselves the “Open Source Digital Voting” or “OSDV” Foundation.
Fast forward to 2010 when, in the midst of our battle to earn our tax exempt status, we learned from our PR team that consumer research had revealed a startling fact. In those first four years, while we were learning the ins and outs of elections administration and related processes, policies, politics, and people, the iPod and iPhone had reshaped popular perception, and “digital” now meant “Internet” to many consumers.
Of course, that resulted in a terrible misconception about what we’re doing, because our work has nothing to do with Internet voting: a concept that, given today’s Internet, is simply not viable by our measure, in terms of simultaneously assuring the privacy and security of ballot data.
More importantly, our work had progressed to the point where we realized the opportunity to develop an entire elections administration framework, and that to be successful, our cause needed to address the entire voting ecosystem.
So, it became clear that “OSDV” as a name had become obsolete and a new name was required. That name, a phrase that far more accurately explains what our non-profit mission is about, is the Open Source Election Technology, or OSET, Foundation.
Importantly, our flagship effort, the TrustTheVote Project, remains the main vehicle of our mission to bring publicly owned innovation to our Nation’s critical democracy infrastructure. We have refreshed the TrustTheVote Project brand, as you can see to the left here, and as can also be seen by visiting either of our Twitter presences, @OSET or @TrustTheVote. However, nothing else about the Project has changed or will change, save a new web site on the way this summer.
In short, we’re pleased to introduce the OSET Foundation with its on-going mission via the TrustTheVote Project to “improve confidence in elections and their outcomes.”
(My thanks to security colleague Matt Bishop, who offered this excellent rant (his term, not mine!) on Heartbleed, what we can learn from it, and the connection to open source. You can read our riff on it here.)
“First, the Heartbleed vulnerability isn’t a virus; you can’t be infected by it. It’s a programming error in one particular part of OpenSSL that was introduced when new functionality was added in late 2011. If the servers you connect to do not use OpenSSL, you’re safe from this. But many very widely used servers do use it, hence the concern.
The point is that it’s a good example of the subtlety of problems that can be introduced through poor programming practices. The specific problem was an assumption that an incoming packet’s length, as given in the packet, is correct. The attack basically puts a bogus value in the length field, which enables the attacker to capture a chunk of memory that may contain sensitive data like user names and passwords — in the clear. The value in the length field is not something most programmers would question or try to validate.
We’ve seen similar vulnerabilities before in software designed to enhance or check security. The one that comes to mind immediately was in a widely used encryption library that had a buffer overflow, allowing anyone who used a server (or privileged program) that relied on the library to escalate privileges. The reference for the curious is:
This is why people like me are so concerned about complex code, *including* the underlying operating systems and drivers that support the election software. Note I didn’t say secret. Secret code to my mind is by definition suspect, especially in an environment in which transparency is a key requirement (for example, elections). But even open source code that is complex is suspect, because of the possibility of subtle errors. Or, as a friend of mine put it in a talk he gave in 1989, “[Company] claims it has developed a secure system. It’s 1.5 million lines of code. 1.5 million! Want to bet I can’t find a vulnerability in 1.5 million lines of code?” And systems were much smaller then... if I remember correctly, Microsoft Windows 2000 had roughly 33.5 million lines of code in its code base. No idea how much code the various versions of Windows have now.
And none of this covers the process (procedures) surrounding the use of these systems, which also need to be checked as a whole.”
Rantings from a security person
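Matt's point about the unvalidated length field is easy to see in miniature. Here's a toy sketch in Python (the real bug was in OpenSSL's C heartbeat code, not Python; the packet layout and the "memory" below are invented purely for illustration):

```python
import struct

# Simulated server memory: the heartbeat payload sits right next to
# other sensitive data, as it can on a real server's heap.
# (Invented for illustration; this is not how OpenSSL lays out memory.)
MEMORY = b"HELLO" + b" user=alice password=hunter2 session=f00d"

def heartbeat_naive(request: bytes) -> bytes:
    """Echo back as many bytes as the packet *claims* it contains."""
    (claimed_len,) = struct.unpack(">H", request[:2])
    # BUG: trusts the attacker-supplied length field, so it reads past
    # the real 5-byte payload and leaks adjacent "memory".
    return MEMORY[:claimed_len]

def heartbeat_fixed(request: bytes) -> bytes:
    """Validate the length field against the actual payload size."""
    (claimed_len,) = struct.unpack(">H", request[:2])
    payload = request[2:]
    if claimed_len > len(payload):
        raise ValueError("length field exceeds actual payload")
    return payload[:claimed_len]

# The attack: a 5-byte payload that claims to be 40 bytes long.
evil = struct.pack(">H", 40) + b"HELLO"
print(heartbeat_naive(evil))  # leaks b'HELLO user=alice password=...'
print(heartbeat_fixed(evil))  # raises ValueError
```

The fix, as in the real patch, is simply to refuse to answer with more bytes than the payload actually contains.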
The TrustTheVote Project Core Team has been hard at work on the Alpha version of VoteStream, our election results reporting technology. They recently wrapped up a prototype phase funded by the Knight Foundation, and then forged ahead a bit, to incorporate data from additional counties, provided by participating state or local election officials after the official wrap-up.
Along the way, there have been a series of postings here that together tell a story about the VoteStream prototype project. They start with a basic description of the project in Towards Standardized Election Results Data Reporting and Election Results Reload: the Time is Right. Then there was a series of posts about the project’s assumptions about data, about software (part one and part two), and about standards and converters (part one and part two).
Of course, the information wouldn’t be complete without a description of the open-source software prototype itself, provided in Not Just Election Night: VoteStream.
Actually, the project was as much about data, standards, and tools as software. On the data front, there is a general introduction to a major part of the project’s work in “data wrangling” in VoteStream: Data-Wrangling of Election Results Data. After that were more posts on data wrangling, quite deep in the data-head shed — but still important, because each one is about the work required to take real election data and real election results data from disparate counties across the country and fit them into a common data format and common online user experience. The deep data-heads can find quite a bit of detail in three postings about data wrangling, in Ramsey County MN, in Travis County TX, and in Los Angeles County CA.
Today, there is a VoteStream project web site with VoteStream itself and the latest set of multi-county election results, but also with some additional explanatory material, including the election results data for each of these counties. Of course, you can get that from the VoteStream API or data feed, but there may be some interest in the actual source data. For more on those developments, stay tuned!
This week the PCEA finally released its long-awaited report to the President. It's loaded with good recommendations. Over the next several days and posts we'll give you our take on some of them. For the moment, we want to call your attention to a couple of underpinning elements now that it's done.
The Resource Behind the Resources
Early in the formation of what initially was referred to as the "Bauer-Ginsberg Commission" we were asked to visit the co-chairs in Washington D.C. to chat about technology experts and resources. We have a Board member who knows them both and when asked we were honored to respond.
Early on we advised the Co-Chairs that their research would be incomplete without speaking with several election technology experts, and of course they agreed. The question was how to create a means to do so without bogging down progress under the layers of necessary administrative regulations.
I take a paragraph here to observe that I was very impressed in our initial meeting with Bob Bauer and Ben Ginsberg. Despite being polar political opposites they demonstrated how Washington should work: they were respectful, collegial, sought compromise to advance the common agenda and seemed to be intent on checking politics at the door in order to get work done. It was refreshing and restored my faith that somewhere in the District there remains a potential for government to actually work for the people. I digress.
We advised them that the CalTech-MIT Voting Project would definitely be one resource they could benefit from consulting.
We offered our own organization, but with our tax exempt status still pending, it would be difficult politically and otherwise to rely on us much in a visible manner.
So the Chairs asked us if we could pull together a list -- not an official subcommittee, mind you, but a list of the top "go to" minds in the elections technology domain. We agreed and began a several-week process of vetting a list that needed to be winnowed down to about 20 for manageability. These experts would be brought in individually as desired, or collectively -- it was to be figured out later which would be most administratively expedient. Several of our readers, supporters, and those who know us were aware of this confidential effort. The challenge was lack of time to run the entire process of public recruiting and selection. So, they asked us to help expedite that, having determined we could gather the best in short order.
And that was fine because anyone was entitled to contact the Commission, submit letters and comments and come testify or speak at the several public hearings to be held.
So we did that. And several of that group were in fact utilized. Not everyone though, and that was kind of disappointing, but a function of the timing constraints.
The next major resource we advised they had to include, besides CalTech-MIT and a tech advisory group, was Rock The Vote. And that was because (notwithstanding their being a technology partner of ours) Rock The Vote has its ear to the rails of new and young voters, starting with their registration experience and initial opportunity to cast their ballot.
Finally, we noted that there were a couple of other resources they really could not afford to overlook, including the Verified Voting Foundation, L.A. County's VSAP Project, and Travis County's STAR-Vote Project.
The outcome of all of that brings me to the meat of this post about the PCEA Report and our real contribution. Sure, we had some behind the scenes involvement as I describe above. No big deal. We hope it helped.
The Real Opportunity for Innovation
But the real opportunity to contribute came in the creation of the PCEA Web Site and its resource toolkit pages.
On that site, the PCEA took our advice and chose to utilize Rock The Vote's open source voter registration tools, and specifically the foundational elements the TrustTheVote Project has built for a state's Voter Information Services Portal.
Together, Rock The Vote and the TrustTheVote Project are able to showcase the open source software that any State can adopt, adapt, and deploy -- for free (at least the adoption part) and without having to reinvent the wheel by paying for a ground-up custom build of their own online voter registration and information services portal.
We submit that this resource on the PCEA web site represents an important ingredient for injecting innovation into the stagnant technology environment of today's elections and voting systems world.
For the first time, there is production-ready open source software available for an important part of an elections official's administrative responsibilities that can lower costs, accelerate deployment and catalyze innovation.
To be sure, it's only a start -- it's the lower-hanging fruit of an election technology platform, the part that doesn't require any sort of certification. With our exempt status in place, and lots of things happening that we'll soon share, there is more, much more, to come. But this is a start.
There are 112 pages of goodness in the PCEA report. And there are some elements in there that deserve further discussion. But we humbly assert it's the availability of some open source software on their resource web site that we think represents a quiet breakthrough in elections technology innovation.
The news has been considerable. So, yep, we admit it. We're oozing pride today. And we owe it to your continued support of our cause. Thank you!
GAM | out
If you've read some of the ongoing thread about our VoteStream effort, it's been a lot about data and standards. Today is more of the same, but first with a nod that the software development is going fine as well. We've come up with a preliminary data model, gotten real results data from Ramsey County, Minnesota, and developed most of the key features in the VoteStream prototype, using the TrustTheVote Project's Election Results Reporting Platform. I'll have plenty to say about the data-wrangling as we move through several different counties' data. But today I want to focus on a key structuring principle that works both for data and for the work that real local election officials (LEOs) do, before an election, during election night, and thereafter.
Put simply, the basic structuring principle is that the election definition comes first, and the election results come later and refer to the election definition. This principle matches the work that LEOs do, using their election management system to define each contest in an upcoming election, define each candidate, and so on. The result of that work is a data set that both serves as an election definition, and also provides the context for the election by defining the jurisdiction in which the election will be held. The jurisdiction is typically a set of electoral districts (e.g. a congressional district, or a city council seat), and a county divided into precincts, each of which votes on a specific set of contests in the election.
Our shorthand term for this dataset is JEDI (jurisdiction election data interchange), which is all the data about an election that an independent system would need to know. Most current voting system products have an Election Management System (EMS) product that can produce a JEDI in a proprietary format, for use in reporting, or ballot counting devices. Several states and localities have already adopted the VIP standard for publishing a similar set of information.
We've adopted the VIP format as the standard that we'll be using on the TrustTheVote Project. And we're developing a few modest extensions to it that are needed to represent a full JEDI that meets the needs of VoteStream, or really any system that consumes and displays election results. All extensions are optional and backwards compatible, and we'll be submitting them as suggestions when we think we have a full set. So far, it's pretty basic: the inclusion of geographic data that describes a precinct's boundaries, and use of existing metadata to note whether a district is a federal, state, or local district.
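To make that concrete, here's a tiny, hand-invented fragment in the spirit of a VIP-style JEDI, with the two extensions sketched as optional elements. The element names are illustrative only -- not the actual VIP schema, nor the exact extensions we'll propose:

```python
import xml.etree.ElementTree as ET

# A toy JEDI fragment in the spirit of a VIP-style feed. Element names
# are illustrative only, not the actual VIP schema or our extensions.
JEDI_XML = """
<jedi>
  <electoral_district id="d-101">
    <name>State Senate District 7</name>
    <type>state</type>  <!-- sketch: federal / state / local metadata -->
  </electoral_district>
  <precinct id="p-0042">
    <name>Precinct 42</name>
    <electoral_district_ids>d-101</electoral_district_ids>
    <!-- sketch: optional boundary extension, as lon/lat pairs -->
    <boundary>-93.10 44.95 -93.08 44.95 -93.08 44.97</boundary>
  </precinct>
</jedi>
"""

root = ET.fromstring(JEDI_XML)
for precinct in root.iter("precinct"):
    name = precinct.findtext("name")
    districts = precinct.findtext("electoral_district_ids").split()
    print(f"{name} votes in districts: {districts}")
```

The point of the sketch is the shape of the thing: districts and precincts defined once, cross-referenced by id, with the extensions riding along as optional elements that older consumers can simply ignore.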
So far, this is working well, and we expect to be able to construct a VIP-standard JEDI for each county in our VoteStream project, based on the extant source data that we have. The next step, which may be a bit more hairy, is a similar standard for election results with the detailed information that we want to present via VoteStream.
PS: If you want to look at a small artificial JEDI, it's right here: Arden County, a fictional county that has just 3 precincts, about a dozen districts, and a Nov/2012 election. It's short enough that you can page through it and get a feel for what kinds of data are required.
Last time, I explained how our VoteStream work depends on the 3rd of 3 assumptions: loosely, that there might be a good way to get election results data (and other related data) out of their current hiding places, and into some useful software, connected by an election data standard that encompasses results data. But what are we actually doing about it? Answer: we are building prototypes of that connection, and the lynchpin is an election data standard that can express everything about the information that VoteStream needs. We've found that the VIP format is an existing, widely adopted standard that provides a good starting point. More details on that later, but for now the key words are "converters" and "connectors". We're developing technology that proves the concept that anyone with basic data modeling and software development skills can create a connector, or data converter, that transforms election data (including but most certainly not limited to vote counts) from one of a variety of existing formats, to the format of the election data standard.
And this is the central concept to prove -- because as we've been saying in various ways for some time, the data exists but is locked up in a variety of legacy and/or proprietary formats. These existing formats differ from one another quite a bit, and contain varying amounts of information beyond basic vote counts. There is good reason to be skeptical -- to suppose that it is a hard problem to take these different shapes and sizes of square data pegs (and pentagonal, octahedral, and many other shaped pegs!) and put them into a single round hole.
But what we're learning -- and the jury is still out, promising as our experience is so far -- is that all these existing data sets have basically similar elements that correspond to a single standard, and that it's not hard to develop prototype software that uses those correspondences to convert to a single format. We'll get a better understanding of the tricky bits as we go along making 3 or 4 prototype converters.
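As a miniature sketch of what one of those converters does (the county, column names, and record shape below are all invented; real converters handle much more than vote counts):

```python
import csv, io

# Invented raw export from a hypothetical county. Every legacy format
# differs, but contest / candidate / precinct / count are almost
# always present in some shape.
RAW_EXPORT = """PRECINCT,RACE,CHOICE,TOTAL
0042,US HOUSE DIST 4,SMITH,312
0042,US HOUSE DIST 4,JONES,287
"""

def convert(raw: str) -> list[dict]:
    """Map this county's column names onto a common record shape."""
    return [
        {
            "precinct_id": row["PRECINCT"],
            "contest": row["RACE"].title(),
            "candidate": row["CHOICE"].title(),
            "votes": int(row["TOTAL"]),
        }
        for row in csv.DictReader(io.StringIO(raw))
    ]

for record in convert(RAW_EXPORT):
    print(record)
```

Each county needs its own small mapping like this, but the target shape stays the same -- that's the round hole all the differently shaped pegs get fitted into.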
Much of this feasibility rests on a structuring principle that we've adopted, which runs parallel to the existing data standard that we've adopted. Much more on that principle, the standard, its evolution, and so on … yet to come. As we get more experience with data-wrangling and converter-creation, there will certainly be a lot more to say.
It's time to finish -- in two parts -- the long-ish explanation of the assumptions behind our current "VoteStream" prototype stage of the TrustTheVote Project's Election Results Reporting Platform (ENRS) project. As I said before, it is an exercise in validating some key assumptions, and discovering their limits. Previously, I've described our assumptions about election results data, and the software that can present it. Today, I'll explain the third of three basic assumptions, which in a nutshell is this:
- If the data has the characteristics that we assumed, and
- if the software (to present that data) is as feasible and useful as we assumed;
- then there is a method for getting the data from its source to the reporting software, and
- that method is practical for real-world elections organizations, scalable, and feasible to be adopted widely.
So, where are we today? Well, as previous postings have described, we made a good start on validating the first 2 assumptions during the previous design phase. And since starting this prototype phase, we've improved the designs and put them into action. So far so good: the data is richer than we assumed; the software is actually significantly more flexible than before, and effectively presents the data. We're pretty confident that our assumptions were valid on those two points.
But where did the 2012 election results data come from, and how did it get into the ENRS prototype? Invented elections, or small transcribed subsets of real results, were fine for design; but in this phase it needs to be real data, complete data, from real election officials, used in a regular and repeated way. That's the kind of connection between data source and ENRS software that we've been assuming.
Having stated this third of three assumptions, the next point is about what we're doing to prove that assumption and assess its limits. That will be part two of two of this last segment of my account of our assumptions and progress to date.
This evening at 5:00pm members of the TrustTheVote Project have been invited to attend an elections technology round table discussion in advance of a public hearing in Sacramento, CA scheduled for tomorrow at 2:00pm PST on new regulations governing Voting System Certification to be contained in Division 7 of Title 2 of the California Code of Regulations. Due to the level of activity, only our CTO, John Sebes is able to participate.
We were asked if John could be prepared to make some brief remarks regarding our view of the impact of SB-360 and its potential to catalyze innovation in voting systems. These types of events are always dynamic and fluid, and so we decided to publish our remarks below just in advance of this meeting.
Roundtable Meeting Remarks from the OSDV Foundation | TrustTheVote Project
We appreciate an opportunity to participate in this important discussion. We want to take about 2 minutes to comment on 3 considerations from our point of view at the TrustTheVote Project.
For SB-360 to succeed, we believe any effort to create a high-integrity certification process requires re-thinking how certification has been done to this point. Current federal certification, for example, takes a monolithic approach; that is, a voting system is certified based on a complete all-inclusive single closed system model. This is a very 20th century approach that makes assumptions about software, hardware, and systems that are out of touch with today’s dynamic technology environment, where the lifetime of commodity hardware is months.
We are collaborating with NIST on a way to update this outdated model with a "component-ized" approach; that is, a unit-level testing method, such that if a component needs to be changed, the only re-certification required would be of that discrete element, and not the entire system. There are enormous potential benefits including lowering costs, speeding certification, and removing a bar to innovation.
We're glad to talk more about this proposed updated certification model, as it might inform any certification processes to be implemented in California. Regardless, elections officials should consider that in order to reap the benefits of SB-360, the non-profit TrustTheVote Project believes a new certification process, component-ized as we describe it, is essential.
Second, there is a prerequisite for component-level certification that until recently wasn't available: common open data format standards that enable components to communicate with one another; for example, a format for a ballot counter's output of vote tally data that also serves as input to a tabulator component. Without common data formats, elections officials have to acquire a whole integrated product suite that communicates in a proprietary manner. With common data formats, you can mix and match; and perhaps more importantly, incrementally replace units over time, rather than doing what we like to refer to as "forklift upgrades" or "fleet replacements."
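Here's a minimal sketch of that ballot-counter-to-tabulator idea, with invented field names standing in for whatever the eventual standard specifies (the emerging standards are XML-based; JSON is used here only for readability):

```python
# Minimal sketch: a ballot counter's output in a common format that a
# tabulator from any vendor could consume. Field names are invented,
# standing in for whatever the eventual data standard specifies.
scanner_report = {
    "device_id": "scanner-07",
    "precinct_id": "p-0042",
    "tallies": [
        {"contest": "Mayor", "candidate": "Smith", "count": 154},
        {"contest": "Mayor", "candidate": "Jones", "count": 148},
    ],
}

def tabulate(reports: list[dict]) -> dict:
    """Sum per-device tallies into contest-wide totals."""
    totals: dict = {}
    for report in reports:
        for t in report["tallies"]:
            key = (t["contest"], t["candidate"])
            totals[key] = totals.get(key, 0) + t["count"]
    return totals

# Reports from many scanners, all in one format: mix and match.
print(tabulate([scanner_report]))
```

Because the format, not the vendor, defines the interface, the scanner and the tabulator can come from different suppliers and be replaced independently.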
The good news is the scope for ballot casting and counting is sufficiently focused to avoid distraction from the many other standards elements of the entire elections ecosystem. And there is more goodness because standards bodies are working on this right now, with participation by several state and local election officials, as well as vendors present today, and non-profit projects like TrustTheVote. They deserve congratulations for reaching this imperative state of data standards détente. It's not finished, but the effort and momentum is there.
So, elections officials should bear in mind that benefits of SB-360 also rest on the existence of common open elections data standards.
Third: Commercial Revitalization
Finally, this may be the opportunity to realize a vision we have: that open data standards, a new certification process, and lowered bars to innovation through open sourcing will reinvigorate a stagnant voting technology industry. Because the passage of SB-360 can fortify these three developments, there can (and should) be renewed commercial enthusiasm for innovation. That should bring about new vendors, new solutions, and new empowerment of elections officials themselves to choose how they want to raise their voting systems to a higher grade of performance, reliability, fault tolerance, and integrity.
One compelling example is the potential for commodity commercial off-the-shelf hardware to fully meet the needs of voting and elections machinery. On that point, let us offer an important clarification and dispel a misconception about "rolling your own." This does not mean that elections officials are about to be left to self-vend; that is, to self-construct and support their own open, standard, commodity voting system components. A few jurisdictions may consider it, but in the vast majority of cases, the Foundation forecasts that this will simply introduce more choice, rather than forcing officials to become do-it-yourself types. Some may choose to contract with a systems integrator to deploy a new system integrating commodity hardware and open source software. Others may choose vendors who offer out-of-the-box open source solutions in pre-packaged hardware.
Choice is good: it’s an awesome self-correcting market regulator and it ensures opportunity for innovation. To the latter point, we believe initiatives underway like STAR-Vote in Travis County, TX, and the TrustTheVote Project will catalyze that innovation in an open source manner, thereby lowering costs, improving transparency, and ultimately improving the quality of what we consider critical democracy infrastructure.
In short, we think SB-360 can help inject new vitality in voting systems technology (at least in the State of California), so long as we can realize the benefits of open standards and drive the modernization of certification.
EDITORIAL NOTES: There was chatter earlier this Fall about the extent to which SB-360 allegedly makes unverified, non-certified voting systems a possibility in California. We don't read SB-360 that way at all. We encourage you to read the text of the legislation as passed into law for yourself, and start with this meeting notice digest. In fact, to realize the kind of vision that leading jurisdictions imagine, we cannot, nor should we, eliminate certification, and we think charges that this is what will happen are misinformed. We simply need to modernize how certification works to enable this kind of innovation. We think our comments today bear that out.
Moreover, have a look at the Agenda for tomorrow's hearing on implementation of SB-360. In sum and substance the agenda is to discuss:
- Establishing the specifications for voting machines, voting devices, vote tabulating devices, and any software used for each, including the programs and procedures for vote tabulating and testing. (The proposed regulations would implement, interpret and make specific Section 19205 of the California Elections Code.);
- Clarifying the requirements imposed by recently chaptered Senate Bill 360, Chapter 602, Statutes of 2013, which amended California Elections Code Division 19 regarding the certification of voting systems; and
- Clarifying the newly defined voting system certification process, as prescribed in Senate Bill 360.
Finally, there has been an additional charge that SB-360 is intended to "empower" LA County, such that what LA County may build, they (or someone on their behalf) will sell as a voting system to other jurisdictions. We think this allegation is also misinformed, for two reasons: first, assuming LA County builds their system on open source, there is a question as to what specifically would or could be offered for sale; and second, notwithstanding offering open source for sale (which technically can be done... technically), it seems to us that if such a system is built with public dollars, then it is, in fact, publicly owned. From what we understand, a government agency cannot offer for sale assets developed with public dollars, but it can give them away. And indeed, this is what we've witnessed over the years in other jurisdictions.
A rose by any other name would smell as sweet, but if you want people to understand what a software package does, it needs a good name. In our Election Night Reporting System project, we've learned that it's not just about election night, and it's not just about reporting either. Even before election night, a system can convey a great deal of information about an upcoming election and the places and people that will be voting in it. To take a simple example: we've learned that in some jurisdictions, a wealth of voter registration information is available and ready to be reported alongside election results data that will start streaming in on election night from precincts and counties all over.
It's not a "system" either. The technology that we've been building can be used to build a variety of useful systems. It's better perhaps to think of it as a platform for "Election Result Reporting" systems of various kinds. Perhaps the simplest and most useful system to build on this platform is a system that election officials can load with data in a standard format, and which then publishes the aggregated data as an "election results and participation data feed". No web pages, no API, just a data feed, like the widely used (in election land) data feed technique using the Voting Information Project and their data format.
In fact, one of the recent lessons learned is that the VIP data standard is a really good candidate for an election data standard as well, covering:
- election definitions (it is that already),
- election results that reference an election definition (needs a little work to get there), and
- election participation data (a modest extension to election results).
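To illustrate the second and third items (with invented field names, not the actual VIP schema), the idea is that results and participation records simply point back, by id, at objects the election definition already defines:

```python
# Invented field names, not the actual VIP schema: a sketch of results
# and participation records referencing an election definition by id.
election_definition = {
    "contests": {"c-9": {"name": "Governor"}},
    "candidates": {"cand-1": {"name": "Smith", "contest_id": "c-9"}},
}

# Results refer back to the definition instead of repeating it...
results = [{"candidate_id": "cand-1", "precinct_id": "p-0042", "votes": 312}]

# ...and participation data is a modest extension alongside them.
participation = [{"precinct_id": "p-0042", "registered": 1210, "ballots_cast": 604}]

for r in results:
    cand = election_definition["candidates"][r["candidate_id"]]
    contest = election_definition["contests"][cand["contest_id"]]
    print(f"{contest['name']}: {cand['name']} has {r['votes']} votes in {r['precinct_id']}")
```

That reference-by-id structure is the "little work to get there" noted above: once results point at definitions, both can travel in the same standard feed without duplication.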
As a result (no pun intended) we're starting work on defining requirements for how to use VIP format in our prototype of the "Election Results Reporting Platform" (ERRP).
But the prototype needs to be a lot more than the ERRP software packaged into a data feed. It needs to also provide a web services API to the data, and it needs to have a web user interface for ordinary people to use. So we've decided to give the prototype a better name, which for now is "VoteStream".
Our VoteStream prototype shows how ERRP technology can be packaged to create a system that's operated by local election officials (LEOs) to publish election results -- including but not limited to publishing unofficial results data on election night, as the precincts report in. Then, later, the LEOs can expand the data beyond vote counts that say who won or lost. That timely access on election night is important, but just as important is the additional information that can be added during the work in which the total story on election results is put together -- and even more added data after the completion of that "canvass" process.
That's VoteStream. Some other simpler ERRP-based system might be different: perhaps VoteFeed, operated by a state elections organization to collate LEO's data and publish to data hounds, but not to the general public and their browsers. Who knows? We don't, not yet anyhow. We're building the platform (ERRP), and building a prototype (VoteStream) of an LEO-oriented system on the platform.
The obvious next question is: what is all that additional data beyond the winner/loser numbers on election night? We're still learning the answers to that question, and will share more as we go along.
Today, I'll be concluding my description of one area of assumptions in our Election Night Reporting System project -- our assumptions about software. In my last post, I said that our assumptions about software were based on two things: our assumptions about election results data (which I described previously), and the results of the previous, design-centric phase of our ENRS work. Those results consist of two seemingly disparate parts:
1. the UX design itself, that enables people to ask ENRS questions, and
2. a web service interface definition, that enables software to ask ENRS questions.
In case (1), the answer is web pages delivered by a web app. In case (2) the answers are data delivered via an application programming interface (API).
Exhibit A is our ENRS design website http://design.enrs.trustthevote.org which shows a preliminary UX design for a map-based visualization and navigation of the election results data for the November 2010 election in Travis County, Texas. The basic idea was to present a modest but useful variety of ways to slice and dice the data that would be meaningful to ordinary voters and observers of elections. The options include slicing the data at the county level, at the individual precinct level, or in between, and filtering by one of various kinds of election results or contests or referenda. Though preliminary, the UX design was well received, and it's the basis for current work to do a more complete UX that also provides features for power users (data-heads) without impacting the view of ordinary observers.
Exhibit B is the application programming interface (API), or for now just one example of it:
That does not look like a very exciting web page (click it now if you don't believe me!), and a full answer of "what's an API" can wait for another day.
But the point here is that the URL is a way for software to request a very specific slice through a large set of data, and get it in a software-centric, digestible way. The URL (which you can see above in the address bar) is the question, and the answer is what you see above as the page view. Now, imagine something like your favorite NBA or NFL scoreboard app for your phone, periodically getting updates on how your favorite candidate is doing, and alerting you in a similar way that you get alerts about your favorite sports team. That app asks questions of ENRS, and gets answers, in exactly the way you see above, but of course it is all "under the hood" of the app's user interface.
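To sketch what that looks like from the app's side (the endpoint URL and JSON field names here are hypothetical, not the actual ENRS API):

```python
import json
from urllib.request import urlopen

# Hypothetical endpoint and field names -- a sketch of how an app
# might poll an ENRS-style API, not the actual ENRS interface.
URL = "http://enrs.example.org/contests/governor/results?precinct=p-0042"

def fetch_results(url: str) -> dict:
    """The URL is the question; the JSON body is the answer."""
    with urlopen(url) as response:
        return json.load(response)

def check_favorite(url: str, candidate: str) -> None:
    """What a scoreboard-style app would do on a timer, under the hood."""
    data = fetch_results(url)
    for row in data.get("results", []):
        if row.get("candidate") == candidate:
            print(f"{candidate}: {row['votes']} votes so far")

# check_favorite(URL, "Smith")  # left commented out: the endpoint is fictional
```
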
So, finally, we can re-state the software assumption of our ENRS project:
- if one can get sufficiently rich election data, unlocked from the source, in a standard format,
- then one can feasibly develop a lightweight modern cloud-oriented web app, including a web service, that enables election officials to both:
- help ordinary people understand complex election results data, and
- help independent software navigate that data, and present it to the public in many ways, far beyond the responsibilities of election officials.
We're trying to prove that assumption, by developing the software -- in our usual open source methodology of course -- in a way that (we hope) provides a model for any tech organization to similarly leverage the same data formats and APIs.
Today I'm continuing with the second of a 3-part series about what we at the TrustTheVote Project are hoping to prove in our Election Night Reporting System project. As I wrote earlier, we have assumptions in three areas, one of which is software. I'll try to put into a nutshell a question that we're working on an answer to:
If you were able to get the raw election results data available in a wonderful format, what types of useful Apps and services could you develop?
OK, that was not exactly the shortest question, and in order to understand what "wonderful format" means, you'd have to read my previous post on Assumptions About Data. But instead, maybe you'd like to take a minute to look at some of the work from our previous phase of ENRS work, where we focused on two seemingly unrelated aspects of ENRS technology:
- The user experience (UX) of a Web application that local election officials could provide to help ordinary folks visualize and navigate complex election results information.
- A web services API that would enable other folks' systems (not elections officials') to receive and use the data in a manner that's sufficiently flexible for a variety of other services, ranging from professional data mining to handy mobile apps.
They're related because the end results embodied a set of assumptions about available data.
Now we're seeing that this type of data is available, and we're trying to prove with software prototyping that many people (not just elections organizations, and not just the TrustTheVote Project) could do cool things with that data.
There's a bit more to say -- or rather, to show and tell -- that should fit in one post, so I'll conclude next time.
PS: Oh, there is one more small thing: we've had a bit of an "ah-ha" here in the Core Team, prodded by our peeps on the Project Outreach team. This data, and the apps and services that can leverage it for all kinds of purposes, has use far beyond the night of an election. And we mentioned that once before, but the ah-ha is that what we're working on is not just about election night results... it's about all kinds of election results reporting, anytime, anywhere. And that means ENRS is really not that good of a code name or acronym. Watch as "ENRS" morphs into "E2RP" for our internal project name -- Election Results Reporting Platform.
In a previous post I said that our ENRS project is basically an effort to investigate a set of assumptions about how the reporting of election results can be transformed with innovations right at the source -- in the hands of the local election officials who manage the elections that create the data. One of those assumptions is that we -- and I am talking about election technologists in a broad community, not only the TrustTheVote Project -- can make election data standards that are important in five ways:
- Flexible to encompass data coming from a variety of elections organizations nationwide.
- Structured to accommodate the raw source data from a variety of legacy and/or proprietary systems, feasibly translated or converted into a standard, common data format.
- Able to simply express the most basic results data: how many votes each candidate received.
- Able to express more than just winners and losers data, but nearly all of the relevant information that election officials currently have but don't widely publish (i.e., data on participation and performance).
- Flexible to express detailed breakdowns of raw data, into precinct-level data views, including all the relevant information beyond winners and losers.
Hmm. It took a bunch of words to spell that out, and for everyone but election geeks it may look daunting. To simplify, here are three important things we're doing to prove out those assumptions to some extent.
- We're collecting real election results data from a single election (November, 2012) from a number of different jurisdictions across the country, together with supporting information about election jurisdictions' structure, geospatial data, registration, participation, and more.
- We're learning about the underlying structure of this data in its native form, by collaborating with the local elections organizations that know it best.
- We're normalizing the data, rendering it in a standard data format, and using software to crunch that data, in order to present it in a digestible way to regular folks who aren't "data geeks."
And all of that comprises one set of assumptions we're working on; that is, we're assuming all of these activities are feasible and can bear fruit in an exploratory project. Steady as she goes; so far, so good.
In my last post, I said that the time is right for breaking the logjam in election results reporting, enabling a big reload on technology for reporting and a big increase in public transparency. Now, let me explain why, starting with the biggest of several reasons: elections data standards are needed to define common data formats into which a variety of results data can be converted.
Those standards are emerging now, and previously the lack of them was a real problem.
- We can't reasonably expect a local elections office to take additional efforts to publish the data, or otherwise serve the public with election results services, if the result will be just one voice in a Babel of dozens of different data languages and dialects.
- We can't reasonably expect a 3rd party organization to make use of the data from many sources, unless it's available in a single standard format, or they have the wherewithal to do huge amounts of work on data conversion, repeatedly.
The good news is that election data standards have come a long way in the last couple of years, due to:
- Significant support from the U.S. Government's standards body -- the National Institute of Standards and Technology (NIST);
- Sustained effort from the volunteers working in standards committees in the international standards body -- the IEEE 1622 Working Group; and
- Practical experience with evolving de facto standards, particularly with the data formats and services of the Pew Voting Information Project (VIP), and the several elections organizations that participate in providing VIP data.
There are other reasons why the time is right, but they are more widely understood:
- We now have technologies that perennially understaffed and underfunded elections organizations can feasibly adopt quickly and cheaply, including powerful web application frameworks, supported by cloud hosting operations, within a growing ecosystem of web services that enable many organizations to access a variety of data and apps.
- "Open government," "open data," and even "big data" are buzz phrases now commonly understood, which describe a powerful and maturing set of technologies and IT practices. This new language of government IT innovation facilitates actionable conversations about the opportunity to provide the public with far more robust information on elections and their participation and performance.
It's a "promised land" of government IT and the so-called Gov 2.0 movement (arguably we think more like Gov 3.0 when you think about it in terms of 2.0 was all about collaboration and 3.0 is becoming all about the "utility web"--real apps available on demand -- a direction some of these services will inevitably take). However, for election technology in the near term, we first have to cross the river by learning how to "get the data out" (and that is more like Gov 2.0) More next time on our assumptions about how that river can be crossed, and our experiences to date on doing that crossing.
Now that we are a ways into our "Election Night Reporting System" project, we want to start sharing some of what we are learning. We had talked about a dedicated Wiki or some such, but our time was better spent digging into the assignment graciously supported by the Knight Foundation Prototype Fund. Perhaps the best place to start is a summary of what we've been saying within the ENRS team about what we're trying to accomplish. First, we're toying with this silly internal project code name, "ENRS," and we don't expect it to hang around forever. Our biggest gripe is that what we're trying to do extends way beyond the night of elections, but more about that later.
Our ENRS project is based on a few assumptions, or perhaps one could say some hypotheses that we hope to prove. "Prove" is probably a strong word. It might be better to say that we expect our assumptions will be valid, but with practical limitations that we'll discover.
The assumptions are fundamentally about three related topics:
- The nature and detail of election results data;
- The types of software and services that one could build to leverage that data for public transparency; and
- Perhaps most critically, the ability for data and software to interact in a standard way that could be adopted broadly.
As we go along in the project, we hope to say more about the assumptions in each of these areas.
But it is the goal of feasible broad adoption of standards that is really the most important part. There's a huge amount of latent value (in terms of transparency and accountability) to be had from aggregating and analyzing a huge amount of election results data. But most of that data is effectively locked up, at present, in thousands of little lockboxes of proprietary and/or legacy data formats.
It's not as though most local election officials -- the folks who are the source of election results data, as they conduct elections and the process of tallying ballots -- want to keep the data locked up, or to impede others' activities in aggregating results data across counties and states and analyzing it. Rather, most local election officials just don't have the means to "get the data out" in a way that supports such activities.
We believe that the time is right to create the technology to do just that, and enable election officials to use the technology quickly and easily. And this prototype phase of ENRS is the beginning.
Lastly, we have many people to thank, starting with Chris Barr and the Knight Foundation for its grant to support this prototype project. Further, the current work is based on a previous design phase. Our thanks to our interactive design team led by DDO, and the Travis County, TX Elections Team who provided valuable input and feedback during that earlier phase of work, without which the current project wouldn't be possible.