The second of two blog posts exploring how the TrustTheVote Project fits in the "civic tech" landscape.
To our elections official stakeholders: Chief Technology Officer John Sebes covers a point that seems to be popping up in discussions more and more. There seems to be some confusion about what "open source" means in the context of software used for election administration or voting. That's understandable, because some election I.T. folks, and some current vendors, may not be familiar with the prior usage of the term "open source" -- especially since it is now used in so many different ways to describe (variously) people, code, legal agreements, etc. So, John hopes to get our stakeholders back to basics on this.
So where does the TrustTheVote Project fit in the broader “civic tech” movement that so many people in the technology world write and talk about? This is the first of two posts on this thought.
On National Voter Registration Day, we note that The TrustTheVote Project is behind an open source effort to innovate online voter registration tools for States and public registration services. Here's the back story.
David Plouffe, President Obama’s top political and campaign strategist and the mastermind behind the winning 2008 and 2012 campaigns, wrote a forward-looking op-ed [paywall] in the Wall Street Journal recently about the politics of the future and how they might look.
He touched on how technology will continue to change the way campaigns are conducted – more use of mobile devices, even holograms, and more micro-targeting at individuals. But he also mentioned how people might cast their votes in the future, and that is what caught our eye here at the TrustTheVote Project. There is a considerable chasm to cross between vision and reality.
This week the PCEA finally released its long-awaited report to the President. It's loaded with good recommendations. Over the next several days of posts we'll give you our take on some of them. For the moment, we want to call your attention to a couple of underpinning elements now that it's done.
The Resource Behind the Resources
Early in the formation of what initially was referred to as the "Bauer-Ginsberg Commission," we were asked to visit the co-chairs in Washington, D.C. to chat about technology experts and resources. We have a Board member who knows them both, and when asked, we were honored to respond.
Early on we advised the Co-Chairs that their research would be incomplete without speaking with several election technology experts, and of course they agreed. The question was how to create a means to do so and not bog down the progress governed by layers of necessary administrative regulations.
I take a paragraph here to observe that I was very impressed in our initial meeting with Bob Bauer and Ben Ginsberg. Despite being polar political opposites they demonstrated how Washington should work: they were respectful, collegial, sought compromise to advance the common agenda and seemed to be intent on checking politics at the door in order to get work done. It was refreshing and restored my faith that somewhere in the District there remains a potential for government to actually work for the people. I digress.
We advised them that looking to the CalTech-MIT Voting Project would definitely be one resource they could benefit from having.
We offered our own organization, but with our tax exempt status still pending, it would be difficult politically and otherwise to rely on us much in a visible manner.
So the Chairs asked us if we could pull together a list -- not an official subcommittee, mind you, but a list of the top "go to" minds in the elections technology domain. We agreed and began a several-week process of vetting a list that needed to be winnowed down to about 20 for manageability. These experts would be brought in individually or collectively as desired -- it was to be figured out later which would be most administratively expedient. Several of our readers, supporters, and those who know us were aware of this confidential effort. The challenge was lack of time to run the entire process of public recruiting and selection, so they asked us to help expedite that, having determined we could gather the best in short order.
And that was fine because anyone was entitled to contact the Commission, submit letters and comments and come testify or speak at the several public hearings to be held.
So we did that. And several of that group were in fact utilized. Not everyone though, and that was kind of disappointing, but a function of the timing constraints.
The next major resource we advised they had to include, besides CalTech-MIT and a tech advisory group, was Rock The Vote. And that was because (notwithstanding their being a technology partner of ours) Rock The Vote has its ear to the rails of new and young voters, starting with their registration experience and initial opportunity to cast their ballot.
Finally we noted that there were a couple of other resources they really could not afford to overlook, including the Verified Voting Foundation, L.A. County's VSAP Project, and Travis County's STAR-Vote Project.
The outcome of all of that brings me to the meat of this post about the PCEA Report and our real contribution. Sure, we had some behind the scenes involvement as I describe above. No big deal. We hope it helped.
The Real Opportunity for Innovation
But the real opportunity to contribute came in the creation of the PCEA Web Site and its resource toolkit pages.
On that site, the PCEA took our advice and chose to utilize Rock The Vote's open source voter registration tools and specifically the foundational elements the TrustTheVote Project has built for a States' Voter Information Services Portal.
Together, Rock The Vote and the TrustTheVote Project are able to showcase the open source software that any State can adopt, adapt, and deploy -- for free (at least the adoption part) -- and without having to reinvent the wheel by paying for a ground-up custom build of their own online voter registration and information services portal.
We submit that this resource on their PCEA web site represents an important ingredient to injecting innovation into a stagnant technology environment of today's elections and voting systems world.
For the first time, there is production-ready open source software available for an important part of an elections official's administrative responsibilities that can lower costs, accelerate deployment and catalyze innovation.
To be sure, it's only a start -- it's the lower-hanging fruit of an election technology platform, the part that doesn't require any sort of certification. With our exempt status in place, and lots of things happening that we'll soon share, there is more, much more, to come. But this is a start.
There are 112 pages of goodness in the PCEA report, and there are some elements in there that deserve further discussion. But we humbly assert it's the availability of some open source software on their resource web site that represents a quiet breakthrough in elections technology innovation.
The news has been considerable. So, yep, we admit it. We're oozing pride today. And we owe it to your continued support of our cause. Thank you!
GAM | out
If you've read some of the ongoing thread about our VoteStream effort, it's been a lot about data and standards. Today is more of the same, but first with a nod that the software development is going fine as well. We've come up with a preliminary data model, gotten real results data from Ramsey County, Minnesota, and developed most of the key features in the VoteStream prototype, using the TrustTheVote Project's Election Results Reporting Platform. I'll have plenty to say about the data-wrangling as we move through several different counties' data. But today I want to focus on a key structuring principle that works both for data and for the work that real local election officials (LEOs) do, before an election, during election night, and thereafter.
Put simply, the basic structuring principle is that the election definition comes first, and the election results come later and refer to the election definition. This principle matches the work that LEOs do, using their election management system to define each contest in an upcoming election, define each candidate, and so on. The result of that work is a data set that both serves as an election definition and provides the context for the election by defining the jurisdiction in which the election will be held. The jurisdiction is typically a set of electoral districts (e.g. a congressional district, or a city council seat) and a county divided into precincts, each of which votes on a specific set of contests in the election.
Our shorthand term for this dataset is JEDI (jurisdiction election data interchange), which is all the data about an election that an independent system would need to know. Most current voting system products have an Election Management System (EMS) product that can produce a JEDI in a proprietary format, for use in reporting, or ballot counting devices. Several states and localities have already adopted the VIP standard for publishing a similar set of information.
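To make that structuring principle concrete, here is a minimal sketch of what a JEDI ties together. This is illustrative Python with invented names -- not the VIP schema or any actual EMS output -- capturing just the relationships described above: districts define contests, and each precinct votes on the contests of the districts it lies in.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class District:
    id: str
    name: str
    level: str  # e.g. "federal", "state", or "local"

@dataclass
class Precinct:
    id: str
    name: str
    district_ids: List[str]  # districts this precinct lies within

@dataclass
class Contest:
    id: str
    office: str
    district_id: str  # every contest belongs to exactly one district
    candidates: List[str] = field(default_factory=list)

@dataclass
class ElectionDefinition:
    """The 'election definition comes first' dataset: jurisdiction plus contests."""
    election_id: str
    date: str
    districts: List[District]
    precincts: List[Precinct]
    contests: List[Contest]

    def contests_for_precinct(self, precinct: Precinct) -> List[Contest]:
        # A precinct votes on exactly the contests of the districts it lies in.
        return [c for c in self.contests if c.district_id in precinct.district_ids]
```

Election results produced later would then refer back to these ids, rather than restating any of the definition.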
We've adopted the VIP format as the standard that we'll be using on the TrustTheVote Project. And we're developing a few modest extensions to it that are needed to represent a full JEDI that meets the needs of VoteStream, or really any system that consumes and displays election results. All extensions are optional and backwards compatible, and we'll be submitting them as suggestions once we think we have a full set. So far, it's pretty basic: the inclusion of geographic data that describes a precinct's boundaries; a use of existing metadata to note whether a district is a federal, state, or local district.
So far, this is working well, and we expect to be able to construct a VIP-standard JEDI for each county in our VoteStream project, based on the extant source data that we have. The next step, which may be a bit more hairy, is a similar standard for election results with the detailed information that we want to present via VoteStream.
PS: If you want to look at a small artificial JEDI, it's right here: Arden County, a fictional county that has just 3 precincts, about a dozen districts, and a Nov/2012 election. It's short enough that you can page through it and get a feel for what kinds of data are required.
Last time, I explained how our VoteStream work depends on the 3rd of 3 assumptions: loosely, that there might be a good way to get election results data (and other related data) out of their current hiding places, and into some useful software, connected by an election data standard that encompasses results data. But what are we actually doing about it? Answer: we are building prototypes of that connection, and the lynchpin is an election data standard that can express everything about the information that VoteStream needs. We've found that the VIP format is an existing, widely adopted standard that provides a good starting point. More details on that later, but for now the key words are "converters" and "connectors". We're developing technology that proves the concept that anyone with basic data modeling and software development skills can create a connector, or data converter, that transforms election data (including but most certainly not limited to vote counts) from one of a variety of existing formats, to the format of the election data standard.
And this is the central concept to prove -- because as we've been saying in various ways for some time, the data exists but is locked up in a variety of legacy and/or proprietary formats. These existing formats differ from one another quite a bit, and contain varying amounts of information beyond basic vote counts. There is good reason to be skeptical -- to suppose that it is a hard problem to take these different shapes and sizes of square data pegs (and pentagonal, octahedral, and many other shaped pegs!) and put them into a single round hole.
But what we're learning -- and the jury is still out, promising as our experience is so far -- is that all these existing data sets have basically similar elements that correspond to a single standard, and that it's not hard to develop prototype software that uses those correspondences to convert to a single format. We'll get a better understanding of the tricky bits as we go along making 3 or 4 prototype converters.
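As a sketch of what one of those converters amounts to, here's a toy example in Python. The legacy column names and the target field names are invented for illustration; a real converter would map an actual vendor export into the standard format, but the shape of the job is the same: parse the source layout, emit common records.

```python
import csv
import io

def convert_legacy_results(text: str) -> list:
    """Convert a hypothetical legacy CSV results export into a common
    record shape. Only the input mapping differs from converter to
    converter; the output shape stays the same across all of them."""
    reader = csv.DictReader(io.StringIO(text))
    return [
        {
            "precinct_id": row["precinct"].strip(),
            "contest_id": row["contest"].strip(),
            "candidate": row["candidate"].strip(),
            "count": int(row["votes"]),
        }
        for row in reader
    ]

# A made-up legacy export, standing in for one county's native format.
legacy_export = """precinct,contest,candidate,votes
P-001,US-Senate,Smith,412
P-001,US-Senate,Jones,390
"""
records = convert_legacy_results(legacy_export)
```

Each differently shaped source peg gets its own small mapping function, and everything downstream sees only the common records.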
Much of this feasibility rests on a structuring principle that we've adopted, which runs parallel to the existing data standard that we've adopted. Much more on that principle, the standard, its evolution, and so on … yet to come. As we get more experience with data-wrangling and converter-creation, there will certainly be a lot more to say.
It's time to finish -- in two parts -- the long-ish explanation of the assumptions behind our current "VoteStream" prototype stage of the TrustTheVote Project's Election Results Reporting Platform (ENRS) project. As I said before, it is an exercise in validating some key assumptions, and discovering their limits. Previously, I've described our assumptions about election results data, and the software that can present it. Today, I'll explain the 3rd of three basic assumptions, which in a nutshell is this:
- If the data has the characteristics that we assumed, and
- if the software (to present that data) is as feasible and useful as we assumed;
- then there is a method for getting the data from its source to the reporting software, and
- that method is practical for real-world elections organizations, scalable, and feasible to adopt widely.
So, where are we today? Well, as previous postings have described, we made a good start on validating the first 2 assumptions during the previous design phase. And since starting this prototype phase, we've improved the designs and put them into action. So far so good: the data is richer than we assumed; the software is actually significantly more flexible than before, and effectively presents the data. We're pretty confident that our assumptions were valid on those two points.
But where did the 2012 election results data come from, and how did it get into the ENRS prototype? Invented elections, or small transcribed subsets of real results, were fine for design; but in this phase it needs to be real data, complete data, from real election officials, used in a regular and repeated way. That's the kind of connection between data source and ENRS software that we've been assuming.
Having stated this third of three assumptions, the next point is about what we're doing to prove that assumption and assess its limits. That will be part two of two of this last segment of my account of our assumptions and progress to date.
A rose by any other name would smell as sweet, but if you want people to understand what a software package does, it needs a good name. In our Election Night Reporting System project, we've learned that it's not just about election night, and it's not just about reporting either. Even before election night, a system can convey a great deal of information about an upcoming election and the places and people that will be voting in it. To take a simple example: we've learned that in some jurisdictions, a wealth of voter registration information is available and ready to be reported alongside election results data that will start streaming in on election night from precincts and counties all over.
It's not a "system" either. The technology that we've been building can be used to build a variety of useful systems. It's better perhaps to think of it as a platform for "Election Result Reporting" systems of various kinds. Perhaps the simplest and most useful system to build on this platform is a system that election officials can load with data in a standard format, and which then publishes the aggregated data as an "election results and participation data feed". No web pages, no API, just a data feed, like the widely used (in election land) data feed technique using the Voting Information Project and their data format.
In fact, one of the recent lessons learned, is that the VIP data standard is a really good candidate for an election data standard as well, including:
- election definitions (it is that already),
- election results that reference an election definition (needs a little work to get there), and
- election participation data (a modest extension to election results).
As a result (no pun intended) we're starting work on defining requirements for how to use VIP format in our prototype of the "Election Results Reporting Platform" (ERRP).
But the prototype needs to be a lot more than the ERRP software packaged into a data feed. It needs to also provide a web services API to the data, and it needs to have a web user interface for ordinary people to use. So we've decided to give the prototype a better name, which for now is "VoteStream".
Our VoteStream prototype shows how ERRP technology can be packaged to create a system that's operated by local election officials (LEOs) to publish election results -- including but not limited to publishing unofficial results data on election night, as the precincts report in. Then, later, the LEOs can expand the data beyond vote counts that say who won or lost. That timely access on election night is important, but just as important is the additional information that can be added during the work in which the total story on election results is put together -- and even more added data after the completion of that "canvass" process.
That's VoteStream. Some other simpler ERRP-based system might be different: perhaps VoteFeed, operated by a state elections organization to collate LEO's data and publish to data hounds, but not to the general public and their browsers. Who knows? We don't, not yet anyhow. We're building the platform (ERRP), and building a prototype (VoteStream) of an LEO-oriented system on the platform.
The obvious next question is: what is all that additional data beyond the winner/loser numbers on election night? We're still learning the answers to that question, and will share more as we go along.
Today, I'll be concluding my description of one area of assumptions in our Election Night Reporting System project -- our assumptions about software. In my last post, I said that our assumptions about software were based on two things: our assumptions about election results data (which I described previously), and the results of the previous, design-centric phase of our ENRS work. Those results consist of two seemingly disparate parts:
- the UX design itself, that enables people to ask ENRS questions, and
- a web service interface definition, that enables software to ask ENRS questions.
For people, the answer is web pages delivered by a web app. For software, the answers are data delivered via an application programming interface (API).
Exhibit A is our ENRS design website http://design.enrs.trustthevote.org which shows a preliminary UX design for a map-based visualization and navigation of the election results data for the November 2010 election in Travis County, Texas. The basic idea was to present a modest but useful variety of ways to slice and dice the data that would be meaningful to ordinary voters and observers of elections. The options include slicing the data at the county level, at the individual precinct level, or in between, and filtering by one of various kinds of election results or contests or referenda. Though preliminary, the UX design was well received, and it's the basis for current work to do a more complete UX that also provides features for power users (data-heads) without impacting the view of ordinary observers.
Exhibit B is the application programming interface (API), or for now just one example of it:
That does not look like a very exciting web page (click it now if you don't believe me!), and a full answer of "what's an API" can wait for another day.
But the point here is that the URL is a way for software to request a very specific slice through a large set of data, and get it in a software-centric, digestible way. The URL (which you can see above in the address bar) is the question, and the answer is what you see above as the page view. Now, imagine something like your favorite NBA or NFL scoreboard app for your phone, periodically getting updates on how your favorite candidate is doing, and alerting you the way you get alerts about your favorite sports team. That app asks questions of ENRS, and gets answers, in exactly the way you see above, but of course it is all "under the hood" of the app's user interface.
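The question-and-answer pattern can be sketched in a few lines. The endpoint path, query parameters, and JSON shape below are invented for illustration -- they are not the actual ENRS API -- but they show how an app would build the "question" URL and read the "answer":

```python
import json
from urllib.parse import urlencode

def results_question(base_url: str, county: str, contest: str) -> str:
    """Build the URL -- the 'question' -- for one specific slice of results."""
    return base_url + "/results?" + urlencode({"county": county, "contest": contest})

def current_leader(answer_body: str) -> str:
    """Read the JSON 'answer' and pick the candidate with the most votes,
    the way a scoreboard-style app might before alerting its user."""
    totals = json.loads(answer_body)["totals"]
    return max(totals, key=totals.get)

question = results_question("https://example.org/enrs", "travis", "us-senate")
# A hand-written stand-in for the server's JSON answer:
answer = '{"totals": {"Smith": 412, "Jones": 390}}'
```

A real app would fetch the URL periodically and compare answers between polls to decide when to raise an alert.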
So, finally, we can re-state the software assumption of our ENRS project:
- if one can get sufficiently rich election data, unlocked from the source, in a standard format,
- then one can feasibly develop a lightweight modern cloud-oriented web app, including a web service, that enables election officials to both:
- help ordinary people understand complex election results data, and
- help independent software navigate that data, and present it to the public in many ways, far beyond the responsibilities of election officials.
We're trying to prove that assumption, by developing the software -- in our usual open source methodology of course -- in a way that (we hope) provides a model for any tech organization to similarly leverage the same data formats and APIs.
Today I'm continuing with the second of a 3-part series about what we at the TrustTheVote Project are hoping to prove in our Election Night Reporting System project. As I wrote earlier, we have assumptions in three areas, one of which is software. I'll try to put into a nutshell a question that we're working on an answer to:
If you were able to get the raw election results data available in a wonderful format, what types of useful Apps and services could you develop?
OK, that was not exactly the shortest question, and in order to understand what "wonderful format" means, you'd have to read my previous post on Assumptions About Data. But instead, maybe you'd like to take a minute to look at some of the work from our previous phase of ENRS work, where we focused on two seemingly unrelated aspects of ENRS technology:
- The user experience (UX) of a Web application that local election officials could provide to help ordinary folks visualize and navigate complex election results information.
- A web services API that would enable other folks' systems (not elections officials') to receive and use the data in a manner that's sufficiently flexible for a variety of other services, ranging from professional data mining to handy mobile apps.
They're related because the end results embodied a set of assumptions about available data.
Now we're seeing that this type of data is available, and we're trying to prove with software prototyping that many people (not just elections organizations, and not just the TrustTheVote Project) could do cool things with that data.
There's a bit more to say -- or rather, to show and tell -- that should fit in one post, so I'll conclude next time.
PS: Oh, there is one more small thing: we've had a bit of an "ah-ha" here in the Core Team, prodded by our peeps on the Project Outreach team. This data, and the apps and services that can leverage it for all kinds of purposes, has uses far beyond the night of an election. And we mentioned that once before, but the ah-ha is that what we're working on is not just about election night results... it's about all kinds of election results reporting, anytime, anywhere. And that means ENRS is really not that good of a code name or acronym. Watch as "ENRS" morphs into "E2RP" for our internal project name -- Election Results Reporting Platform.
In a previous post I said that our ENRS project is basically an effort to investigate a set of assumptions about how the reporting of election results can be transformed with innovations right at the source -- in the hands of the local election officials who manage the elections that create the data. One of those assumptions is that we -- and I am talking about election technologists in a broad community, not only the TrustTheVote Project -- can make election data standards that are important in five ways:
- Flexible to encompass data coming from a variety of elections organizations nationwide.
- Structured to accommodate the raw source data from a variety of legacy and/or proprietary systems, feasibly translated or converted into a standard, common data format.
- Able to simply express the most basic results data: how many votes each candidate received.
- Able to express more than just winners and losers data, but nearly all of the relevant information that election officials currently have but don't widely publish (i.e., data on participation and performance).
- Flexible to express detailed breakdowns of raw data, into precinct-level data views, including all the relevant information beyond winners and losers.
Hmm. It took a bunch of words to spell that out, and for everyone but election geeks it may look daunting. To simplify, here are three important things we're doing to prove out those assumptions to some extent.
- We're collecting real election results data from a single election (November, 2012) from a number of different jurisdictions across the country, together with supporting information about election jurisdictions' structure, geospatial data, registration, participation, and more.
- We're learning about the underlying structure of this data in its native form, by collaborating with the local elections organizations that know it best.
- We're normalizing the data, rendering it in a standard data format, and using software to crunch that data, in order to present it in a digestible way to regular folks who aren't "data geeks."
And all of that comprises one set of assumptions we're working on; that is, we're assuming all of these activities are feasible and can bear fruit in an exploratory project. Steady as she goes; so far, so good.
In my last post, I said that the time is right for breaking the logjam in election results reporting, enabling a big reload on technology for reporting and a big increase in public transparency. Now, let me explain why, starting with the biggest of several reasons: elections data standards are needed to define common data formats into which a variety of results data can be converted.
Those standards are emerging now, and previously the lack of them was a real problem.
- We can't reasonably expect a local elections office to take additional efforts to publish the data, or otherwise serve the public with election results services, if the result will be just one voice in a Babel of dozens of different data languages and dialects.
- We can't reasonably expect a 3rd party organization to make use of the data from many sources, unless it's available in a single standard format, or they have the wherewithal to do huge amounts of work on data conversion, repeatedly.
The good news is that election data standards have come a long way in the last couple of years, due to:
- Significant support from the U.S. Government's standards body -- the National Institute of Standards and Technology (NIST);
- Sustained effort from the volunteers working in standards committees in the international standards body -- the IEEE 1622 Working Group; and
- Practical experience with evolving de facto standards, particularly with the data formats and services of the Pew Voting Information Project (VIP), and the several elections organizations that participate in providing VIP data.
There are other reasons why the time is right, but they are more widely understood:
- We now have technologies that perennially understaffed and underfunded elections organizations can feasibly adopt quickly and cheaply, including powerful web application frameworks, supported by cloud hosting operations, within a growing ecosystem of web services that enable many organizations to access a variety of data and apps.
- "Open government," "open data," and even "big data" are buzz phrases now commonly understood, which describe a powerful and maturing set of technologies and IT practices. This new language of government IT innovation facilitates actionable conversations about the opportunity to provide the public with far more robust information on elections and their participation and performance.
It's a "promised land" of government IT and the so-called Gov 2.0 movement (arguably more like Gov 3.0 when you think about it: 2.0 was all about collaboration, while 3.0 is becoming all about the "utility web" -- real apps available on demand -- a direction some of these services will inevitably take). However, for election technology in the near term, we first have to cross the river by learning how to "get the data out" (and that is more like Gov 2.0). More next time on our assumptions about how that river can be crossed, and our experiences to date on doing that crossing.
Long lines at the polling place are becoming a thorn in our democracy. We realized a few months ago that our elections technology framework data layer could provide information that when combined with community-based information gathering might lessen the discomfort of that thorn. Actually, that realization happened while hearing friends extol the virtues of Waze. Simply enough, the idea was crowd-sourcing wait information to at least gain some insight on how busy a polling place might be at the time one wants to go cast their ballot.
Well, to be sure, lots of people are noodling around lots of good ideas and there is certainly no shortage of discussion on the topic of polling place performance. And, we’re all aware that the President has taken issue with it and after a couple of mentions in speeches, created the Bauer-Ginsberg Commission. So, it seems reasonable to assume this idea of engaging some self-reporting isn’t entirely novel.
After all, it's kewl to imagine being able to tell -- in real time -- what the current wait time at the polling place is, so a voter can avoid the crowds, or a news organization can track the hot spots of long lines. We do some "ideating" below, but first I offer three observations from our noodling:
- It really is a good idea; but
- There’s a large lemon in it; yet
- We have the recipe for some decent lemonade.
Here’s the Ideation Part
Wouldn’t it be great if everybody could use an app on their smarty phone to say, “Hi All, it's me, I just arrived at my polling place, the line looks a bit long,” and then later, “Me again, OK, just finished voting, and geesh, like 90 minutes from start to finish… not so good,” or “Me again, I’m bailing. Need to get to airport.”
And wouldn’t it be great if all that input from every voter was gathered in the cloud somehow, so I could look-up my polling place, see the wait time, the trend line of wait times, the percentage of my precinct’s non-absentee voters who already voted, and other helpful stuff? And wouldn’t it be interesting if the news media could show a real time view across a whole county or State?
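As a toy sketch of just the gathering-in-the-cloud piece of that idea (every name here is invented), self-reported waits could be pooled per polling place. Using the median rather than the mean even buys a little natural resistance to a few wildly wrong reports:

```python
from collections import defaultdict
from statistics import median

def wait_estimates(reports):
    """reports: iterable of (polling_place_id, minutes_waited) tuples,
    as self-reported by voters. Returns a per-place wait estimate."""
    by_place = defaultdict(list)
    for place, minutes in reports:
        by_place[place].append(minutes)
    # Median resists a handful of outlier (or abusive) reports better than mean.
    return {place: median(waits) for place, waits in by_place.items()}

# Hypothetical self-reports from two polling places:
reports = [("pp-12", 90), ("pp-12", 75), ("pp-12", 80), ("pp-7", 10)]
```

Of course, a robust median only helps against a handful of bad reports, which is exactly where the next paragraphs pick up.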
Well, if you’re reading this, I bet you agree: “Yes, yes it would.” Sure. Except for one thing. To be really useful, it would have to be accurate. And if there is a question about accuracy... (ah shoot, ya know where this is going, don-cha?) Yes, there is always that Grinch called “abuse.”
Sigh. We know from recent big elections that apparently, partisan organizations are sometimes willing to spend lots of money on billboard ads, spam campaigns, robo-calls, and so on, to actually try to discourage people from going to the polls, within targeted locales and/or demographics. So, we could expect this great idea, in some cases, to fall afoul of similar abuse. And that’s the fat lemon.
But, please read on.
Now, we can imagine some frequent readers spinning up to accuse us of wanting everything to be perfectly secure, of letting the best be the enemy of the good, and noting that nothing will ever be accomplished if first every objection must be overcome. On other days, they might be right, but not so much today.
We don’t believe this polling place traffic monitoring service idea requires the invention of some new security, or integrity, or privacy stuff. On the other hand, relying on the honor system is probably not right either. Instead, we think that in real life something like this would have a much better chance of launch and sustained benefit, if it were based on some existing model of voters doing mobile computing in responsible way that’s not trivial to abuse like the honor system.
And that leads us to the good news – you see, we have such an existing model, in real life. That’s the new ingredient, along with that lemon above, and a little innovative sugar, for the lemonade that I mentioned.
Stay tuned for Part 2, and while waiting you might glance at this.
Many thanks to the engaged audience for OSDVer Anne O'Flaherty's presentation yesterday at National Institute of Standards and Technology (NIST), which hosted a workshop on Common Data Formats (CDFs) and standards for data interchange of election data. We had plenty to say, based on our 2012 work with Virginia State Board of Elections (SBE), because that collaboration depends critically on CDFs. Anne and colleagues did a rather surprising amount of data wrangling over many weeks to get things all hooked up right, and the lessons learned are important for continuing work in the standards body, both NIST and the IEEE group working on CDF standards.
As requested by the attendees, here are online versions of the poster and the slides for the presentation "Bringing Transparency to Voter Registration and Absentee Voting."
An esteemed colleague noted the news of the USPS stopping weekend delivery, as part of a trend of slow demise of the USPS, and asked: will we get to the point where vote-by-mail is vote-by-Fedex? And would that be bad, having a for-profit entity acting as the custodian for a large chunk of the ballots in an election? The more I thought about it, the more flummoxed I was. I had to take off the geek hat and dust off the philosopher hat, looking at the question from a viewpoint of values, rather than (as would be my wont) requirements analysis or risk analysis. It goes like this ...
I think that Phil's question is based on an assumption of some shared values among voters -- all voters, not just those that vote by mail -- that make postal voting acceptable because ballots are a "government thing" and so is postal service. Voting is in part an act of faith in government to be making a good faith effort to do the job right, and to keep the operations above a minimum acceptable level of sanity. It "feels OK" to hand a marked ballot to my regular neighborhood post(wo)man, but not to some stranger dropping off a box from a delivery truck. Translate from value to feeling to expectation: it's implied that we expect USPS staff to know that they have a special government duty in delivering ballots, and to work to honor that duty, regarding the integrity of those special envelopes as a particular trust, as well as their timely delivery.
Having re-read all that, it sounds so very 20th century, almost as antique as lever machines for voting.
I don't really think that USPS is "the government" anymore, not in the sense that the journey of a VBM ballot is end-to-end inside a government operation. I'm not sure that Fedex or UPS are inherently more or less trustworthy. In fact they all work for each other now! And certainly in some circumstances the for-profit operations may feel more trustworthy to some voters -- whether because of bad experiences with USPS, or because of living overseas in a country that surveils US citizens and operates the postal service.
Lastly, I think that many people do share the values behind Phil's question -- I know I do. The idea makes me wobbly. I think it comes down to this:
- If you're wobbly on for-profit VBM, then get back into the voting booth, start volunteering to help your local election officials, and if they are effectively outsourcing any election operations to for-profit voting system vendors, help them stop doing so.
- If you're not wobbly, then you're part of a trend toward trusting -- and often doing -- remote voting with significant involvement from for-profit entities - and we know where that is headed.
The issue with USPS shows that in the 21st century, any form of remote voting will involve for-profits, whether it is Fedex for VBM, or Amazon cloud services for i-voting. My personal conclusions:
- Remote voting is lower integrity no matter what, but gets more people voting because in-person voting can be such a pain.
- I need to redouble my efforts to fix the tech so that in-person voting is not only not a pain, but actually more desirable than remote voting.
I hate to see news outlets casting the issues of voter registration and ready access to the voting booth in a partisan political light. But don't give up on the NYT article Waiting Times at Ballot Boxes Draw Scrutiny despite its partisan lead sentence. I rarely do political commentary, but I'll do a little today, specifically in the context of this article, which is very revealing about a traditional -- and I think healthy -- polarity of the American political tradition. One side of the polarity sees issues like this not as a problem per se, but as a defect in the implementation of current rules. You might call this a "conservative" side of American political problem solving -- don't change the rules, but do act to improve the way that they're put into practice. In this case, for this view, the issue of long lines at polling places is an issue of capacity that's "very easily handled" as NYT quoted Sen. Grassley. The implied solution is more and smaller precincts, more polling places, more voting stations in the polling places, and faster procedures for voter check-in. (I should add that in the latter case, there are tech solutions, like our DIY voter check-in via the Voter Portal we did in 2012, or the Digital Pollbook project of this year.)
That really comes down to pumping more money into existing election operations.
The other side of the polarity, which you might call "progressive", sees issues like this as a problem that needs a solution by changing the current rules, which currently define a system that's not working. Further, the progressive sees such rule changes as an inherent part of a process of evolution of rules (a progression). In this case, for this view, traditional polling place operations are inherently flawed in practice; empirically, we have seen that they lead to hot spots where people wait a long time to check in to vote. The implied solution approach is to create or promote more changes in the voting processes -- early voting, voting centers, more absentee voting, approaches to absentee voting that don't depend on the USPS, separation of Federal elections from those messy local elections with mile-long ballots, … lots of ideas. (And some of them are partly technologically enabled!)
That really comes down to coming up with potentially a lot of money to run new programs to use these new voting methods.
I'm a conservative by temperament: don't fix it unless you're sure it's broken, try some incremental tweaks before replacing it, keep it simple, every change brings in a host of unintended side effects, better the devil you know, and so on. But I like both of these approaches, because they both have a common factor -- increased Federal spending on Federal elections. Believe me the local election officials need it!
Lastly, there's also a tech analogy in the nethead vs. bellhead polarity. Netheads see problems not as structural but in terms of capacity and scale; if your network isn't working right, throw in some more network capacity and compute capacity, but conserve the simplicity of the current structure. Bellheads want to implement progressively more careful control systems. It's a long story about how we now have both in a sort of wave/particle duality, but for this political issue, the geek approach of "both!" is really easy to say: Throw lots more resources at the long lines problem in the current system (line up to vote in person in one specific place on election day), while at the same time doing more with alternative methods; see what happens, and use the results to drive better application of more resources where that has a positive effect, and also use the results to drive more progress in tuning the new approaches (alternate voting methods) in parallel to the existing system with its preserved simplicity. I rarely expect the political process to be informed by the geek view, but there it is.
So, in regard to this supposed political tussle coming, I say: "I hope you both win!"
In this New Year, there are so many new opportunities for election tech work that our collective TrustTheVote head is spinning. But this week anyway, we're focused on next steps in our online voter registration (OVR) work -- planning sessions last week, meetings with state election officials this week, and I hope as a result, a specific plan of action on what we will call "Rocky 4". To refresh readers' memory, Rocky is the OVR system that spans several organizations:
- At OSDV, we developed and maintain the Rocky core software;
- RockTheVote adopted it and continues to adopt extensions to it;
- RockTheVote also adapts the Rocky technology to its operational environment (more on that below, with private-label and API);
- Open Source Labs operates Rocky's production system, and a build and test environment for new software releases;
- Several NGOs that are RockTheVote partners also use Rocky as their own OVR system, essentially working with RTV as a public service (no fees!) provider of OVR as an open-source application-as-a-service;
- For a growing list of states that do OVR, Rocky integrates with the state OVR system, to deliver to it the users that RTV and these various other NGOs have connected to online as a result of outreach efforts.
With that recap in mind, I want to highlight some of the accomplishments that this collective of organizations achieved in 2012, which paved the way for more cool stuff in 2013.
- All told, this group effort resulted in over a million -- 1,058,994 -- voter registration applications completed.
- Dozens of partner organizations used Rocky to register their constituents, with the largest and most active being Long Distance Voter.
- We launched a private-label capability in Rocky (more below) that was used for the first time this summer, and the top 3 out of 10 private-label partners registered about 84,000 voters in the first-time use of this new Rocky feature, in a period of about 12 weeks.
- We launched an API in Rocky (more below), and the early adopter organizations registered about 20,000 voters.
That's what I call solid work, with innovative election technology delivering substantial public benefit.
Lastly, to set the stage for upcoming news about what 2013 holds, let me briefly explain 2 of the new technologies in 2012, because they're the basis for work in 2013. Now, from the very beginning of Rocky over 3 years ago, there was a feature called "partner support" where a 3rd party organization could do a little co-branding in the Rocky application, get a URL that they could use to direct their users to Rocky (where the users would see the 3rd party org's logo), and all the resulting registration activity's stats would be available to the 3rd party org.
The Rocky API - But suppose that you're in an organization that has not just its own web site, but a substantial in-house web application? Suppose that you want your web application to do the user interaction? Well, the Rocky Application Programming Interface (API) is for just that. Your application does all the UI work, and when it's time to create a PDF for the voter to download, print, sign, and mail, your web app calls the Rocky API to request that, and gets the results back. (There's an analogous workflow for integrating with the state OVR systems for paperless online registration.) The Rocky backend does all the database work, PDF generation, state integration, stats, and reporting, and the API also allows you to pull back stats if you don't want to manually use the Partners' web interface of Rocky.
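To make the division of labor concrete, here is a minimal Python sketch of that workflow. To be clear: the field names, the JSON shape, and the `pdf_url` response key are all illustrative assumptions for this post, not the actual Rocky API contract.

```python
import json

# Hypothetical request/response shapes -- illustrative assumptions only,
# not the real Rocky API.
def build_registration_request(registrant):
    """Serialize the registrant data that your own UI collected into the
    JSON body you'd POST to the backend, which handles the database work,
    PDF generation, and state integration."""
    return json.dumps({"registration": registrant}, sort_keys=True)

def extract_pdf_url(response_body):
    """Pull the download URL for the generated PDF out of the (assumed)
    JSON response, so the voter can download, print, sign, and mail it."""
    return json.loads(response_body)["pdf_url"]

body = build_registration_request({"name": "Pat Voter", "state": "VA"})
url = extract_pdf_url('{"pdf_url": "https://example.org/forms/123.pdf"}')
print(url)
# https://example.org/forms/123.pdf
```

The point of the design is that the partner keeps full control of the user experience, while all the fiddly, regulated parts (form generation, state hand-off, reporting) stay in one maintained backend.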
Rocky Private Label - But suppose instead that you want something like that, but you don't actually want to run your own web application. Instead, you want a version of Rocky that's customized to look like a web property of your organization, even though it is operated by RockTheVote. That's what the private-label feature set is for. To get an idea of what it looks like, check out University of CA Student Association's private-label UI on Rocky, here.
That's the quick run-down on what we accomplished with Rocky in 2012, and some of the enabling technology for that. I didn't talk much about integration with state OVR systems, because enhancements to the 2012 "training wheels" are part of what we're up to now -- so more on that to come RSN.
And on behalf of all my colleagues in the TrustTheVote Project and at the OSDV Foundation, I want to thank RockTheVote, Open Source Labs, all the RTV partners, and last but not least several staff at state election offices, for making 2012 a very productive year in the OVR part of OSDV's work.
In my last post, I said that we might be onto something, an idea for many of the benefits of universal automatic permanent voter registration, without the need for Federal-plus-50-states overhaul of policy, election law, and election technology that would be required for actual UAP VR. Here is a sketch of what that might be. I think it's interesting not because of being complex or clever -- which it is not -- but because it is sufficiently simple and simple-minded that it might feasibly be used by real election officials who don't have the luxury to spend money to make significant changes to their election administration systems. (By the way, if you're not into tales of information processing systems, feel free to skip to the punchline in the last paragraph.) Furthermore -- and this is critical -- this idea is simple enough that a proof of concept system could be put into place quite quickly and cheaply. And in election tech today, that's critical. To paraphrase the "show me" that we hear often: don't just tell me ideas for election tech improvements; show me something I can see, touch, and try, that shows that it would work in my current circumstances. With input from some election officials about what they'd need, and what that "show me" would be, here is the basic idea ...
The coordination of existing databases that A.G. Holder called for would actually be a new system, a "federated database" that does not try to coordinate every VR status change of every person, but instead enables a best-efforts distribution of advisory information from various government organizations, to participating election officials who work on those two important principles that I explained in my last post. This is not a clearing-house, not a records matching system, but just something that distributes info about events.
Before I explain what the events could be and how the sharing happens, let me bracket the issue of privacy. Of course all of this should be done in a privacy-protecting way with anonymized data, and of course that's possible. But whenever I say "a person with a DOB of X" or something like that, remember that I am really talking about some DOB that is one-way-hashed for privacy. Secondly, for the sake of simple explanation, I'm assuming that SSN and DOB can be used as a good-enough nearly-unique identifier for these purposes, but the scheme works pretty much the same with other choices of identifying information. (By the way, I say nearly-unique because it is not uncommon for a VR database to have two people with the same SSN because of data-entry typos, hand-writing issues, and so forth.)
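Here is a small sketch of the one-way hashing just described. Everything specific in it is an assumption for illustration: a real deployment would need an agreed, secret salt shared by participants, because a plain hash of an SSN (a 9-digit number) is trivially brute-forceable.

```python
import hashlib

# Illustrative placeholder -- a real system would use a secret salt
# agreed among the participating organizations.
SHARED_SALT = b"agreed-between-participants"

def privacy_id(ssn, dob):
    """Derive a stable, anonymized identifier from SSN + date of birth.
    The same person yields the same digest everywhere, so notifications
    can be matched to voter records, but the digest cannot be reversed
    to recover the SSN."""
    material = SHARED_SALT + ssn.encode() + b"|" + dob.encode()
    return hashlib.sha256(material).hexdigest()

# Two notifications about the same person produce the same identifier:
a = privacy_id("123-45-6789", "1970-01-01")
b = privacy_id("123-45-6789", "1970-01-01")
assert a == b
```

So when the text below says "a person with a DOB of X", picture a digest like `a`, not the raw date.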
To explain this system, I'll call it "Holder" both because of the A.G. and because I like the idea that everything in this system is a placeholder for possible VR changes, rather than anything authoritative. And because this is a Federal policy goal, I'll tell a story that involves Federal activity to share information with states -- and also because right now that's one of the sources of info that states don't actually have today!
Now, suppose that every time a Federal agency -- say the IRS or HHS -- did a transaction with a person that involved the person's address, that agency posted a notification into "Holder" saying that on date D, a person with SSN and DOB of X and Y claimed a current address of Z. This is just a statement of what the agency said the person said, and isn't trying to be a change-of-address. And it might, but needn't always, include an indication of what type of transaction occurred. The non-authoritative part is important. Suppose there's a record where the X and Y match a registered voter Claire Cornucopia of 1000 Chapel St., New Haven CT, but the address is not in CT. The notification might indicate a change of address, but it might be a mistake too. Just today I got mail from a government organization that had initially sent it to a friend of mine in another state. Stuff happens.
State VR operators could access "Holder" to examine this stream of notifications to find cases where it seems to be about a voter that is currently registered in that state, or isn't but possibly should be. If there is a notification that looks like a new address for an existing voter, then they can reach out to the voter -- for example, email, postal mail to the current address on file, postal mail to the possibly new address. In keeping with current U.S. practice:
- it is up to the voter to maintain their voter record;
- election officials must update a record when a voter sends a change;
- without info from a voter, election officials can change a record only in specific ways authorized by state election law.
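A state's scan of the notification stream could be as simple as the following sketch. The record shapes and field names are illustrative assumptions (in a real system the roll would be keyed by the anonymized hashed identifier, and the output would feed an outreach workflow, not a print statement).

```python
# Hypothetical record shapes for the "Holder" stream and a state voter
# roll -- every field name here is an illustrative assumption.
voter_roll = {
    # keyed by the anonymized (hashed) SSN+DOB identifier
    "id-claire": {"name": "Claire Cornucopia",
                  "address": "1000 Chapel St., New Haven CT"},
}

notifications = [
    {"person_id": "id-claire", "date": "2013-02-01",
     "claimed_address": "55 Elm Ave., Boston MA"},
    {"person_id": "id-unknown", "date": "2013-02-02",
     "claimed_address": "9 Oak Rd., Hartford CT"},
]

def outreach_candidates(notifications, voter_roll):
    """Flag registered voters whose claimed address in a notification
    differs from the address on file. The result is advisory only:
    officials may reach out to the voter, never silently change the record."""
    flagged = []
    for note in notifications:
        voter = voter_roll.get(note["person_id"])
        if voter and voter["address"] != note["claimed_address"]:
            flagged.append((voter["name"], note["claimed_address"]))
    return flagged

print(outreach_candidates(notifications, voter_roll))
# [('Claire Cornucopia', '55 Elm Ave., Boston MA')]
```

Note that the unmatched notification is simply ignored here; a fuller version might flag it as a possible new, unregistered resident, which is scenario 2 below.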
The point here is to make it easier for election officials to find out that a person might need to take some action, and to help that person do so. The helping part is a separate matter, including online voter services, but conceivably, this type of system would work (albeit with a lower participation rate) even in a system limited to postal mail to voters asking them to fill out a paper form and mail it back.
Next, let's imagine the scenarios that this system might enable, in terms of the kinds of outreach that a voter could receive, not limited to change of address as I described above.
- "Hey, it looks like you changed your mailing address - does that mean that you changed your residence too? If so, here is how you should update your voter record …"
- "Hey, it looks like you now live in the state of XX but aren't registered to vote - if so, here is what you should do to find out if you're eligible to vote … …"
- "Hey, it looks like you just signed up for selective service - so you are probably eligible to vote too, and here is what you should do …"
Number 3 -- and other variations I am sure you can think of -- is especially important as a way to approximate the "automatic" part of A.G. Holder's policy recommendation, while number 1 is the "permanent" part, and number 2 is part of both.
With just a little trial-ballooning to date, I'm fairly confident that this "Holder" idea would complement existing VR database maintenance work, and has the potential to connect election officials with a larger number of people than they currently connect with. And I know for sure that this does not require election officials to change the existing way that they manage voter records. But what about technical feasibility, cost, and so on? Could it pass the "show me" test?
Absolutely, yes. We've done some preliminary work on this, and it would be the work of a few weeks to set up the federated database, and the demo systems that show how Federal and state organizations would interact with it. But I don't mean that it would be a sketchy demo. In fact, because the basic concept is so simple, it would be a nearly complete software implementation of the federated database and all the interactions with it. Hypothetically, if there were a Federal organization that would operate "Holder", and enough states that agreed that its interface met their needs for getting started, a real "Holder" system could be set up as quickly as that organization could amend a services agreement with one of its existing I.T. service provider organizations, and set up MOUs with other Federal agencies.
Which is, of course, not exactly "quick," but the point is that the show-me demonstrates that the enabling technology exists in an immediately usable (and budgetable) form, to justify embarking on the other 99% of the work that is not technology work. Indeed, you almost have to have the tech part finished before you can even consider the rest of it; an idea by itself will not do.
Lastly, is this reasonable or are we dreaming again? Well, let's charitably say that we are dreaming the same voting rights dream that A.G. Holder has, and we're here to say, from the standpoint of election technology, that we could do the tech part nearly overnight, in a way that enables adoption requiring much administrative activity, but no legal or legislative activity. For techies, that's not much of a punchline, but for policy folks who want to "fix that" quickly, it may be a very pleasant surprise.