We're helping to form a new study group on online voter registration data and protocol standards. This week, we're getting started. You need not be a geek, a member of the standards working groups, or any kind of a techie to get involved. Here's the opportunity to engage...
This post is more technical than most here, given the broadening audience visiting this Foundation web site in search of content like the article below, rather than hanging out on our more geeky Project site (which is soon to be relaunched and will be far more engaging for all audiences, we're excited to report). Usually you will find this kind of content over there, while here we'll talk more about voting experience innovations, policy matters, and progress of the Project. So, for those who are passionate about elections reform and improving the voting experience but are not as fluent in some of the technical issues, feel free to look this over, but do not fret if it seems like gobbledygook. There is more relevant material for your concerns to come. Ready? Here we go...
Elections data standards are essential to delivering real innovation. The annual Election Data Standards meeting opened today in Los Angeles, CA. We thought we'd give you an overview of just what in the heck this is about and why it's essential to creating a voting experience that's easy, convenient, and dare we say delightful. Dry? Kinda. But it's a peek at the real, in-the-trenches work we're doing. Yep.
The TrustTheVote Project Core Team has been hard at work on the Alpha version of VoteStream, our election results reporting technology. They recently wrapped up a prototype phase funded by the Knight Foundation, and then forged ahead a bit to incorporate data from additional counties, provided by participating state or local election officials after the official wrap-up.
Along the way, there have been a series of postings here that together tell a story about the VoteStream prototype project. They start with a basic description of the project in Towards Standardized Election Results Data Reporting and Election Results Reload: the Time is Right. Then there was a series of posts about the project’s assumptions about data, about software (part one and part two), and about standards and converters (part one and part two).
Of course, the information wouldn't be complete without a description of the open-source software prototype itself, provided in Not Just Election Night: VoteStream.
Actually, the project was as much about data, standards, and tools as about software. On the data front, there is a general introduction to a major part of the project's work in "data wrangling" in VoteStream: Data-Wrangling of Election Results Data. After that were more posts on data wrangling, quite deep in the data-head shed -- but still important, because each one is about the work required to take real election data and real election results data from disparate counties across the country, and fit them into a common data format and common online user experience. The deep data-heads can find quite a bit of detail in three postings about data wrangling, in Ramsey County, MN, in Travis County, TX, and in Los Angeles County, CA.
Today, there is a VoteStream project web site with VoteStream itself and the latest set of multi-county election results, but also with some additional explanatory material, including the election results data for each of these counties. Of course, you can get that from the VoteStream API or data feed, but there may be some interest in the actual source data. For more on those developments, stay tuned!
If you've read some of the ongoing thread about our VoteStream effort, it's been a lot about data and standards. Today is more of the same, but first with a nod that the software development is going fine as well. We've come up with a preliminary data model, gotten real results data from Ramsey County, Minnesota, and developed most of the key features in the VoteStream prototype, using the TrustTheVote Project's Election Results Reporting Platform. I'll have plenty to say about the data-wrangling as we move through several different counties' data. But today I want to focus on a key structuring principle that works both for data and for the work that real local election officials (LEOs) do before an election, during election night, and thereafter.
Put simply, the basic structuring principle is that the election definition comes first, and the election results come later and refer to the election definition. This principle matches the work that LEOs do, using their election management system to define each contest in an upcoming election, define each candidate, and so on. The result of that work is a data set that both serves as an election definition, and also provides the context for the election by defining the jurisdiction in which the election will be held. The jurisdiction is typically a set of electoral districts (e.g. a congressional district, or a city council seat), and a county divided into precincts, each of which votes on a specific set of contests in the election.
Our shorthand term for this dataset is JEDI (jurisdiction election data interchange), which is all the data about an election that an independent system would need to know. Most current voting system products have an Election Management System (EMS) product that can produce a JEDI in a proprietary format, for use in reporting, or ballot counting devices. Several states and localities have already adopted the VIP standard for publishing a similar set of information.
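As a rough illustration of that structuring principle, here is a minimal sketch in Python of the kinds of entities a JEDI carries. The class and field names are illustrative inventions, not the VIP schema; the point is only the relationship described above: a precinct votes on exactly the contests of the districts it sits in.

```python
from dataclasses import dataclass, field

@dataclass
class District:
    district_id: str
    name: str
    level: str  # "federal", "state", or "local"

@dataclass
class Contest:
    contest_id: str
    office: str        # e.g. "County Clerk"
    district_id: str   # the district whose voters decide this contest
    candidates: list[str] = field(default_factory=list)

@dataclass
class Precinct:
    precinct_id: str
    name: str
    district_ids: list[str] = field(default_factory=list)  # districts it falls in

@dataclass
class Jurisdiction:
    name: str
    districts: list[District] = field(default_factory=list)
    precincts: list[Precinct] = field(default_factory=list)
    contests: list[Contest] = field(default_factory=list)

    def contests_for_precinct(self, precinct_id: str) -> list[Contest]:
        """A precinct votes on exactly the contests of the districts it sits in."""
        p = next(pr for pr in self.precincts if pr.precinct_id == precinct_id)
        return [c for c in self.contests if c.district_id in p.district_ids]
```

Notice that nothing here is a vote count: the whole structure is election definition and jurisdictional context, which results data can later refer to.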
We've adopted the VIP format as the standard that we'll be using in the TrustTheVote Project. And we're developing a few modest extensions to it that are needed to represent a full JEDI that meets the needs of VoteStream, or really any system that consumes and displays election results. All extensions are optional and backwards compatible, and we'll be submitting them as suggestions when we think we have a full set. So far, it's pretty basic: the inclusion of geographic data that describes a precinct's boundaries, and a use of existing metadata to note whether a district is a federal, state, or local district.
So far, this is working well, and we expect to be able to construct a VIP-standard JEDI for each county in our VoteStream project, based on the extant source data that we have. The next step, which may be a bit more hairy, is a similar standard for election results with the detailed information that we want to present via VoteStream.
PS: If you want to look at a small artificial JEDI, it's right here: Arden County, a fictional county that has just 3 precincts, about a dozen districts, and a November 2012 election. It's short enough that you can page through it and get a feel for what kinds of data are required.
Last time, I explained how our VoteStream work depends on the 3rd of 3 assumptions: loosely, that there might be a good way to get election results data (and other related data) out of their current hiding places, and into some useful software, connected by an election data standard that encompasses results data. But what are we actually doing about it? Answer: we are building prototypes of that connection, and the lynchpin is an election data standard that can express everything about the information that VoteStream needs. We've found that the VIP format is an existing, widely adopted standard that provides a good starting point. More details on that later, but for now the key words are "converters" and "connectors". We're developing technology that proves the concept that anyone with basic data modeling and software development skills can create a connector, or data converter, that transforms election data (including but most certainly not limited to vote counts) from one of a variety of existing formats, to the format of the election data standard.
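To make the converter idea concrete, here is a hedged sketch in Python. The legacy column names are hypothetical stand-ins for one county's export format (every county's differs), and the "standard" keys are simplified placeholders rather than the actual VIP element names; the point is only that a thin mapping layer can translate a legacy export into a common record shape.

```python
import csv
import io

# Hypothetical column names from one county's legacy results export,
# mapped to simplified stand-ins for the standard's field names.
LEGACY_FIELDS = {"RaceTitle": "contest", "CandName": "candidate",
                 "PrecinctCode": "precinct", "Votes": "count"}

def convert_legacy_csv(text: str) -> list[dict]:
    """Map one legacy results export onto a common record shape.

    Each output record uses the standard keys (contest, candidate,
    precinct, count) regardless of what the source format called them.
    """
    records = []
    for row in csv.DictReader(io.StringIO(text)):
        rec = {std: row[src] for src, std in LEGACY_FIELDS.items()}
        rec["count"] = int(rec["count"])  # counts become integers, not strings
        records.append(rec)
    return records
```

A real converter would of course carry far more than vote counts (district metadata, precinct geography, and so on), but the pattern is the same: one small mapping per source format, all converging on one standard.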
And this is the central concept to prove -- because as we've been saying in various ways for some time, the data exists but is locked up in a variety of legacy and/or proprietary formats. These existing formats differ from one another quite a bit, and contain varying amounts of information beyond basic vote counts. There is good reason to be skeptical -- to suppose that it is a hard problem to take these different shapes and sizes of square data pegs (and pentagonal, octahedral, and many other shaped pegs!) and put them in a single round hole.
But what we're learning -- and the jury is still out, promising as our experience is so far -- is that all these existing data sets have basically similar elements that correspond to a single standard, and that it's not hard to develop prototype software that uses those correspondences to convert to a single format. We'll get a better understanding of the tricky bits as we go along, making 3 or 4 prototype converters.
Much of this feasibility rests on a structuring principle that we've adopted, which runs parallel to the existing data standard that we've adopted. Much more on that principle, the standard, its evolution, and so on … yet to come. As we get more experience with data-wrangling and converter-creation, there will certainly be a lot more to say.
This evening at 5:00pm members of the TrustTheVote Project have been invited to attend an elections technology round table discussion in advance of a public hearing in Sacramento, CA scheduled for tomorrow at 2:00pm PST on new regulations governing Voting System Certification to be contained in Division 7 of Title 2 of the California Code of Regulations. Due to the level of activity, only our CTO, John Sebes is able to participate.
We were asked if John could be prepared to make some brief remarks regarding our view of the impact of SB-360 and its potential to catalyze innovation in voting systems. These types of events are always dynamic and fluid, and so we decided to publish our remarks below just in advance of this meeting.
Roundtable Meeting Remarks from the OSDV Foundation | TrustTheVote Project
We appreciate an opportunity to participate in this important discussion. We want to take about 2 minutes to comment on 3 considerations from our point of view at the TrustTheVote Project.
For SB-360 to succeed, we believe any effort to create a high-integrity certification process requires re-thinking how certification has been done to this point. Current federal certification, for example, takes a monolithic approach; that is, a voting system is certified based on a complete all-inclusive single closed system model. This is a very 20th century approach that makes assumptions about software, hardware, and systems that are out of touch with today’s dynamic technology environment, where the lifetime of commodity hardware is months.
We are collaborating with NIST on a way to update this outdated model with a "component-ized" approach; that is, a unit-level testing method, such that if a component needs to be changed, the only re-certification required would be of that discrete element, and not the entire system. There are enormous potential benefits including lowering costs, speeding certification, and removing a bar to innovation.
We're glad to talk more about this proposed updated certification model, as it might inform any certification processes to be implemented in California. Regardless, elections officials should consider that in order to reap the benefits of SB-360, the non-profit TrustTheVote Project believes a new certification process, component-ized as we describe it, is essential.
Second, there is a prerequisite for component-level certification that until recently wasn't available: common open data format standards that enable components to communicate with one another; for example, a format for a ballot counter's output of vote tally data that also serves as input to a tabulator component. Without common data formats, elections officials have to acquire a whole integrated product suite that communicates in a proprietary manner. With common data formats, you can mix and match; and perhaps more importantly, incrementally replace units over time, rather than doing what we like to refer to as "forklift upgrades" or "fleet replacements."
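As a toy illustration of why a common format enables mix-and-match, consider the ballot-counter-to-tabulator example. The sketch below is in Python with JSON standing in for whatever the real standard specifies, and all field names are invented; it shows only the principle that any counter emitting the shared record shape can feed any tabulator that reads it.

```python
import json

def counter_output(device_id: str, precinct: str, tallies: dict) -> str:
    """A ballot counter (from any vendor) emits per-contest tallies
    in a shared, openly specified record shape."""
    return json.dumps({"device": device_id, "precinct": precinct,
                       "tallies": tallies})

def tabulate(reports: list) -> dict:
    """A tabulator component (from any other vendor) consumes those
    same records and sums candidate totals per contest."""
    totals: dict = {}
    for report in reports:
        rec = json.loads(report)
        for contest, counts in rec["tallies"].items():
            bucket = totals.setdefault(contest, {})
            for candidate, n in counts.items():
                bucket[candidate] = bucket.get(candidate, 0) + n
    return totals
```

Because the interchange format is the contract, either side can be replaced or re-certified independently, which is exactly the point of the component-ized approach.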
The good news is the scope for ballot casting and counting is sufficiently focused to avoid distraction from the many other standards elements of the entire elections ecosystem. And there is more goodness because standards bodies are working on this right now, with participation by several state and local election officials, as well as vendors present today, and non-profit projects like TrustTheVote. They deserve congratulations for reaching this imperative state of data standards détente. It's not finished, but the effort and momentum is there.
So, elections officials should bear in mind that benefits of SB-360 also rest on the existence of common open elections data standards.
3. Commercial Revitalization
Finally, this may be the opportunity to realize a vision we have that open data standards, a new certification process, and lowered bars to innovation through open sourcing, will reinvigorate a stagnant voting technology industry. Because the passage of SB-360 can fortify these three developments, there can (and should) be renewed commercial enthusiasm for innovation. Such should bring about new vendors, new solutions, and new empowerment of elections officials themselves to choose how they want to raise their voting systems to a higher grade of performance, reliability, fault tolerance, and integrity.
One compelling example is the potential for commodity commercial off-the-shelf hardware to fully meet the needs of voting and elections machinery. To that point, let us offer an important clarification and dispel a misconception about rolling your own. This does not mean that elections officials are about to be left to self-vend. And by that we mean self-construct and support their open, standard, commodity voting system components. A few jurisdictions may consider it, but in the vast majority of cases, the Foundation forecasts that this will simply introduce more choice rather than forcing you to become a do-it-yourself type. Some may choose to contract with a systems integrator to deploy a new system integrating commodity hardware and open source software. Others may choose vendors who offer out-of-the-box open source solutions in pre-packaged hardware.
Choice is good: it’s an awesome self-correcting market regulator and it ensures opportunity for innovation. To the latter point, we believe initiatives underway like STAR-vote in Travis County, TX, and the TrustTheVote Project will catalyze that innovation in an open source manner, thereby lowering costs, improving transparency, and ultimately improving the quality of what we consider critical democracy infrastructure.
In short, we think SB-360 can help inject new vitality in voting systems technology (at least in the State of California), so long as we can realize the benefits of open standards and drive the modernization of certification.
EDITORIAL NOTES: There was chatter earlier this Fall about the extent to which SB-360 allegedly makes unverified, non-certified voting systems a possibility in California. We don't read SB-360 that way at all. We encourage you to read the text of the legislation as passed into law for yourself, and start with this meeting notice digest. In fact, to realize the kind of vision that leading jurisdictions imagine, we cannot, and should not, do away with certification, and we think charges that this is what will happen are misinformed. We simply need to modernize how certification works to enable this kind of innovation. We think our comments today bear that out.
Moreover, have a look at the Agenda for tomorrow's hearing on implementation of SB-360. In sum and substance the agenda is to discuss:
- Establishing the specifications for voting machines, voting devices, vote tabulating devices, and any software used for each, including the programs and procedures for vote tabulating and testing. (The proposed regulations would implement, interpret and make specific Section 19205 of the California Elections Code.);
- Clarifying the requirements imposed by recently chaptered Senate Bill 360, Chapter 602, Statutes 2013, which amended California Elections Code Division 19 regarding the certification of voting systems; and
- Clarifying the newly defined voting system certification process, as prescribed in Senate Bill 360.
Finally, there has been an additional charge that SB-360 is intended to "empower" LA County, such that what LA County may build, they (or someone on their behalf) will sell as voting systems to other jurisdictions. We think this allegation is also misinformed, for two reasons: first, assuming LA County builds their system on open source, there is a question as to what specifically would or could be offered for sale; and second, notwithstanding offering open source for sale (which technically can be done... technically), it seems to us that if such a system is built with public dollars, then it is, in fact, publicly owned. From what we understand, a government agency cannot offer for sale assets developed with public dollars, but they can give them away. And indeed, this is what we've witnessed over the years in other jurisdictions.
In my last post, I said that the time is right for breaking the logjam in election results reporting, enabling a big reload on technology for reporting and a big increase in public transparency. Now, let me explain why, starting with the biggest of several reasons. Elections data standards are needed to define common data formats into which a variety of results data can be converted.
Those standards are emerging now, and previously the lack of them was a real problem.
- We can't reasonably expect a local elections office to take additional efforts to publish the data, or otherwise serve the public with election results services, if the result will be just one voice in a Babel of dozens of different data languages and dialects.
- We can't reasonably expect a 3rd party organization to make use of the data from many sources, unless it's available in a single standard format, or they have the wherewithal to do huge amounts of work on data conversion, repeatedly.
The good news is that election data standards have come a long way in the last couple of years, due to:
- Significant support from the U.S. Government's standards body -- the National Institute of Standards and Technology (NIST);
- Sustained effort from the volunteers working in standards committees in the international standards body -- the IEEE 1622 Working Group; and
- Practical experience with evolving de facto standards, particularly with the data formats and services of the Pew Voting Information Project (VIP), and the several elections organizations that participate in providing VIP data.
There are other reasons why the time is right, but they are more widely understood:
- We now have technologies that perennially understaffed and underfunded elections organizations can feasibly adopt quickly and cheaply, including powerful web application frameworks, supported by cloud hosting operations, within a growing ecosystem of web services that enable many organizations to access a variety of data and apps.
- "Open government," "open data," and even "big data" are buzz phrases now commonly understood, which describe a powerful and maturing set of technologies and IT practices. This new language of government IT innovation facilitates actionable conversations about the opportunity to provide the public with far more robust information on elections and their participation and performance.
It's a "promised land" of government IT and the so-called Gov 2.0 movement (arguably, we think, more like Gov 3.0 when you consider that 2.0 was all about collaboration, while 3.0 is becoming all about the "utility web" -- real apps available on demand -- a direction some of these services will inevitably take). However, for election technology in the near term, we first have to cross the river by learning how to "get the data out" (and that is more like Gov 2.0). More next time on our assumptions about how that river can be crossed, and our experiences to date on making that crossing.
Now that we are a ways into our "Election Night Reporting System" project, we want to start sharing some of what we are learning. We had talked about a dedicated Wiki or some such, but our time was better spent digging into the assignment graciously supported by the Knight Foundation Prototype Fund. Perhaps the best place to start is a summary of what we've been saying within the ENRS team about what we're trying to accomplish. First, we're toying with this silly internal project code name, "ENRS," and we don't expect it to hang around forever. Our biggest gripe is that what we're trying to do extends way beyond the night of elections, but more about that later.
Our ENRS project is based on a few assumptions, or perhaps one could say some hypotheses that we hope to prove. "Prove" is probably a strong word. It might be better to say that we expect our assumptions to be valid, but with practical limitations that we'll discover.
The assumptions are fundamentally about three related topics:
- The nature and detail of election results data;
- The types of software and services that one could build to leverage that data for public transparency; and
- Perhaps most critically, the ability for data and software to interact in a standard way that could be adopted broadly.
As we go along in the project, we hope to say more about the assumptions in each of these areas.
But it is the goal of feasible broad adoption of standards that is really the most important part. There's a huge amount of latent value (in terms of transparency and accountability) to be had from aggregating and analyzing election results data. But most of that data is effectively locked up, at present, in thousands of little lockboxes of proprietary and/or legacy data formats.
It's not as though most local election officials -- the folks who are the source of election results data, as they conduct elections and the process of tallying ballots -- want to keep the data locked up, or to impede others' activities in aggregating results data across counties and states and analyzing it. Rather, most local election officials just don't have the means to "get the data out" in a way that supports such activities.
We believe that the time is right to create the technology to do just that, and enable election officials to use the technology quickly and easily. And this prototype phase of ENRS is the beginning.
Lastly, we have many people to thank, starting with Chris Barr and the Knight Foundation for its grant to support this prototype project. Further, the current work is based on a previous design phase. Our thanks to our interactive design team led by DDO, and the Travis County, TX Elections Team who provided valuable input and feedback during that earlier phase of work, without which the current project wouldn't be possible.
"Why is There a Voting Tech Logjam?" -- that's a good question! A full answer has several aspects, but one of them is the activity (or inactivity) at the Federal level that leads to very limited options in election tech. For a nice pithy explanation of that aspect, check out the current issue of the newsletter of the National Conference of State Legislatures, on page 4. One really important theme addressed there is the opportunity for state lawmakers to make their own decisions about what standards to use, enabling the state's local election officials to make their decisions about what technology to make or adopt -- including purchase, in-house build, and (of course) adoption and adaptation of open-source election technology.
In my last post, I said that we might be onto something, an idea for many of the benefits of universal automatic permanent voter registration, without the need for Federal-plus-50-states overhaul of policy, election law, and election technology that would be required for actual UAP VR. Here is a sketch of what that might be. I think it's interesting not because of being complex or clever -- which it is not -- but because it is sufficiently simple and simple-minded that it might feasibly be used by real election officials who don't have the luxury to spend money to make significant changes to their election administration systems. (By the way, if you're not into tales of information processing systems, feel free to skip to the punchline in the last paragraph.) Furthermore -- and this is critical -- this idea is simple enough that a proof of concept system could be put into place quite quickly and cheaply. And in election tech today, that's critical. To paraphrase the "show me" that we hear often: don't just tell me ideas for election tech improvements; show me something I can see, touch, and try, that shows that it would work in my current circumstances. With input from some election officials about what they'd need, and what that "show me" would be, here is the basic idea ...
The co-ordination of existing databases that A.G. Holder called for would actually be a new system, a "federated database" that does not try to coordinate every VR status change of every person, but instead enables a best-efforts distribution of advisory information from various government organizations, to participating election officials who work on those two important principles that I explained in my last post. This is not a clearing-house, not a records matching system, but just something that distributes info about events.
Before I explain what the events could be and how the sharing happens, let me bracket the issue of privacy. Of course all of this should be done in a privacy-protecting way with anonymized data, and of course that's possible. But whenever I say "a person with a DOB of X" or something like that, remember that I am really talking about some DOB that is one-way-hashed for privacy. Secondly, for the sake of simple explanation, I'm assuming that SSN and DOB can be used as a good-enough nearly-unique identifier for these purposes, but the scheme works pretty much the same with other choices of identifying information. (By the way, I say nearly-unique because it is not uncommon for a VR database to have two people with the same SSN because of data-entry typos, hand-writing issues, and so forth.)
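For the curious, the one-way hashing mentioned above might look like the following sketch. This is an assumption about design, not a specified system: the function name, the field layout, and especially the shared-salt arrangement are all illustrative.

```python
import hashlib

def hashed_id(ssn: str, dob: str, salt: str = "per-deployment-secret") -> str:
    """One-way hash of (SSN, DOB) so the shared system never stores raw values.

    The salt would be a secret shared among participating agencies
    (an assumption here); without one, small input spaces like SSNs
    are trivially brute-forced from the hashes.
    """
    return hashlib.sha256(f"{salt}|{ssn}|{dob}".encode()).hexdigest()

# Two agencies hashing the same person's data get the same identifier,
# so notifications can be matched without ever exchanging the SSN itself.
```

The property that matters for what follows is simply that the hash is stable (same inputs, same identifier) and irreversible.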
To explain this system, I'll call it "Holder," both because of the A.G. and because I like the idea that everything in this system is a placeholder for possible VR changes, rather than anything authoritative. And because this is a Federal policy goal, I'll tell a story that involves Federal activity to share information with states -- and also because right now that's one of the sources of info that states don't actually have today!
Now, suppose that every time a Federal agency -- say the IRS or HHS -- did a transaction with a person that involved the person's address, that agency posted a notification into "Holder" saying that on date D, a person with SSN X and DOB Y claimed a current address of Z. This is just a statement of what the agency said the person said, and isn't trying to be a change-of-address. And it might, but needn't always, include an indication of what type of transaction occurred. The non-authoritative part is important. Suppose there's a record where the X and Y match a registered voter Claire Cornucopia of 1000 Chapel St., New Haven, CT, but the address is not in CT. The notification might indicate a change of address, but it might be a mistake too. Just today I got mail from a government organization that had initially sent it to a friend of mine in another state. Stuff happens.
State VR operators could access "Holder" to examine this stream of notifications and find cases where a notification seems to be about a voter who is currently registered in that state, or who isn't but possibly should be. If there is a notification that looks like a new address for an existing voter, then they can reach out to the voter -- for example, by email, by postal mail to the current address on file, or by postal mail to the possibly new address. In keeping with current U.S. practice:
- it is up to the voter to maintain their voter record;
- election officials must update a record when a voter sends a change;
- without info from a voter, election officials can change a record only in specific ways authorized by state election law.
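Putting the pieces together, the advisory scan a state VR operator might run over the notification stream could be sketched like this. Every field name, the flag vocabulary, and the crude in-state address check are illustrative assumptions; what matters is that the output is advisory only, consistent with the three principles above.

```python
def advisory_flags(notifications: list, voter_rolls: dict, state: str = "CT") -> list:
    """Flag notifications that look relevant to this state's voters.

    notifications: dicts like {"id": <hashed identifier>, "date": ..., "address": ...}
    voter_rolls: maps hashed identifier -> current address on file.
    Returns advisory flags only; a human reaches out to the voter, and
    nothing here changes a voter record automatically.
    """
    flags = []
    for n in notifications:
        on_file = voter_rolls.get(n["id"])
        if on_file is None:
            # Not on the rolls, but claiming an in-state address:
            # possibly an eligible, unregistered person to contact.
            if n["address"].endswith(f", {state}"):
                flags.append(("possibly_unregistered", n))
        elif on_file != n["address"]:
            # On the rolls, but some agency heard a different address:
            # possibly a move, possibly a mistake -- ask the voter.
            flags.append(("possible_move", n))
    return flags
```

Each flag simply becomes an outreach letter or email; the voter record itself only changes if the voter responds.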
The point here is to make it easier for election officials to find out that a person might need to take some action, and to help that person do so. The helping part is a separate matter, including online voter services, but conceivably this type of system would work (albeit with a lower participation rate) even in a system limited to postal mail asking voters to fill out a paper form and mail it back.
Next, let's imagine the scenarios that this system might enable, in terms of the kinds of outreach that a voter could receive, not limited to change of address as I described above.
1. "Hey, it looks like you changed your mailing address - does that mean that you changed your residence too? If so, here is how you should update your voter record …"
2. "Hey, it looks like you now live in the state of XX but aren't registered to vote - if so, here is what you should do to find out if you're eligible to vote …"
3. "Hey, it looks like you just signed up for Selective Service - so you are probably eligible to vote too, and here is what you should do …"
Number 3 -- and other variations I am sure you can think of -- is especially important as a way to approximate the "automatic" part of A.G. Holder's policy recommendation, while number 1 is the "permanent" part, and number 2 is part of both.
With just a little trial-ballooning to date, I'm fairly confident that this "Holder" idea would complement existing VR database maintenance work, and has the potential to connect election officials with a larger number of people than they currently reach. And I know for sure that it does not require election officials to change the existing way they manage voter records. But how about technical feasibility, cost, and so on? Could it pass the "show me" test?
Absolutely, yes. We've done some preliminary work on this, and it is the work of a few weeks to set up the federated database and the demo systems that show how Federal and state organizations would interact with it. But I don't mean that it would be a sketchy demo. In fact, because the basic concept is so simple, it would be a nearly complete software implementation of the federated database and all the interactions with it. Hypothetically, if there were a Federal organization that would operate "Holder," and enough states that agreed its interface met their needs for getting started, a real "Holder" system could be set up as quickly as that organization could amend a services agreement with one of its existing I.T. service providers and set up MOUs with other Federal agencies.
Which is, of course, not exactly "quick" -- but the point is that the show-me demonstrates that the enabling technology exists in an immediately usable (and budgetable) form, justifying embarking on the other 99% of the work, which is not technology work. Indeed, you almost have to have the tech part finished before you can even consider the rest of it; an idea by itself will not do.
Lastly, is this reasonable or are we dreaming again? Well, let's charitably say that we are dreaming the same voting rights dream that A.G. Holder has, and we're here to say from the standpoint of election technology, that we could do the tech part nearly overnight, in a way that enables adoption that requires much administrative activity, but not legal or legislative activity. For techies, that's not much of a punchline, but for policy folks who want to "fix that" quickly, it may be a very pleasant surprise.
In a public speech yesterday, U.S. Attorney General Eric Holder called for universal, automatic voter registration, and stated that current technology can accomplish that, despite the fact that the current system is complex and error-prone. As Reuters reported on Holder's remarks:
By coordinating existing databases, the government could register "every eligible voter in America" and ensure that registration did not lapse during a move.
That's easy to say, but it requires some careful thought to make it easy to do. After some discussion with election officials recently, I've concluded that it is in fact easy to do in tech terms, but not in a way that you might think. To explain, let me first say that one thing that's not going to happen anytime soon is a "federal government takeover" of voter registration. VR will remain a state responsibility for the medium term, I predict.
Second, something that might happen, but would be a bad idea, is the combination of inter-state record matching and automatic registration. Why? Because we've already seen that in some states, recent practice includes automatic de-registration: if a computer's matching algorithm says that you moved from one state to another, you get un-registered in the first state. (Though not registered in the second!) Of course that's a problem if the match is incorrect -- and we've seen plenty of examples of dodgy databases yielding false positive matches -- but it can also be a problem even if it is correct.
Ironically, the most recent instance of that story I've heard personally was from a Yale political science professor who specializes in election observation in other countries, and is keenly aware of voter registration issues as a bar to voting. While retaining her residence in CT, the prof did something that looked to some computer like taking up residence at another address -- result: CT's VR system de-registered her. Not the right way of doing universal, automatic, permanent.
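To put a rough number on how easily that kind of matching goes wrong, here's a back-of-the-envelope sketch (the figures are illustrative assumptions, not data from any actual VR database): if a common full name is shared by roughly 10,000 registered voters, a uniform-birthdate approximation of the classic birthday problem already predicts well over a thousand pairs of distinct people who share both full name and date of birth.

```python
# Back-of-the-envelope: expected number of pairs of DISTINCT people who
# share a full name and a date of birth, assuming birthdates spread
# uniformly over roughly 80 years. Inputs are illustrative assumptions.
def expected_false_matches(people_with_same_name: int, days: int = 365 * 80) -> float:
    # Number of distinct pairs among people with the same name...
    pairs = people_with_same_name * (people_with_same_name - 1) / 2
    # ...divided by the number of possible birthdates gives the expected
    # count of pairs that also share a date of birth.
    return pairs / days

# ~10,000 voters sharing one common name nationwide:
print(round(expected_false_matches(10_000), 1))  # → 1712.2
```

Any one of those pairs looks to a naive matcher like one person who moved; without contacting the voter, no database co-ordination scheme can tell the difference.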
One state election official explained the higher-level issue to me recently with two main points.
1. The current system places responsibility on the citizen to apprise the appropriate government. So when it appears that there has been a change of address, the state VR operators should reach out to the voter in question to get the real story from them. That includes making it easier for voters to quickly find out their VR status and get help on what they can do next. (Which is what we're doing with online VR technology this year.)
2. When deciding what to do about a reported VR change, the responsibility is the election official's, not some computer's. Technology can help suggest to an election official that a voter's record may be out of date, but that should not mean that the voter record should be invalidated, either automatically or with a pro-forma confirmation by a person who has no more information than the computer did. What should the election official do instead? See point #1 above!
In other words, a simple interpretation of Holder's words about database co-ordination can lead to data-mining and matching that is error prone not just because the databases have imperfect information, but also because some of the most important information -- voter's intent -- is not in the database, for example "I did a postal address forwarding from my CT home to a DC address not because I moved but because I'm visiting for several weeks and don't want to miss my mail."
So that got me thinking about functional requirements - surprise, techie thinks about requirements not policies! - and we came up with a way to use those two principles to deliver many of the benefits of universal automatic permanent registration, without actually changing election laws and overhauling existing voter database systems. What's required is an inter-government information sharing system:
- that can notify state VR system operators about events that are possibly relevant to VR, without having to be authoritative about the event or even the person involved;
- that can enable state VR system operators to take further steps to determine whether there's been a change in voter eligibility;
- is sufficiently flexible for a wide variety and number of government organizations to participate with ease.
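A minimal sketch of what one such notification might look like, with hypothetical names throughout (there is no real "Holder" system; the fields and the routing rule are only our guess at the requirements above):

```python
from dataclasses import dataclass
from datetime import date
from typing import List, Optional

# Hypothetical sketch: this message shape is a guess at what a
# non-authoritative, possibly-VR-relevant event could carry.
@dataclass
class PossibleVREvent:
    source_agency: str        # e.g. "USPS", "SSA" -- a reporter, not an authority
    event_type: str           # e.g. "address-forwarding", "possible-death"
    reported_name: str        # as known to the source; possibly stale or wrong
    old_state: str            # state believed to hold the voter record
    new_state: Optional[str]  # possible new state of residence, if any
    reported_on: date

def route_to_states(event: PossibleVREvent) -> List[str]:
    """Fan a notification out to every state that might hold the record.
    Each state then decides, per its own law and point #1 above, whether
    to contact the voter -- the event itself never changes any record."""
    targets = {event.old_state}
    if event.new_state:
        targets.add(event.new_state)
    return sorted(targets)
```

For the professor's case above, a mail-forwarding event from CT to DC would simply notify both states; neither is told, or entitled, to de-register anyone.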
In addition, not required, but darned useful to residents of the 21st century, this system would be complemented by online assistance to help members of the public quickly and accurately respond to inquiries from election officials.
The latter we are, as I have said, already working on, and well into it. But that inter-government information sharing system: what is that? It would clearly have to be uncomplicated, inexpensive, and require no changes in election law or policy. Is that possible?
I think so. Stay tuned, we may be on to something.
Despite today's blog docket being for RockTheVote, I just can't resist pointing out a recurring type of technology-triggered election dysfunction that is happening again, and is 100% preventable using election technology that we have already developed. Here's the scoop: in St. Lucie County, Florida, the LEOs are having trouble coming up with a county-wide grand total of votes, because their adding machine (for totting up the subtotals from dozens of voting machines) leaves the door wide open to human error. The full details are a bit complex in terms of handling of data sticks and error messages, but I've been told that in early voting across 94 precincts, 40 precincts weren't counted at all, and 54 were counted twice. Thank goodness someone noticed afterwards! (Well, 108 precincts totaled out of 94 might have been a tip-off.) Sure, human error was involved, but it is not a great situation where software allows this human error to get through.
We're only talking about software that adds up columns of numbers here! A much better solution would be one where the software refuses to add in any sub-total more than once, and refuses to identify as a finished total anything where there is a sub-total missing. Of course! And I am sure that the vendor of St. Lucie's GEMS system has a fix for this problem in some later version of the software or some successor product. But that's just not relevant if an election official doesn't have the time, budget, support contract, or procurement authority to test the better upgrade, and buy it if it works satisfactorily!
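To make the point concrete, here's a minimal sketch of those two cross-checks (the class and names are illustrative, not the actual TrustTheVote or GEMS code):

```python
# A sketch of the cross-checks described above: refuse to add a precinct
# subtotal twice, and refuse to declare a grand total final while any
# expected precinct is missing.
class Tabulator:
    def __init__(self, expected_precincts):
        self.expected = set(expected_precincts)
        self.subtotals = {}

    def add_subtotal(self, precinct, votes):
        if precinct not in self.expected:
            raise ValueError(f"unknown precinct: {precinct}")
        if precinct in self.subtotals:
            # The St. Lucie failure mode: the same data stick read twice.
            raise ValueError(f"duplicate subtotal for precinct: {precinct}")
        self.subtotals[precinct] = votes

    def grand_total(self):
        missing = self.expected - self.subtotals.keys()
        if missing:
            # The other failure mode: declaring a total with precincts uncounted.
            raise ValueError(f"cannot finalize: missing {sorted(missing)}")
        return sum(self.subtotals.values())
```

With checks like these, reading the same data stick twice raises an error instead of silently doubling a precinct, and no grand total can be declared while any of the 94 expected precincts is missing.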
What's sad is that it is completely preventable by using an alternative adding machine like the one we developed last year (OK, shameless plug) -- which of course does all these cross-checks. The LEOs would need to translate that vendor-proprietary subtotal data into a standard format -- and I know some volunteer programmers who I bet would do that for them. They'd need to use an ordinary PC to run the open source tabulation software -- and I know people who would set it up for them as a public service. And they'd have to spend less than half an hour using the system to get their totals, and comparing them to the totals that their GEMS system provided.
And maybe, in order for it to be kosher, it would have to be a "pilot effort" with oversight by the EAC; we've already discussed that with them and understand that the resource requirements are modest. I bet we could find a FL philanthropist who would underwrite the costs without a second thought, other than how small the cost is compared to the public benefit of the result: avoiding one more day of delay in a series of delays that has left a State unable to finish its election more than a week after election day.
It's just one example of the many possible election integrity benefits that can be demonstrated using technology that, so far at any rate, only non-commercial technologists have been willing to develop for governments to use to do their job correctly -- in this case, producing timely and accurate election results.
A couple of weeks ago I presented at OSCON, and during the conference I had an opportunity to sit down with Mac Slocum, Managing Editor for the O’Reilly Radar. We had about a half-hour conversation, roughly 20 minutes of which was on camera. You can find it here if you want to watch me jaw. But perhaps simpler, below I’ve captured the essence of my answers to Mac’s questions about what the Foundation is about, what we're working on, and the like. I promised Matt Douglass, our Public Relations Director, that I’d get this up for interested followers; apologies it took me a couple of weeks. So, here it is; again, not an official transcript, but a compilation of my answers after watching and listening to the video interview about a dozen times (so you don't have to), combined with my best recollection of my remarks, both as expressed and as intended.
O’Reilly: How are voting systems in the U.S. currently handled? In other words, where do they come from; procurement process; who decides/buys; etc.?
Miller: Voting systems are currently developed and delivered by proprietary systems vendors, and procured by local election jurisdictions such as counties and townships. The States' role is to approve specific products for procurement, often requiring products to have completed a Federal certification process overseen by the EAC. However, the counties and local elections jurisdictions make the vast majority of elections equipment acquisition decisions across the country.
O’Reilly: So how many vendors are there? Or maybe more to the point, what's the state of the industry; who are the players; and what’s the innovation opportunity, etc.?
Miller: Most of the U.S. market is currently served by just 3 vendors. You know, as we sit here today, just two vendors control some 88% of America’s voting systems infrastructure, and one of them has a white-knuckled grip on 75% of that. Election Systems and Services is the largest, after having acquired Premier Systems from its parent company, Diebold. The DoJ interceded on that acquisition under a mandatory Hart-Scott-Rodino Act review to consider potential anti-trust issues. In their settlement with ES&S, the Company dealt off a portion of their technology (and presumably customers) to the Canadian firm Dominion Systems. Dominion was a small player in the U.S. until recently, when it acquired those technology assets of Premier (as part of the DoJ settlement) and acquired the other former market force, Sequoia. And that resulted in consolidating approximately 12% of the U.S. market. Most of the remaining U.S. market is served by Hart-Intercivic Systems.
On the one hand, I’d argue that the voting systems marketplace is so dysfunctional and malformed that there is no incentive to innovate, and at worst a perverse disincentive to innovate, and therefore really not much opportunity. At least that’s what we really believed when we started the Foundation in November 2006. Seriously, for the most part any discussion about innovation in this market today amounts to a discussion of ensuring spare parts for what’s out there. But what really catalyzed us was the belief that we could inject a new level of opportunity… a new infusion of innovation. So, we believe part of the innovation opportunity is demonstrated by the demise of Premier and Sequoia: the U.S. elections market is not large or uniform enough to support a healthy eco-system of competition and innovation. So the innovation opportunity is to abandon the proprietary product model, develop new election technology in a public benefits project, and work directly with election officials to determine their actual needs.
O’Reilly: So what is the TrustTheVote Project, and how does that relate to the Foundation?
Miller: The Open Source Digital Voting Foundation is the enabling 501(c)(3) public benefits corporation that funds and manages projects to develop innovative, publicly owned open source elections and voting technology. The TrustTheVote Project is the flagship effort of the Foundation to design and develop an entirely new ballot eco-system.
What we’re making is an elections technology framework built on breakthrough innovations in elections administration and management and ballot casting and counting that can restore trust in how America votes. Our design goal is to truly deliver on the four legs of integrity in elections: accuracy, transparency, trust, and security.
The reason we’re doing this is simple: this is the stuff of critical democracy infrastructure – something far too much of a public asset to privatize. We need to deliver what the market has so far failed to deliver. And we want to re-invent that industry – based on a new category of entrants – systems integrators who can take the open source framework, integrate it with qualified commodity hardware, and stand it up for counties and elections jurisdictions across the country.
We’re doing this with a small full time team of very senior technologists and technology business executives, as well as contractors, academia, and volunteer developers.
We’re 4 years into an 8-year undertaking – we believe the full framework will be complete and should be achieving widespread adoption, adaptation, and deployment by the close of 2016 – done right, it can impact the national election cycle that year. That said, we’re under some real pressure to expedite this, because it turns out that a large number of jurisdictions will be looking to replace their current proprietary systems over the next 4 years as well.
O’Reilly: How can open source really improve the voting system?
Miller: Well, open source is not a panacea, but we think it’s an important enabler to any solution for the problems of innovation, transparency, and cost that burden today’s elections. Innovation is enabled by the departure from the proprietary product model, including the use of open-source licensing of software developed in a public benefits project. Transparency, or open-government features and capabilities of voting systems, are largely absent and require innovation that the current market does not support. Cost reduction can be enabled by an open-source-based delivery model in which procurements allow system integrators to compete to deliver license-free voting systems, coupled with technical support that lacks the vendor lock-in of current procurements. Open source software doesn't guarantee any of these benefits, but it does enable them.
I should point out too, that one of our deepest commitments is to elections verification and auditability (sic). And our framework, based on an open standards common data format utilizing an XML-based markup language called EML (Election Markup Language), is the foundation on which we can deliver that. Likewise, I should point out our framework is predicated on a durable paper ballot of record… although we haven’t talked about the pieces of the framework yet.
O’Reilly: Well, our time is limited, but you must know I can’t resist this last question, which is probably controversial but which our audience is really curious about. Will online voting ever be viable?
Miller: Well, to be intellectually honest, there are two parts to that loaded question. Let me leave my personal opinion and the position of the Foundation out of it at first, so I just address the question in a sterile light.
First, online voting is already viable in other countries that have these 3 policy features: a national ID system, uniform standards for nationwide elections, and a history of encouraging remote voting by mail rather than in-person voting. These countries also fund the sophisticated centralized IT infrastructure required for online voting, and have accepted the risks of malware and other Internet threats as acceptable parts of nationwide online voting. For a similar approach to be viable in the U.S., those same 3 policy features would likely require some huge political innovations, at the 50-plus state level if not the Federal level. There really isn’t the political stomach for any of that, particularly a national ID (although arguably we already have one), or creating national elections and voting standards, let alone building a national elections system infrastructure. In fact, the National Association of Secretaries of State recently passed (actually, re-upped) an earlier resolution to work to sunset the federal Election Assistance Commission. In other words, there is a real Federalist sense about elections. So, on this first point of socio-political requirements alone, I don’t see it as viable any time soon.
But letting our opinion slip into this, the Foundation believes there is a more important barrier from a technical standpoint. There are flat-out technical barriers that have to be cleared involving critical security and privacy issues at the edge and at the core of any packet-switched solution. Furthermore, building the kind of hardened data center required to transact voting data is far beyond the financial reach of the vast majority of jurisdictions in the country. Another really important point is that online elections are difficult if not impossible to audit or verify. And finally, there is a current lack of sophisticated IT resources in most of the thousands of local elections offices that run elections in the U.S.
So, while elections remain a fundamentally local operation for the foreseeable future, and while funding for elections remains at current levels, and until the technical problems of security and privacy are resolved, nationwide online voting seems unlikely in the U.S.
That said, we should be mindful that the Internet cloud has darkened the doorstep of nearly every aspect of society as we’ve moved from the 2nd age of industrialism to the 3rd age of digitalism. And it seems a bit foolish to assume that the Internet will not impact the conduct of elections in years to come. We know there is a generation maturing now that has never known any way to communicate, find information, shop, or do much of anything else other than online. They live on their phones in an always-on society, and they expect to be able to do everything they need to interact with their government online. Whether that’s a reasonable expectation isn't, I think, the issue.
But I think it will be important for someone to figure out what’s possible in the future; we can’t run and hide from it, but I believe we’re nowhere near being able to securely and verifiably use the Net for elections. There is some very limited use in military and overseas settings, but it needs to be restricted to venues like that until the integrity issues can be ironed out.
So, we’re not supporters of widespread use of the Internet for voting and we don’t believe it will be viable in the near future on a widespread basis. And honestly, we have too much to do in just improving upon ballot casting and counting devices in a polling place setting to spend too many cycles thinking about how to do this across the Internet.
24 hours ago I, along with some others, was actually considering asking for a refund. We had come to the EAC, NIST, and FVAP co-hosted UOCAVA Remote Voting Systems 2-Day Workshop, expecting to feast on some fine discussions about the technical details and nuances of building remote voting systems for overseas voters that could meet the demands of security and privacy. Instead, we had witnessed an intellectual food fight of ideology. That all changed in a big way today.
The producers and moderators of the event, I suspect sensing the potential side effects of yesterday's outcome, came together, somehow collectively made some adjustments (in moderation techniques, approach, and topic tweaking), and pulled off an excellent, informative day full of the kind of discourse I willingly laid down money (the Foundation's money, no less) to attend in the first place.
My hat is off; NIST and EAC on the whole did a great job with a comeback performance today that nearly excused all of what we witnessed yesterday. Today, they exhibited self-deprecating humor, and even had elections officials playing up their drunk-driver characterization from the day before.
Let me share below what we covered; it was substantive. It was detailed. And it was tiring, but in a good way. Here it is:
Breakout Session – Voter Authentication and Privacy
--Identified voter authentication and privacy characteristics and risks of the current UOCAVA voting process.
--Identified potential risks related to voter authentication and privacy of remote electronic absentee voting systems. For example, the group considered:
- Ballot secrecy
- Coercion and/or vote selling
- Voter registration databases and voter lists
- Strength of authentication mechanisms
- Susceptibility to phishing/social engineering
- Usability and accessibility of authentication mechanisms
- Voter autonomy
- Other potential risks
--Considered measures and/or criteria for assessing and quantifying identified risks and their potential impacts.
- How do these compare to those of the current UOCAVA voting processes?
--Identified properties or characteristics of remote digital absentee voting systems that could provide authentication mechanisms and privacy protections comparable to those of the current UOCAVA voting process.
--Considered currently available technologies that can mitigate the identified risks. How do the properties or characteristics of these technologies compare to those of the current UOCAVA voting process?
--Started to identify and discuss emerging or future research areas that hold promise for improving voter authentication and/or privacy. For example:
- Biometrics (e.g., speaker voice identification)
- Novel authentication methods
--Chatted about cryptographic voting protocols and other cryptographic technologies
Breakout Session – Network and Host Security
--Identified problems and risks associated with the transmission of blank and voted ballots through the mail in the current UOCAVA voting process.
--Identified risks associated with electronic transmission or processing of blank and voted ballots. For example, the breakout group considered:
- Reliability and timeliness of transmission
- Availability of voting system data and functions
- Client-side risks to election integrity
- Server-side risks to election integrity
- Threats from nation-states
- Other potential risks
--Considered and discussed measures and/or criteria for assessing and quantifying identified risks and their potential impacts.
- How do these compare to those of the current UOCAVA voting process?
--Identified properties or characteristics of remote digital absentee voting systems that could provide for the transmission of blank and voted ballots at least as reliably and securely as the current UOCAVA voting process.
--Discussed currently available technologies that can mitigate the identified risks and potential impact.
- How do the properties and characteristics of these technologies compare to those of the current UOCAVA voting process?
--Identified and discussed emerging or future research areas that hold promise for improving network and host security. For example:
- Trusted computer and trusted platform models
- End point security posture checking
- Cloud computing
- Semi-controlled platforms (e.g., tablets, smart phones, etc.)
- Use of a trusted device (e.g., smart card, smart phone, etc.)
As you can see, there was a considerable amount of information covered in each 4-hour session, and then the general assembly reconvened to report on outcomes of each breakout group.
Did we solve any problems today? Not so much. Did we make great strides in challenge identification, guiding principles development, and framing the issues that require more research and solution formulation? Absolutely.
Most importantly, John Sebes, our CTO, and I gained a great deal of knowledge we can incorporate into the work of the TrustTheVote Project, had some badly needed clarifying discussions with several attendees, and feel we are moving in the right direction.
We clarified where we stand on use of the Internet in elections (it's not time for anything beyond tightly controlled experimentation, and there is a lack of understanding of the magnitude of resources required to stand up sufficiently hardened data centers to make it work, let alone to solve the problems at the edge).
And we feel like we made some small contributions to helping the EAC and NIST figure out the kind of test Pilot they wish to stand up as a guiding principles reference model sometime over the next 2 years.
Easily a day's work for the 50-60 people in attendance over the two days.
Back to the west coast (around 3am for my Pacific colleagues ;-)
It's a wrap. GAM|out
I came across an interesting article about voter registration: "The Alternative to Universal Voter Registration" where John N. Hall strongly supports Automatic Registration (AR) over Universal Voter Registration (UVR). To people who are not election experts the distinction is a bit subtle. UVR has states proactively try to register everyone to vote, while AR has the federal government somehow use Social Security records to automatically register people.
In Mr Hall's words, AR can be implemented by doing the following:
"Computer programs read through the Social Security Administration database, extract the data of age-eligible citizens, then send that data to the states." (from The Alternative To Universal Voter Registration)
Now that I've started writing this post and done some more googling, I see that this is already a heavily debated topic that has flown back and forth: starting with John Fund's (of the Wall Street Journal) original talk, to Rep. Barney Frank's angry denial that he was involved in any way with Universal Voter Registration, to Mr Fund's retraction of the claim, to more links than you can shake a stick at about the topic.
Anyway, as usual, I am johnny-come-lately. Phew. My original thought, though, was about the original post. You see, John Hall, being a "computer programmer", makes it sound simple:
"Voila! Why make mandates on all those state agencies and dragoon all that manpower entailed by UVR when a computer program can register everyone? A competent programmer could write the extract program in his sleep." (from The Alternative to Universal Voter Registration)
Any description of this that starts with "Voila" and ends with "in his sleep" is ... well let's just say, it must be a bit of a simplification....
For example, I would imagine that there would be major privacy concerns about sending information out of the Social Security systems to each of the states. I am not sure that the states' registration records even include Social Security information. And what about all the voters who don't have Social Security numbers; there must be some, or many? And how quickly are the Social Security databases updated when people die?
Gregory Miller of the OSDV Foundation will provide testimony during the State of California Hearings on the Future of Elections Systems next Monday, February 8th. CA Secretary of State Debra Bowen requested elections and voting systems experts from around the country to attend, testify, and answer questions about the current election administration landscape and how California can best prepare for the future. The Secretary noted in a prepared statement:
Demands for increased transparency and services, shrinking government budgets, and technological advances that outpace elections laws and regulations have combined to challenge what many thought were ‘permanent’ solutions developed as part of the 2002 Help America Vote Act. Many in California and across the nation are ready to move in a new direction. The question is, what should Californians seek in the next generation of voting equipment and how can new products truly serve the interests of voters?
Secretary Bowen will preside over the Hearing, joined by county elections executives from Los Angeles, Orange, Sacramento, San Joaquin, Santa Cruz and Madera counties. In addition to the testimony from OSDV, wide-ranging testimony will come from the U.S. Election Assistance Commission, Pew Center on States, the Federal Voting Assistance Program, representatives from every major voting system manufacturer with contracts in California, and more. The complete agenda is available here.
California has a strong record of thoughtful analysis of its voting systems. In 2007, Secretary Bowen led a top-to-bottom review of certified voting systems. Bowen asserted from the outset that the review would:
Ensure that California’s voters cast their ballots on voting systems that are secure, accurate, reliable, and accessible.
And following the top-to-bottom review, on August 3, 2007, Secretary Bowen strengthened the security requirements and use conditions for certain systems.
So it's no surprise to us that continuing developments in the elections technology industry, as well as legislative initiatives, are leading the Secretary to conduct this Hearing next Monday. Part of that change is best evidenced by the MOVE Act.
We'll discuss more about the MOVE Act in other posts, but in summary, President Obama signed the Military and Overseas Voter Empowerment (MOVE) Act in October 2009. The most immediate impact of the law from the State perspective has to do with the provision that establishes a 45-day deadline for States to provide ballots to voters. Because Primary results need to be certified and General ballots need to be constructed and conveyed, additional time (beyond 45 days) is required to meet the new federal guideline. And the largest impact on elections technology, processes, and practices is two principal provisions of the Act that mandate States shall provide:
- A digital means by which overseas voters can verify and manage their voter registration status; and
- A digital means by which an overseas voter can receive a digital, download ready, blank ballot (think PDF).
Success in implementing these mandates will reduce lost participation of overseas voters; studies have shown that approximately 1 out of every 4 overseas ballots goes uncounted because of failure to arrive in time.
But if it were only that easy. You see, in 2008, many States changed their Primary dates by several months to allow their voters to more heavily impact the presidential nomination process. And additional moves are likely in 2010 because 11 states and the District of Columbia have Primaries so close to the General Election that ballots may not be produced in time to comply with the new MOVE Act law. California has a very large overseas and military voting contingent, and you can imagine MOVE Act mandates are on the minds of CA elections officials, State legislators, and the Secretary.
Of equal interest, Los Angeles County, the largest election jurisdiction in the United States, is engaged in a process known as the Voting Systems Assessment Project (VSAP) to determine the design of their next generation voting system.
Serving over 4 million registered voters, the County is examining the ways in which it can modernize its voting systems. Dean Logan, the County Registrar, and Ken Bennett, the County IT Director, are working to analyze the ways in which technology can ensure their ability to meet operational mandates and better serve their voters. With the VSAP underway (a project the OSDV Foundation is participating in), our "take" is that more (and possibly dramatic) change in elections technology in the great State of California is all but assured.
Stepping back, the current voting technology used in Los Angeles County and elsewhere is provided by private companies, which offer election jurisdictions proprietary technology solutions that must be certified by the CA Secretary of State. While there is oversight at the State level and mandates at the Federal level, each jurisdiction must purchase its own technology and carry out the very important business of conducting elections. Consequently, jurisdictions find themselves locked into multi-year technology contracts.
This gives a jurisdiction continuity, but impairs its ability to innovate and to collaborate with, and learn from, neighboring or similar jurisdictions elsewhere in the state or country.
With L.A. County -- the largest elections jurisdiction in the nation -- considering the future of elections technology for its voters, the mandates of MOVE Act implementation bearing down, and the complexity of the largest state's processes and regulations for selecting and implementing elections technology, the Secretary's Hearing next week could hardly be more essential.
So we are honored to be asked to testify next week. And the timing is good. As a means to developing a holistic architecture for next generation systems, one of the imperative elements is a common data format for the exchange of election event data. This is one particular element we're working on right now. In fact, we will shortly be collaborating with a group of States and jurisdictions on the testing of several framework components including: election event management, ballot preparation, and automated generation of printable ballots (watch for this announcement shortly).
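To make the idea of a common data format concrete, here is a purely illustrative sketch. The field names below are our own invention for this example -- not the TrustTheVote Project's actual schema -- but they convey the goal: election event data expressed once, in a shared structure, that any jurisdiction's tools can produce and consume.

```python
import json

# Hypothetical sketch of a "common data format" for election event data.
# All field names are illustrative, not an actual published schema.
election_event = {
    "election": {
        "name": "General Election",
        "date": "2010-11-02",
        "jurisdiction": "Los Angeles County, CA",
    },
    "contests": [
        {
            "id": "contest-governor",
            "title": "Governor",
            "candidates": ["Candidate A", "Candidate B"],
        }
    ],
}

def export_election_event(event):
    """Serialize an election event deterministically so that two
    jurisdictions exporting the same data produce identical output."""
    return json.dumps(event, indent=2, sort_keys=True)

print(export_election_event(election_event))
```

With a shared structure like this, downstream tasks -- ballot preparation, event management, results reporting -- can be built once against the format rather than once per vendor.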
Here’s the cool thing: it turns out that the work currently underway in the TrustTheVote Project, which leverages this common data format and some other innovations, provides a ready-made, freely available open source solution for implementing the mandates of the MOVE Act.
So, we hope that this work will prove to be relevant and purposeful for the Hearings. Our opportunity to testify is timely because we believe our work is in line with the agenda driving the hearing: What do next generation systems look like and how do states like CA comply with Federal mandates? How can we develop quickly to adapt to changing needs on the ground from elections officials, voters, and federal requirements?
We're excited to participate; go Greg!
Stay tuned; more to come. -Matt
New York State recently certified two voting systems, and the end of that process offers an interesting insight into current certification and standards -- particularly the view of the lone dissenting voter, Bo Lipari, who explained his vote in his blog post My Vote on NY Voting Machine Certification. Bo's complete rationale is certainly worth reading, but I think the most important take-away is very aptly expressed by today's guest blogger, Candice Hoke, Director of the Center for Election Integrity and Associate Professor of Law at the Cleveland-Marshall College of Law at Cleveland State University.

I read Bo Lipari's blog regarding the NY voting system certification issue, and the 9:1 vote in favor of certification, with Bo's vote the only dissent. To provide a lawyer's view, I would mention that Anglo-American law includes a principle termed "substantial compliance." It has limitations and caveats, but it's worth considering how this principle might apply to the voting tech certification area, or instead be excluded from it.
At base, Bo's blog, and certification facts he presents, pose a very important question:
Do we really want voting system vendors to be able to "substantially comply" with the certification standards, or do we want to require more rigorous, complete compliance; and if so, why?
This is a critical question, of course. Certainly, in the earlier NASED certification process, the ITAs (labs operating as Independent Testing Authorities) viewed substantial compliance to be all that was required. The ITA view of “substantial” seemed to be inchoate and ad hoc, perhaps based on a general gestalt of the voting system product under review. As the California TTBR and other independent voting system studies documented, "substantial” offers a great deal of interpretive wiggle room.
My thanks to Candice both for posing this important question and for pointing out that any answer is not going to be tidy, whether it is black-or-white or a paler shade of gray.
Following a previous post with before-and-after pictures of an ideal "remodeling" of a ballot, I have a couple of notes about how such remodeling is harder in practice; another ballot image to illustrate; and some good news about ongoing TTV work on ballot image processing. That ideal remodeling showed how to fix one class of usability flaws (visually "losing" some of the candidates in a race), along with a typical approach to increasing accessibility: abandoning the eye-crossingly stark and skeletal black-and-white layout for one with colors, shading, fonts, and space that help visually separate distinct elements and visually highlight important ones. But the "after" picture is idealistic in two ways.
The full range of accessibility issues is much larger. To get an idea of how much larger -- for example, variations in one color or two, one language or two, paper size, placement of instruction text -- check out part of the results of the AIGA work on ballot design. Or, take a look at the sample image below, which shows some of the fruit of two years of expert input and testing -- which we at TTV are very fortunate to be able to leverage!
These images are intended to be printed as paper ballots that are marked by voters (manually or with digital assistance) and counted by an optical scanner. The previous "after" picture lacked the big visual mess of black rectangles that leap to the eye more than the actual ballot information does -- yet are needed for an optical scanner to orient the ballot image and find the marks. The AIGA sample below has these "timing marks" added back in, with a bit less visual clutter, but still a lot of them.
The good news I mentioned concerns the timing marks and their usability impact. Results so far indicate that our scanner can get by just fine with marks only in each of the 4 corners -- thus dispensing with most of the usability impact of the timing marks. This may seem ultra geeky, but it is the sort of techie result that keeps us going. ;-) More details on this result in a later post.
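For the extra-curious, here is a rough, hypothetical sketch (not our actual scanner code) of why four corner marks can suffice: once the corner marks are detected in the scanned image, the page's rotation can be estimated directly from their coordinates, and mark positions can then be mapped back to the ballot definition.

```python
import math

# Illustrative sketch only -- not the TrustTheVote scanner implementation.
# Given the detected pixel coordinates of two of the corner timing marks,
# estimate how far the scanned page is rotated from horizontal.
def estimate_rotation(top_left, top_right):
    """Angle (in degrees) of the page's top edge relative to horizontal."""
    dx = top_right[0] - top_left[0]
    dy = top_right[1] - top_left[1]
    return math.degrees(math.atan2(dy, dx))

# A scan skewed slightly clockwise: the top-right corner mark sits
# a few pixels lower than the top-left one (about 1 degree of skew).
angle = estimate_rotation((102, 205), (2101, 240))
```

A real scanner would go further -- using all four corners to correct scale and shear as well -- but the principle is the same: a handful of reference points, rather than a full border of black rectangles, is enough to orient the image.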
As you may know, our approach to developing software is kind of agile development meets high assurance. What the heck? We are now engaged in prototyping and modeling, so the slider is set toward the agile development side. But the high assurance part will come. And when it does, and when we want our code to be certified, then clearly coding standards (and many other matters) will come to the fore. But for the moment, as you take a look at the code we have already put out there on github, and other code that is on its way, remember where we are in that evolution.

For now, we feel that coding standards are something of a moving target, so we are not going to be draconian in our oversight of them. In fact, harder than following a particular set of coding standards is ensuring that the software we design and write is as simple, clear, and well structured as possible, and then some. Personally, I place a higher value on that than on whether we use 2 or 4 space tab settings ;)

The other point worth noting is that different parts of the overall election technology suite are subject to different degrees of review and certification. For example, it stands to reason that the code driving the design of ballots warrants different scrutiny than the code tabulating the vote.
So, as software engineers who care about our work -- especially when working on something as important as elections technology -- you can count on us to write code we can be proud of. You won't find us crying crocodile tears when some of our code comes into the public domain and is scrutinized; after all, that's what we've been about from the very start.