Viewing entries tagged
voting technology


Announcing Collaboration to Produce Global Election Technology Industry Study

This week the Wharton School, together with its Public Policy Institute and the OSET Foundation, announced an important industry research project to further inform business, government, and philanthropy on the state of the global election technology industry.  The research team comprises two principal investigators, Dr. Lorin Hitt of Wharton and Gregory Miller of the OSET Foundation, leading six Wharton students, and is managed by Andrew Coopersmith of the Penn Wharton Public Policy Initiative ...


Ballots Are the "ROI" of Campaign Financing

The Center for Responsive Politics (“CRP”), publisher of OpenSecrets.org, ran an article Tuesday detailing the jaw-dropping amount of money already invested in this midterm election campaign cycle, and what is likely to be spent before it is over.  CRP is projecting that almost $4B will be expended on this election cycle.

Presumably, all that spending is meant to encourage voters to cast ballots in favor of the candidate or cause it advocates.  But what is the impact of that spending if the systems on which those ballots are cast and counted are literally falling apart?  We submit that ballots are actually the "ROI" of campaign financing.  And anyone who gives to a campaign ought also to commensurately support efforts to improve HOW America votes.  You see, how America votes is now just as important as who or what America votes for...


“Digital Voting”—Don’t believe everything you think

In a recent blog post we examined David Plouffe’s forward-looking Wall Street Journal op-ed [paywall] and rebalanced his vision with some practical reality.

Now, let’s turn to Plouffe’s notion of “digital voting.”  Honestly, that phrase is confusing and vague.  We should know: it catalyzed our name change last year from Open Source Digital Voting Foundation (OSDV) to Open Source Election Technology Foundation (OSET).


Money Shot: What Does a $40M Bet on Scytl Mean?

…not much we think.

Yesterday’s news that billionaire Microsoft co-founder Paul Allen is investing $40M in the Spanish election technology company Scytl is validation that elections remain a backwater of innovation in the digital age.

But it is not validation that there is a viable commercial market for voting systems of the size that typically attracts venture capitalists; the market is dysfunctional and small, and governments continue to be without budget.

And the challenge of building a user-friendly, secure online voting system that simultaneously protects the anonymity of the ballot is an interesting problem that only an investor of Mr. Allen’s stature can tackle.

We think this illuminates a larger question:

To what extent should the core technology of the most vital aspect of our Democracy be proprietary and black box, rather than publicly owned and transparent?

To us, that is a threshold public policy question, commercial investment viability issues notwithstanding.

To be sure, it is encouraging to see Vulcan Capital and a visionary like Paul Allen invest in voting technology. The challenges facing a successful elections ecosystem are complex and evolving and we will need the collective genius of the tech industry’s brightest to deliver fundamental innovation.

We at the TrustTheVote Project believe voting is a vital component of our nation’s democracy infrastructure and that American voters expect and deserve a voting experience that’s verifiable, accurate, secure, and transparent.  Will Scytl be the way to deliver that?

The Main Thing

The one thing that stood out to us in the various articles on the investment was Scytl’s assertion of security, backed by international patents on cryptographic protocols.  We’ve been around the INFOSEC space for a long time and know a lot of really smart people in the crypto field, so we’re curious to learn more about their IP innovations.  And yet that assertion is actually a red herring to us.

Here’s the main thing: transacting ballots over the public packet-switched network is not simply about security.  It’s also about privacy; that is, the secrecy of the ballot.  Here is an immutable maxim about the digital world of security and privacy: there is an inverse relationship, which holds that as security is increased, privacy must be decreased, and vice versa.  Just consider any airport security experience: if you want maximum security, you must surrender a good deal of privacy.  This is the main challenge of transacting ballots across the Internet, and why that transaction is so very different from banking online or looking at your medical record.

And then there is the entire issue of infrastructure.  We continue to harp on this, and still wait for a good answer.  If, by their own admissions, the Department of Defense, Google, Target, and dozens of others have trouble securing their own data centers, how exactly can we be certain that a vendor on a cloud-based service model, or a county or State running an in-house data center, has any better chance of doing so?  Security is an arms race.  Consider today’s news about Heartbleed alone.

Oh, and please, for the sake of credibility, can the marketing machinery stop using the phrase “military grade security”?  There is no such thing.  And it has nothing to do with increasing key lengths, say from the standard 128-bit encryption keys to 512 or 1024 bits.  128-bit keys are fine, and there is nothing military about them (other than that the military uses them).  Here is an interesting article from some years ago on the sufficiency of current crypto and the related marketing arms race.  Saying “military grade” is meaningless hype.  Besides, the security issues run far beyond the transit of data between machines.

In short, there is much the public should demand to understand from anyone’s security assertions, international patents notwithstanding.  And that goes for us too.

The Bottom Line

While we laud Mr. Allen’s investment in what surely is an interesting problem, no one should think for a moment that this signals some sort of commercial viability or tremendous growth market opportunity.  Nor should anyone assume that throwing money at a problem will necessarily fix it (or deliver us from the backwaters of Government elections I.T.).  Nor should we assume that this somehow validates Scytl’s “model” for “security.”

Perhaps more importantly, while we need lots of attention, research, development and experimentation, the bottom line to us is whether the outcome should be a commercial proprietary black-box result or an open transparent publicly owned result… where the “result” as used here refers to the core technology of casting and counting ballots, and not the viable and necessary commercial business of delivering, deploying and servicing that technology.


A Northern Exposed iVoting Adventure


Alaska's extension of its iVoting venture may have raised the interest of at least one journalist at one highly visible publication.  When we were asked for our "take" on this form of iVoting, we thought we should also comment here on this "northern exposed adventure" (apologies to fans of the wacky mid-90s TV series of a similar name).  Alaska has been among the states that allow military and overseas voters to return marked absentee ballots digitally, starting with fax, then eMail, and then adding a web upload as a third option.  Focusing specifically on the web-upload option, the question was: "How is Alaska doing this, and how do their efforts square with common concerns about security, accessibility, Federal standards, testing, certification, and accreditation?"

In most cases, any voting system has to run that whole gauntlet through to accreditation by a state in order to be used in that state.  To date, none of the iVoting products has even tried to run that gauntlet.

So what Alaska is doing, with respect to security, certification, and a host of other things, is essentially flying solo.

Their system has not gone through any certification program (State, Federal, or otherwise, that we can tell); hasn't been tested by an accredited voting system test lab; and nobody knows how it does or doesn't meet federal requirements for security, accessibility, and other (voluntary) specifications and guidelines for voting systems.

In Alaska, they've "rolled their own" system.  It's their right as a State to do so.

In Alaska, military voters have several options, and only one of them is the ability to go to a web site, indicate their choices, and have their votes recorded electronically -- no actual paper ballot involved, no absentee ballot affidavit or signature needed.  In contrast to the sign/scan/email method of returning an absentee ballot and affidavit (used in Alaska and 20 other states), this is straight-up iVoting.

So what does their experience say about all the often-quoted challenges of iVoting?  Well, of course in Alaska those challenges apply the same as anywhere else, and they are facing them all:

  1. insider threats;
  2. outsider hacking threats;
  3. physical security;
  4. personnel security; and
  5. data integrity (including that of the keys that underlie any use of cryptography).

In short, the Alaska iVoting solution faces all the challenges of digital banking and online commerce that every financial services industry titan and eCommerce giant spends big $ on every year (capital and expense), and yet still routinely suffers attacks and breaches.

Compared to those titans of industry (banking, finance, technology services, or even the Department of Defense), how well are Alaskan election administrators doing on their (by comparison) shoestring budget?

Good question.  It's not subject to annual review (like a bank's IT operations audit under SAS-70), so we don't know.  That, too, is their right as a U.S. state.  However, the fact that we don't know does not debunk any of the common claims about these challenges.  Rather, it simply says that in Alaska they took on the challenges (which are large) and the general public doesn't know much about how they're doing.

To get a feeling for the risks involved, just consider one point: think about the handful of IT geeks who manage the iVoting servers where the votes are recorded and stored as bits on a disk.  They are not election officials, and they are no more entitled to stick their hands into paper ballot boxes than anybody else outside a county elections office.  Yet they have the ability (though not the authorization) to access those bits.

  • Who are they?
  • Does anybody really oversee their actions?
  • Do they have remote access to the voting servers from anywhere on the planet?
  • Using passwords that could be guessed?
  • Who knows?

They're probably competent, responsible people, but we don't know.  Not knowing any of that, every vote on those voting servers is actually a question mark -- and that's simply being intellectually honest.

Lastly, to get a feeling for the possible significance of this lack of knowledge, consider a situation in which Alaska's electoral college votes swing an election, or in which Alaska's Senate race swings control of Congress (not far-fetched given Murkowski's close call back in 2010).

When the margin of victory in Alaska, for an election result that affects the entire nation, is a low 4-digit number of votes, and the number of digital votes cast is similar, what does that mean?

It's quite possible that that many digital votes could be cast in the next Alaska Senate race.  If the contest is that close again, think about the scrutiny those IT folks will get.  Will they be evaluated any better than every banking data center investigated after a data breach?  Any better than Target?  Any better than Google's or Adobe's IT management after having trade secrets stolen?  Or any better than the operators of unclassified military systems that for years were penetrated by hackers located in China who may well have been supported by the Chinese Army or intelligence groups?

Probably not.

Instead, they'll be lucky (we hope), like the Estonian iVoting administrators when the OSCE visited back in 2011 to have a look at the Estonian system.  Things didn't go so well.  The OSCE found that one guy could have undermined the whole system.  Good news: it didn't happen.  Cold comfort: that one guy didn't seem to have the opportunity -- most likely because he and his colleagues were busier than a one-armed paper hanger during the election, worrying about Russian hackers attacking again after they had previously shut down the whole country's Internet-connected government systems.

So far the threat is remote, and it is still early days even for small-scale usage of Alaska's iVoting option.  But while the threat is still remote, it might be good for the public to see more about what's "under the hood" and who's in charge of the engine -- that would be our idea of more transparency.

Wandering off the Main Point for a Few Paragraphs

So, in closing, I'm going to run the risk of being a little preachy here (signaled by that faux tag above); again, probably due to the recent surge in media inquiries about how the Millennial generation intends to cast their ballots one day.  Lock and load.

I (and all of us here) am all for advancing the hallmarks of the Millennial mandate of the digital age: ease and convenience.  I am also keenly aware there are wing-nuts looking for their Andy Warhol moment.  And whether enticed by anarchist rhetoric, their own reality distortion field, or, most insidious, the evangelism of a terrorist agenda (domestic or foreign), said wing-nut(s) -- perhaps just for grins and giggles -- might see an opportunity to derail an election (see my point above about a close race that swings control of Congress, or worse).

Here's the deep concern: I'm one of those who believes that the horrific attacks of 9.11 had little to do with body count or the implosions of western icons of financial might.  The real underlying agenda was to determine whether it might be possible to cause a temblor of sufficient magnitude to take world financial markets seriously off-line, and whether doing so might cause a rippling effect of chaos in world markets, and what disruption and destruction that might wreak.  If we believe that, then consider the opportunity for disruption of the operational continuity of our democracy.

It's not that we are Internet haters: we're not -- several of us came from Netscape and other technology companies that helped pioneer the commercialization of that amazing government and academic experiment we call the Internet.  It's just that THIS Internet and its current architecture simply were not designed to be inherently secure or to ensure anyone's absolute privacy (and strengthening one necessarily means weakening the other).

So, while we're all focused on ease and convenience, and we live in an increasingly distributed democracy, and the Internet cloud is darkening the doorstep of literally every aspect of society (and now government too), great care must be taken as legislatures rush to enact new laws and regulations to enable studies, or build so-called pilots, or simply advance the Millennial agenda to make voting a smartphone experience.  We must be very careful and considerably vigilant, because it's not beyond the realm of reality that some wing-nut is watching, cracking their knuckles in front of their screen and keyboard, mumbling, "Oh please. Oh please."

Alaska has the right to venture down its own path in the northern territory, but it does so exposing an attack surface.  They need not (indeed, cannot) see this enemy from their back porch (I really can't say the same of others).  But just because it cannot be identified at the moment doesn't mean it isn't there.

One other small point:  As a research and education non-profit, we're asked why we shouldn't be "working on making Internet voting possible?"  Answer: perhaps in due time.  We do believe that, on the horizon, responsible research must be undertaken to determine how we can offer, by digital means, an additional alternative for casting a ballot alongside the absentee and polling place experiences.  And that "digital means" might be over the public packet-switched network.  Or maybe some other type of network.  We'll get there.  But candidly, our charge for the next couple of years is to update the outdated architecture of existing voting machinery and elections systems and bring about substantial, but still incremental, innovation that jurisdictions can afford to adopt, adapt, and deploy.  We're taking one thing at a time, first things first; or as our former CEO at Netscape used to say, we're going to "keep the main thing, the main thing."

Onward
GAM|out


PCEA Report Finally Out: The Real Opportunity for Innovation Inside

This week the PCEA finally released its long-awaited report to the President.  It's loaded with good recommendations.  Over the next several days and posts we'll give you our take on some of them.  For the moment, we want to call your attention to a couple of underpinning elements now that it's done.

The Resource Behind the Resources

Early in the formation of what was initially referred to as the "Bauer-Ginsberg Commission," we were asked to visit the co-chairs in Washington, D.C. to chat about technology experts and resources.  We have a Board member who knows them both, and when asked, we were honored to respond.

Early on we advised the Co-Chairs that their research would be incomplete without speaking with several election technology experts, and of course they agreed.  The question was how to create a means to do so and not bog down the progress governed by layers of necessary administrative regulations.

I take a paragraph here to observe that I was very impressed in our initial meeting with Bob Bauer and Ben Ginsberg.  Despite being polar political opposites they demonstrated how Washington should work: they were respectful, collegial, sought compromise to advance the common agenda and seemed to be intent on checking politics at the door in order to get work done.  It was refreshing and restored my faith that somewhere in the District there remains a potential for government to actually work for the people.  I digress.

We advised them that looking to the CalTech-MIT Voting Project would definitely be one resource they could benefit from having.

We offered our own organization, but with our tax exempt status still pending, it would be difficult politically and otherwise to rely on us much in a visible manner.

So the Chairs asked us if we could pull together a list -- not an official subcommittee, mind you, but a list of the top "go to" minds in the elections technology domain.  We agreed and began a several-week process of vetting a list that needed to be winnowed down to about 20 for manageability.  These experts would be brought in individually as desired, or collectively -- it was to be figured out later which would be most administratively expedient.  Several of our readers, supporters, and those who know us were aware of this confidential effort.  The challenge was a lack of time to run the entire process of public recruiting and selection.  So they asked us to help expedite that, having determined we could gather the best in short order.

And that was fine because anyone was entitled to contact the Commission, submit letters and comments and come testify or speak at the several public hearings to be held.

So we did that.  And several of that group were in fact utilized.  Not everyone though, and that was kind of disappointing, but a function of the timing constraints.

The next major resource we advised they had to include, besides CalTech-MIT and a tech advisory group, was Rock The Vote.  And that was because (notwithstanding their being a technology partner of ours) Rock The Vote has its ear to the rails of new and young voters, starting with their registration experience and initial opportunity to cast a ballot.

Finally, we noted that there were a couple of other resources they really could not afford to overlook, including the Verified Voting Foundation, L.A. County's VSAP Project, and Travis County's STAR-Vote Project.

The outcome of all of that brings me to the meat of this post about the PCEA Report and our real contribution.  Sure, we had some behind-the-scenes involvement, as I describe above.  No big deal.  We hope it helped.

The Real Opportunity for Innovation

But the real opportunity to contribute came in the creation of the PCEA Web Site and its resource toolkit pages.

On that site, the PCEA took our advice and chose to utilize Rock The Vote's open source voter registration tools and, specifically, the foundational elements the TrustTheVote Project has built for a State's Voter Information Services Portal.

Together, Rock The Vote and the TrustTheVote Project are able to showcase the open source software that any State can adopt, adapt, and deploy--for free (at least the adoption part) and without having to reinvent the wheel by paying for a ground-up custom build of their own online voter registration and information services portal.

We submit that this resource on the PCEA web site represents an important ingredient for injecting innovation into the stagnant technology environment of today's elections and voting systems world.

For the first time, there is production-ready open source software available for an important part of an elections official's administrative responsibilities that can lower costs, accelerate deployment and catalyze innovation.

To be sure, it's only a start -- it's the lower-hanging fruit of an election technology platform, the part that doesn't require any sort of certification.  With our exempt status in place, and lots of things happening that we'll soon share, there is more, much more, to come.  But this is a start.

There are 112 pages of goodness in the PCEA report.  And there are some elements in there that deserve further discussion.  But we humbly assert it's the availability of some open source software on their resource web site that represents a quiet breakthrough in elections technology innovation.

The news has been considerable.  So, yep, we admit it.  We're oozing pride today. And we owe it to your continued support of our cause. Thank you!

GAM | out


Comments Prepared for Tonight's Elections Technology Roundtable

This evening at 5:00pm, members of the TrustTheVote Project have been invited to attend an elections technology roundtable discussion in advance of a public hearing in Sacramento, CA, scheduled for tomorrow at 2:00pm PST, on new regulations governing Voting System Certification to be contained in Division 7 of Title 2 of the California Code of Regulations.  Due to the level of activity, only our CTO, John Sebes, is able to participate.

We were asked if John could be prepared to make some brief remarks regarding our view of the impact of SB-360 and its potential to catalyze innovation in voting systems.  These types of events are always dynamic and fluid, and so we decided to publish our remarks below just in advance of this meeting.

Roundtable Meeting Remarks from the OSDV Foundation | TrustTheVote Project

We appreciate an opportunity to participate in this important discussion.  We want to take about 2 minutes to comment on 3 considerations from our point of view at the TrustTheVote Project.

1. Certification

For SB-360 to succeed, we believe any effort to create a high-integrity certification process requires re-thinking how certification has been done to this point.  Current federal certification, for example, takes a monolithic approach; that is, a voting system is certified based on a complete all-inclusive single closed system model.  This is a very 20th century approach that makes assumptions about software, hardware, and systems that are out of touch with today’s dynamic technology environment, where the lifetime of commodity hardware is months.

We are collaborating with NIST on a way to update this outdated model with a "component-ized" approach; that is, a unit-level testing method, such that if a component needs to be changed, the only re-certification required would be of that discrete element, and not the entire system.  There are enormous potential benefits including lowering costs, speeding certification, and removing a bar to innovation.

We're glad to talk more about this proposed updated certification model, as it might inform any certification processes to be implemented in California.  Regardless, elections officials should consider that in order to reap the benefits of SB-360, the non-profit TrustTheVote Project believes a new certification process, component-ized as we describe it, is essential.

2. Standards

Second, there is a prerequisite for component-level certification that until recently wasn't available: common open data format standards that enable components to communicate with one another; for example, a format for a ballot-counter's output of vote tally data that also serves as input to a tabulator component.  Without common data formats, elections officials have to acquire a whole integrated product suite that communicates in a proprietary manner.  With common data formats, you can mix and match; and perhaps more importantly, incrementally replace units over time, rather than doing what we like to refer to as "forklift upgrades" or "fleet replacements."
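To make that concrete, here is a minimal sketch (in Python, with a record layout we invented purely for illustration; the actual common data formats being standardized are far richer) of a ballot counter emitting tally data in an open format that a separately built tabulator component can consume:

    import json

    # Hypothetical, simplified tally record a precinct ballot counter might emit.
    # The field names are illustrative only; real common data formats are defined
    # by the standards work described above.
    counter_output = json.dumps({
        "precinct": "0042",
        "contest": "Governor",
        "tallies": {"Smith": 312, "Jones": 287, "write-in": 4},
        "ballots_counted": 603,
    })

    def tabulate(records):
        """Combine per-precinct tally records produced by any conforming counter."""
        totals = {}
        for raw in records:
            record = json.loads(raw)  # same open format, regardless of vendor
            for candidate, votes in record["tallies"].items():
                totals[candidate] = totals.get(candidate, 0) + votes
        return totals

    print(tabulate([counter_output]))  # {'Smith': 312, 'Jones': 287, 'write-in': 4}

The point is not the code; it is that once the record format is openly specified, the counter and the tabulator can come from different suppliers and be replaced independently.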

The good news is the scope for ballot casting and counting is sufficiently focused to avoid distraction from the many other standards elements of the entire elections ecosystem.  And there is more goodness because standards bodies are working on this right now, with participation by several state and local election officials, as well as vendors present today and non-profit projects like TrustTheVote.  They deserve congratulations for reaching this imperative state of data standards détente.  It's not finished, but the effort and momentum are there.

So, elections officials should bear in mind that benefits of SB-360 also rest on the existence of common open elections data standards.

3. Commercial Revitalization

Finally, this may be the opportunity to realize a vision we have: that open data standards, a new certification process, and lowered bars to innovation through open sourcing will reinvigorate a stagnant voting technology industry.  Because the passage of SB-360 can fortify these three developments, there can (and should) be renewed commercial enthusiasm for innovation.  That should bring about new vendors, new solutions, and new empowerment of elections officials themselves to choose how they want to raise their voting systems to a higher grade of performance, reliability, fault tolerance, and integrity.

One compelling example is the potential for commodity commercial off-the-shelf hardware to fully meet the needs of voting and elections machinery.  To that point, let us offer an important clarification and dispel a misconception about rolling your own.  This does not mean that elections officials are about to be left to self-vend -- by which we mean self-construct and support their own open, standard, commodity voting system components.  A few jurisdictions may consider it, but in the vast majority of cases, the Foundation forecasts that this will simply introduce more choice rather than forcing anyone to become a do-it-yourself type.  Some may choose to contract with a systems integrator to deploy a new system integrating commodity hardware and open source software.  Others may choose vendors who offer out-of-the-box open source solutions on pre-packaged hardware.

Choice is good: it’s an awesome self-correcting market regulator and it ensures opportunity for innovation.  To the latter point, we believe initiatives underway like STAR-vote in Travis County, TX, and the TrustTheVote Project will catalyze that innovation in an open source manner, thereby lowering costs, improving transparency, and ultimately improving the quality of what we consider critical democracy infrastructure.

In short, we think SB-360 can help inject new vitality in voting systems technology (at least in the State of California), so long as we can realize the benefits of open standards and drive the modernization of certification.

 

EDITORIAL NOTES: There was chatter earlier this Fall about the extent to which SB-360 allegedly makes unverified, non-certified voting systems a possibility in California.  We don't read SB-360 that way at all.  We encourage you to read the text of the legislation as passed into law for yourself, and start with this meeting notice digest.  In fact, to realize the kind of vision that leading jurisdictions imagine, we cannot, and should not, do away with certification, and we think charges that this is what will happen are misinformed.  We simply need to modernize how certification works to enable this kind of innovation.  We think our comments today bear that out.

Moreover, have a look at the Agenda for tomorrow's hearing on implementation of SB-360.  In sum and substance the agenda is to discuss:

  1. Establishing the specifications for voting machines, voting devices, vote tabulating devices, and any software used for each, including the programs and procedures for vote tabulating and testing. (The proposed regulations would implement, interpret and make specific Section 19205 of the California Elections Code.);
  2. Clarifying the requirements imposed by recently chaptered Senate Bill 360, Chapter 602, Statutes 2013, which amended California Elections Code Division 19 regarding the certification of voting systems; and
  3. Clarifying the newly defined voting system certification process, as prescribed in Senate Bill 360.

Finally, there has been an additional charge that SB-360 is intended to "empower" LA County, such that LA County (or someone on its behalf) will sell the voting systems it builds to other jurisdictions.  We think this allegation is also misinformed, for two reasons: [1] assuming LA County builds its system on open source, there is a question as to what specifically would or could be offered for sale; and [2] notwithstanding that open source can be offered for sale (which technically can be done... technically), it seems to us that if such a system is built with public dollars, then it is, in fact, publicly owned.  From what we understand, a government agency cannot sell assets developed with public dollars, but it can give them away.  And indeed, this is what we've witnessed over the years in other jurisdictions.

Onward.


Crowd Sourcing Polling Place Wait Times

Long lines at the polling place are becoming a thorn in our democracy.  We realized a few months ago that our elections technology framework data layer could provide information that, when combined with community-based information gathering, might lessen the discomfort of that thorn.  Actually, that realization happened while hearing friends extol the virtues of Waze.  Simply enough, the idea is to crowd-source wait information to gain at least some insight into how busy a polling place might be at the time one wants to go cast a ballot.

Well, to be sure, lots of people are noodling around lots of good ideas and there is certainly no shortage of discussion on the topic of polling place performance.  And, we’re all aware that the President has taken issue with it and after a couple of mentions in speeches, created the Bauer-Ginsberg Commission.  So, it seems reasonable to assume this idea of engaging some self-reporting isn’t entirely novel.

After all, it's kewl to imagine being able to tell – in real time – what the current wait time at the polling place is, so a voter can avoid the crowds, or a news organization can track the hot spots of long lines.  We do some "ideating" below, but first I offer three observations from our noodling:

  • It really is a good idea; but
  • There’s a large lemon in it; yet
  • We have the recipe for some decent lemonade.

Here’s the Ideation Part

Wouldn’t it be great if everybody could use an app on their smarty phone to say, “Hi All, it’s me, I just arrived at my polling place, the line looks a bit long,” and then later, “Me again, OK, just finished voting, and geesh, like 90 minutes from start to finish… not so good,” or “Me again, I’m bailing.  Need to get to the airport.”

And wouldn’t it be great if all that input from every voter was gathered in the cloud somehow, so I could look up my polling place, see the wait time, the trend line of wait times, the percentage of my precinct’s non-absentee voters who have already voted, and other helpful stuff?  And wouldn’t it be interesting if the news media could show a real-time view across a whole county or State?
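Before getting to the catch, and purely for illustration, here is a rough sketch (in Python; every name and number below is made up) of how such self-reports might be rolled up into a current wait estimate for a polling place:

    from datetime import datetime, timedelta
    from statistics import median

    # Hypothetical self-reports: (polling_place_id, reported_at, wait_minutes)
    reports = [
        ("precinct-12", datetime(2014, 11, 4, 9, 5), 35),
        ("precinct-12", datetime(2014, 11, 4, 9, 20), 50),
        ("precinct-12", datetime(2014, 11, 4, 9, 40), 45),
    ]

    def current_wait(reports, place, now, window=timedelta(hours=1)):
        """Median of recent reports; a median blunts a few bogus reports,
        though it does not by itself solve the abuse problem discussed below."""
        recent = [w for p, t, w in reports if p == place and now - t <= window]
        return median(recent) if recent else None

    print(current_wait(reports, "precinct-12", datetime(2014, 11, 4, 10, 0)))  # 45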

Well, if you’re reading this, I bet you agree, “Yes, yes it would.”  Sure.  Except for one thing.  To be really useful it would have to be accurate.  And if there is a question about accuracy (ah shoot, ya know where this is going, don-cha?)…  Yes, there is always that Grinch called “abuse.”

Sigh. We know from recent big elections that apparently, partisan organizations are sometimes willing to spend lots of money on billboard ads, spam campaigns, robo-calls, and so on, to actually try to discourage people from going to the polls, within targeted locales and/or demographics. So, we could expect this great idea, in some cases, to fall afoul of similar abuse.  And that’s the fat lemon.

But, please read on.

Now, we can imagine some frequent readers spinning up to accuse us of wanting everything to be perfectly secure, of letting the best be the enemy of the good, and noting that nothing will ever be accomplished if first every objection must be overcome. On other days, they might be right, but not so much today.

We don’t believe this polling place traffic monitoring service idea requires the invention of some new security, or integrity, or privacy stuff.  On the other hand, relying on the honor system is probably not right either.  Instead, we think that in real life something like this would have a much better chance of launch and sustained benefit if it were based on some existing model of voters doing mobile computing in a responsible way, one that’s not trivial to abuse as the honor system is.

And that leads us to the good news – you see, we have such an existing model, in real life.  That’s the new ingredient, along with that lemon above and a little innovative sugar, for the lemonade that I mentioned.

Stay tuned for Part 2, and while waiting you might glance at this.


For (Digital) Poll Books -- Custody Matters!

Today I am presenting at the annual Elections Verification Conference in Atlanta, GA, and my panel is discussing the good, the bad, and the ugly of the digital poll book (often referred to as the “e-pollbook”).  For our casual readers, the digital poll book or “DPB” is—as you might assume—a digital relative of the paper poll book: that pile of print-outs containing the names of registered voters for a given precinct.  For our domain-savvy readers, the issues to be discussed today concern the application, sometimes overloaded application, of DPBs and their related issues of reliability, security, and verifiability.  So as I head into this, I wanted to echo some thoughts here about DPBs as we are addressing them at the TrustTheVote Project.

We've been hearing much lately about State and local election officials' appetite (or infatuation) for digital poll books.  We've been discussing various models and requirements (or objectives) while developing the core of the TrustTheVote Digital Poll Book.  But in several of these discussions, we’ve noticed that only two out of three basic purposes of poll books of any type (paper or digital, online or offline) seem to be well understood.  And we think the gap shows why physical custody is so important—especially for digital poll books.

The first two obvious purposes of a poll book are [1] to check in a voter as a prerequisite to obtaining a ballot, and [2] to prevent a voter from having a second go at checking in and obtaining a ballot.  That's fine for meeting the "Eligibility" and "Non-duplication" requirements for in-person voting.

But then there is the increasingly popular absentee voting, where the role of poll books seems less well understood.  In our humble opinion, those in-person polling-place poll books are also critical for absentee and provisional voting.  Bear in mind, those "delayed-cast" ballots can't be evaluated until after the post-election poll-book-intake process is complete.

To explain why, let's consider one fairly typical approach to absentee evaluation.  The poll book intake process results in an update to the voter record of every voter who voted in person.  Then, the voter record system is used as one part of absentee and provisional ballot processing.  Before each ballot may be separated from its affidavit, the reviewer must check the voter identity on the affidavit, and then find the corresponding voter record.  If the voter record indicates that the voter cast their ballot in person, then the absentee or provisional ballot must not be counted.
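A minimal sketch of that decision logic (in Python; the data shapes are ours, invented purely for illustration) might look like this:

    # Hypothetical result of the post-election poll-book intake process:
    # True means the poll books show this voter already cast a ballot in person.
    voted_in_person = {"V-000123": True, "V-000456": False}

    def evaluate_delayed_ballot(voter_id):
        """Decide whether an absentee/provisional ballot may be separated from
        its affidavit and counted, per the review process described above."""
        if voter_id not in voted_in_person:
            return "research"   # no matching voter record; needs further review
        if voted_in_person[voter_id]:
            return "reject"     # voter already voted in person; do not count
        return "count"          # eligible; the ballot may be counted

    print(evaluate_delayed_ballot("V-000123"))  # reject
    print(evaluate_delayed_ballot("V-000456"))  # count

Trivial logic, but it is only as trustworthy as the poll-book data feeding it -- which is where custody comes in.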

So far, that's a story about poll books that should be fairly well understood, but there is an interesting twist when it comes to digital poll books (DPBs).

The general principle for DPB operation is that it should follow the process used with paper poll books (though other useful features may be added).  With paper poll books, both the medium (paper) and the message (who voted) are inseparable, and remain in the custody of election staff (LEOs and volunteers) throughout the entire life cycle of the poll book.

With the DPB, however, things are trickier.  The medium (e.g., a tablet computer) and the message (the data that's managed by the tablet, and that represents who voted) can be separated, although they should not be.

Why not? Well, we can hope that the medium remains in the appropriate physical custody, just as paper poll books do. But if the message (the data) leaves the tablet, and/or becomes accessible to others, then we have potential problems with accuracy of the message.  It's essential that the DPB data remain under the control of election staff, and that the data gathered during the DPB intake process is exactly the data that election staff recorded in the polling place.  Otherwise, double voting may be possible, or some valid absentee or provisional ballots may be erroneously rejected.  Similarly, the poll book data used in the polling place must be exactly as previously prepared, or legitimate voters might be barred.
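One way to detect that kind of substitution or tampering, sketched here with nothing but the Python standard library (this is our own illustration, not the TrustTheVote design), is for election staff to keep a keyed fingerprint of exactly what they recorded:

    import hashlib
    import hmac

    SECRET = b"held-by-election-officials-only"  # illustrative; real key management is its own problem

    def seal(pollbook_bytes):
        """Fingerprint the poll book data at the moment it leaves staff custody."""
        return hmac.new(SECRET, pollbook_bytes, hashlib.sha256).hexdigest()

    def verify(pollbook_bytes, fingerprint):
        """At intake, accept the data only if it is exactly what was recorded."""
        return hmac.compare_digest(seal(pollbook_bytes), fingerprint)

    data = b"precinct 7: voter 123 checked in 07:02; voter 456 checked in 07:05"
    tag = seal(data)
    print(verify(data, tag))               # True
    print(verify(data + b" edited", tag))  # False: data changed outside custody

Even so, a check like this supplements, and does not replace, keeping the data under the control of election staff end to end.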

That's why digital poll books must be carefully designed for use by election staff in a way that doesn't endanger the integrity of the data.  And this is an example of the devil in the details that's so common for innovative election technology.

Those devilish details derail some nifty ideas, like one we heard of recently: a simple and inexpensive iPad app that provides the digital poll book UI based on poll book data downloaded (via 4G wireless network) from “cloud storage” where an election official previously put it in a simple CSV file; and where the end-of-day poll book data was put back into the cloud storage for later download by election officials.

Marvelous simplicity, right?  Oh heck, I'm sure some grant-funded project could build that right away.  But it turns out that is wholly unacceptable in terms of the chain of custody of data that accurate vote counts depend on.  You wouldn't put the actual vote data in the cloud that way, and poll book data is no less critical to election integrity.

A Side Note:  This is also an example of the challenge we often face from well-intentioned innovators of the digital democracy movement who insist that we’re making a mountain out of a molehill in our efforts.  They argue that this stuff is way easier than we make it out to be, and ripe for all of the “kewl” digital innovations at our fingertips today.  Sure, there are plenty of very well designed innovations and combinations of ubiquitous technology that have driven the social web and now the emerging utility web.  And we’re leveraging and designing around elements that make sense here—for instance, the powerful new touch interfaces driving today’s mobile digital devices.  But there is far more to it than a sexy interface with a 4G connection.  Oops, I digress into a tangential gripe.

This nifty example of well-intentioned innovation illustrates why the majority of the technology work in a digital poll book solution is actually in [1] the data integration (to and from the voter record system); [2] the data management (to and from each individual digital poll book); and [3] the data integrity (maintaining the same control present in paper poll books).

Without a doubt, the voter's user experience, as well as the election poll worker or official’s user experience, is very important (note pic above)—and we're gathering plenty of requirements and feedback based on our current work.  But before the TTV Digital Poll Book is fully baked, we need to do equal justice to those devilish details, in ways that meet the varying requirements of various States and localities.

Thoughts? Your ball (er, ballot?) GAM | out


TrustTheVote on HuffPost

We'll be live on HuffPost online today at 8pm eastern:

  • @HuffPostLive http://huff.lv/Uhokgr or live.huffingtonpost.com

and I thought we should share our talking points for the question:

  • How do you compare old-school paper ballots vs. e-voting?

I thought the answers would be particularly relevant to today's NYT editorial on the election which concluded with this quote:

That the race came down to a relatively small number of voters in a relatively small number of states did not speak well for a national election apparatus that is so dependent on badly engineered and badly managed voting systems around the country. The delays and breakdowns in voting machines were inexcusable.

I don't disagree, and indeed would extend the point from flaky voting machines to election technology in general, including clunky voter record systems that lead to many of the lines and delays in polling places.

So the HuffPost question is apposite to that point, but still not quite right. It's not an either/or but rather a comparison of:

  • old-school paper ballots and 19th century election fraud;
  • old-school machine voting and 20th century lost ballots;
  • old-school combo system of paper ballots, machine counting, and botched re-counting;
  • new-fangled machine voting (e-voting) and 21st century lost ballots;
  • newer combo system of paper ballots and machine counting (not voting).

Here are the talking points:

  • Old-school paper ballots were cast by hand and counted by hand, where the counters could change the ballot; for example, a candidate Smith partisan could invalidate a vote for Jones by adding a mark for Smith.
  • These and other paper ballot frauds in the 19th century drove adoption in the early 20th century of machine voting, on the big clunky "lever machines" with the satisfying ka-thunk-swish of the lever recording the votes and opening the privacy curtain.
  • However, big problem with machine voting -- no ballots! Once that lever is pulled, all that's left is a bunch of dials and counters on the backside being increased by one. In a close election that requires a re-count, there are no ballots to examine! Instead the best you could do is re-read each machine's totals and re-run the process of adding them all up in case there was an arithmetic error.
  • Also, the dials themselves, after election day but before a recount, were a tempting target for twiddling, for the types of bad actors who in the 19th century fiddled with ballot boxes.
  • Later in the 20th century, we saw a move to a combo system of paper ballots and machine counting, with the intent that machine counts would be more accurate than human counts and more resistant to human meddling, while the paper ballots remained for recounts, and for audits of the accuracy of the counting machinery.
  • Problem: these were the punch ballots of the infamous hanging chad.
  • Early 21st century: run from hanging chad to electronic voting machines.
  • Problem: no ballots! Same as before, only this time, the machines are smaller and much easier to fiddle with. That's "e-voting" but without ballots.
  • Since then, a flimsy paper record was bolted on to most of these systems to support recount and audit.
  • But the trend has been to go back to the combo system, this time with durable paper ballots and optical-scanning machinery for counting.
  • Is that e-voting? Well, it is certainly computerized counting. And the next wave is computer-assisted marking of paper ballots -- particularly for voters with disabilities -- but with these machine-created ballots counted the same as hand-marked ballots.

Bottom line: whether or not you call it e-voting, so long as there are both computers and human-countable durable paper ballots involved, the combo provides the best assurance that neither humans nor computers are mis-counting or interfering with voters casting ballots.

-- EJS

PS: If you catch us on HP online, please let us know what you thought!


Election Tech "R" Us - and Interesting Related IP News

Good Evening-- On this election night, I can't resist pointing out the irony of the USPTO's news of the day for Election Day earlier: "Patenting Your Vote," a nice little article about patents on voting technology.  It's also a nice complement to our recent posting on the other form of intellectual property protection on election technology -- trade secrets.  In fact, there is some interesting news of the day about how intellectual property protections won't (as some feared) inhibit the use of election technology in Florida.

For recent readers, let's be clear again about what election technology is, and our mission. Election technology is any form of computing -- "software 'n' stuff" -- used by election officials to carry out their administrative duties (like voter registration databases), or by voters to cast a ballot (like an opscan machine for recording votes off of a paper ballot), or by election officials to prepare for an election (like defining ballots), or to conduct an election (like scanning absentee ballots), or to inform the public (like election results reporting). That covers a lot of ground for "election technology."

With that definition, it's reasonable to say that "Election Technology 'R' Us" is what the TrustTheVote Project is about, and why the OSDV Foundation exists to support it.  And about intellectual property protection?  I think we're clear on the pros and cons:

  • CON: trade secrets and software licenses that protect them. These create "black box" for-profit election technology that seems to decrease rather than increase public confidence.
  • PRO: open source software licenses. These enable government organizations to [A] adopt election technology with a well-defined legal framework, without which the adoption cannot happen; and [B] enjoy the fruits of the perpetual harvest made possible by virtue of open source efforts.
  • PRO: patent applications on election technology.  As in today's news, the USPTO can make clear which aspects of voting technology can or can't be protected with patents that could inhibit election officials from using the technology, or require them to pay licensing fees.
  • ZERO SUM: granted patents on techniques or business processes (used in election administration or the conduct of elections) in favor of for-profit companies.  Downside: can increase costs of election technology adoption by governments. Upside: if the companies do have something innovative, they are entitled to I.P. protection, and it may motivate investment in innovation.  Downside: we haven't actually seen much innovation by voting system product vendors, or contract software development organizations used by election administration organizations.
  • PRO: granted patents to non-profit organizations.  To the extent that there are innovations that non-profits come up with, patents can be used to protect the innovations so that for-profits can't nab the I.P., and charge license fees back to governments running open source software that embodies the innovations.

All that stated, the practical upshot as of today seems to be this: there isn't much innovation in election technology, and that may be why for-profits try to use trade secret protection rather than patents.

That underscores our practical view at the TrustTheVote Project: a lot of election technology isn't actually hard, but rather simply detailed and burdensome to get right -- a burden beyond the scope of all but a few do-it-ourself elections offices' I.T. groups.

That's why our "Election Technology 'R' Us" role is to understand what the real election officials actually need, and then to (please pardon me) "Git 'er done."

What we're "getting done" is the derivation of blue prints and reference implementations of an elections technology framework that can be adopted, adapted, and deployed by any jurisdiction with common open data formats, processes, and verification and accountability loops designed-in from the get-go.  This derivation is based on the collective input of elections experts nationwide, from every jurisdiction and every political process point of view.  And the real beauty: whereas no single jurisdiction could possibly ever afford (in terms of resources, time or money) to achieve this on their own, by virtue of the collective effort, they can because everyone benefits -- not just from the initial outcomes, but from the on-going improvements and innovations contributed by all.

We believe (and so do the many who support this effort) that the public benefit is obvious and enormous: from every citizen who deserves to have their ballot counted as cast, to every local election official who must have an elections management service layer with complete fault tolerance in a transparent, secure, and verifiable manner.

From what we've been told, this certainly lifts a load of responsibility off the shoulders of elections officials and allows it to be more comfortably distributed.  But what's more, regardless of how our efforts may lighten their burden, the enlightenment that comes from this clearinghouse effect is of enormous benefit to everyone by itself.

So, at the end of the day, what we all benefit from is a way forward for publicly owned critical democracy infrastructure.  That is, that "thing" in our process of democracy that causes long lines and insecurities, which the President noted we need to fix during his victory speech tonight.  Sure, it's about a lot of process.  But where there will inevitably be technology involved, well, that would be the TrustTheVote Project.

GAM|out


At the Risk of Running off the Rails

So, we have a phrase we like to use around here, borrowed from the legal academic world.  Used to describe an action or conduct when analyzing a nuance of tort negligence, it is the phrase "frolic and detour."  I am taking a bit of a detour and frolicking in an increasingly noisy element of explaining the complexity of our work here.  (The detour comes from the fact that as "Development Officer" my charge is ensuring the Foundation and its projects are financed, backed, supported, and succeed in adoption.  The frolic is the commentary below about software development methodologies, although I am not currently engaged in or responsible for technical development outside of my contributions in UX/UI design.)  Yet I won't attempt to deny that this post is also a bit of promotion for our stakeholders -- elections IT officials who expect us to address their needs for formal requirements, specifications, benchmarks, and certification, while embracing the agility and speed of modern development methodologies.

This post was catalyzed by chit-chat at dinner last evening with an energetic technical talent who is jacked-up about the notion of elections technology as open source infrastructure.  Frankly, in 5 years we haven't met anyone who wasn't jacked-up about our cause, and their energy is typically around "damn, we can do this quick; let's git 'er done!"  But it is at about this point that the discussion always seems to get a bit sideways.  Let me explain.

I guess I am exposing a bit of old school here, but having had formal training in computer systems science and engineering (years ago), I believe data modeling -- especially for database-backed enterprise apps -- is an absolute imperative.  And the stuff of elections systems is serious technology, requiring a significant degree of fault tolerance, integrity and verification assurance, and, perhaps most important, a sound data model.  And modeling takes time and requires documentation, both of which are nearly antithetical to today's pop culture of agile development.

Bear in mind, the TTV Project embraces agile methods for UX/UI development efforts. And there are a number of components in the TTV elections technology framework that do not require extensive up-front data modeling and can be developed purely in an iterative environment.

However, we claim that data modeling is critical for certain enterprise-grade elections applications because (as many seasoned architects have observed): [a] the data itself has meaning and value outside of the app that manipulates it, and [b] scalability requires a good DB design because you cannot just add scalability in later.  The data model or DB design defines the structure of the database and the relationships between the data sets; it is, in essence, the foundation on which the application(s) are built.  A solid DB design is essential to achieving a scalable application.  Which leads to my lingering question: how do agile development shops design a database?

I've heard the "Well, we start with a story..." approach.  And when I ask those whom I really respect as enterprise software architects with real DB design chops -- people who also respect and embrace agile methodologies -- they tend to express reservations about the agile mindset being boorishly applied to truly scalable, enterprise-grade relational DB design, the kind that yields a well-performing application and sound data integrity.

Friends, I have no intention of hating on agile principles or lightweight development methods -- they have an important role in today's application software development space and an important role here at the Foundation.  But at the same time, I want to try to explain why we cannot simply "bang out" new elections apps for ballot marking, tabulation, or ballot design and generation in a series of sprints and scrums.

First, in all candor, I fear this confusion rests in the reality that fewer and fewer developers today have had a complete computer science education, and so cannot really claim to be disciplined software engineers or architects.  Many (not all) have just hacked with, and taught themselves, development tools because they built a web site or implemented a digital shopping bag for a friend (much like the well-intentioned developer my wife and I met last evening).

Add in the fact that the formality and discipline of compiled code has given way to the rapid-prototyping benefits of interpreted code.  And in the process of this modern training in software development (almost exclusively for the sandbox of the web browser as the UX/UI vehicle), what has been forgotten is that data modeling exists not because it creates overhead and delays, but because it removes such impediments.

Look at this another way.  I like to use building analogies -- perhaps because I began my collegiate studies long ago in architectural engineering before realizing that computer graphics would replace drafting.  There is a reason we spend weeks, sometimes months, traveling past large holes in the ground with towers of re-bar, forms, and concrete pouring, without any clue of what really will stand there once finished.  And yet later, as the skyscraper takes form, the speed with which it comes together seems to accelerate almost weekly.  Without that foundation carefully laid, the building cannot stand for any extended period of time, let alone bear the dynamic and static weights of its appointments, systems, and occupants.  So too is this the case with complex, highly scalable, fault-tolerant enterprise software -- without the foundation of a solid data model, the application(s) will never be sustainable.

I admit that I have been out of production-grade software development (i.e., in-the-trenches coding, compiling, linking, loading, dealing with lint, and running in debug mode) for years, but I can still climb on the bike and turn the pedals.  The fact is, data flow and data model could not be more different, and the former cannot exist without the latter.  It has long been understood, and demonstrated many times, that one cannot create a data flow out of nothing.  There has to be a base model as the foundation for one or more data flows, each mapping to its application.  Yet in our discussion, punctuated by a really nice wine and great food, this developer seemed to want to dismiss modeling as something that can be done later... perhaps like refactoring (!?)

I am beginning to believe this fixation of modern developers on "rapid," non-data-model development is misguided, if not dangerous for its latent, time-shifted costs.

Recently, a colleague at another company was involved with the development of a system where no time whatsoever was spent on data model design.  Indeed, the screens started appearing in record time.  The UX/UI was far from complete, but usable.  And the team was cheered as having achieved great "savings" in the development process.  However, when it came time to expand and extend the app with additional requirements, the developers waffled and explained they would have to recode the app in order to meet the new process requirements.  The data was unchanged, but the processes were evolving.  The balance of the project ground to a halt amid the dismissal of the first team, arguments about why up-front requirements planning hadn't been done, and the scramble to figure out whom to hire to fix it.

I read somewhere of another development project where the work was getting done in 2-week cycles.  They were about 4 cycles away from finishing when a task called "concurrency" appeared on the tracker schedule for the penultimate cycle.  The project subsequently imploded because all of the code had to be refactored (a core entity actually turned out to be two entities).  Turns out that a lack of up-front modeling led to this sequence of events, but unbelievably, the (agile) development firm working on the project spun it as a "positive outcome;" that is, they explained, "Hey, it's a good thing we caught this a month before go-live."  Really?  Why wasn't it caught before that pungent smell of freshly cut code started wafting through the lab?

Spin doctoring notwithstanding, the scary thing to me is that performance and concurrency problems caused by a failure to understand the data are being caught far too late in the Agile development process, which makes it difficult if not impossible to make real improvements.  In fact, I fear that many agile developers operate on the misguided principle that every data model should be:

create table DATA
 (key INTEGER,
 stuff BLOB);

Actually, we shouldn't joke about this.  That idea comes from a scary reality: a DBA (database administrator) friend tells of a development team he is interacting with on an outsourced state I.T. project that has decided to migrate a legacy non-Oracle application to Oracle using precisely this approach.  Data that had been stored as records in old ISAM-type files will be stored in Oracle as byte sequences in BLOBs, with an added surrogate, generated, unique primary key.  When he asked what the point of that approach was, no one at the development shop could give him a reasonable answer other than "in the time frame we have, it works."  It raises the question: what do you call an Oracle database where all the data in it is invisible to Oracle itself and cannot be accessed or manipulated directly using SQL?  Or said differently, would you call a set of numbered binary records a "database," or just "a collection of numbered binary records"?
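By way of contrast, and purely as an illustrative sketch (I don't know the actual legacy record layout or table names), here is roughly what that migration buys you:

-- Hypothetical sketch of the "migration": each old ISAM record becomes
-- an opaque byte sequence with a generated surrogate key.
create table LEGACY_RECORD
  (record_id INTEGER PRIMARY KEY,   -- surrogate key, generated
   payload   BLOB);                 -- the old record, byte-for-byte

-- Any real question about the data is now unanswerable in SQL, e.g.:
--   select count(*) from LEGACY_RECORD where precinct = '12A';
-- There is no precinct column; that field exists only inside the blob,
-- visible to legacy code that knows the byte layout, but not to Oracle.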

In another example of the challenges of agile development in a database-driven app world, a DBA colleague describes being brought in on an emergency contract basis to an Agile project under development on top of Oracle, to deal with "performance problems" in the database.   Turns out the developers were using Hibernate and apparently relied on it to create their tables on an as-needed basis, simply adding a table or a column in response to incoming user requirements and not worrying about the data model until it crawled out of the code and attacked them.

This sort of approach to app development is what I am beginning to see as "hit and run."  Sure, it has worked so far in the web app world of start-ups: get it up and running as fast as possible, then exit quickly and quietly before they can identify you as triggering the meltdown when scale and performance start to matter.

After chatting with this developer last evening (and listening to many others over recent months lament that we're simply moving too slowly), I am starting to think of Agile development as a methodology of "do anything rather than nothing, regardless of whether it's right."  And this may be to support the perception of rapid progress: "Look, we developed X components/screens/modules in the past week."  Whether any of this code will stand up to a production performance environment is to be determined later.

Another Agile principle is incremental development and delivery.  It's easy for a developer to strip out a piece of poorly performing code and replace it with a chunk that offers better or different capabilities.  Unfortunately, you just cannot do this in a database.  For example, you cannot throw away the old data in the old tables and simply create new, empty tables; the data already in the database has to come along for the ride.
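To illustrate, here is a minimal sketch (assuming a hypothetical VOTER table, purely for the sake of example) of what an incremental change looks like when the existing data matters:

-- Hypothetical example: a new requirement adds a middle name to voter records.
-- We cannot drop VOTER and recreate it empty; the registrations already in it
-- are the whole point.  The change has to be made in place:
alter table VOTER add middle_name VARCHAR(50);

-- ...followed by a migration of the rows that are already there, e.g.:
update VOTER
   set middle_name = ''          -- or a value derived from legacy data
 where middle_name IS NULL;

Every such change drags the accumulated data along with it, which is exactly why the shape of that data deserves thought before the first sprint, not after the tenth.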

The TrustTheVote Project continues to need the kind of talent this person exhibited last evening at dinner.  But her zeal aside (and obvious passion for the cause of open source in elections), and at the risk of running off the (Ruby) rails here, we simply cannot afford to have these problems happen with the TrustTheVote Project.

Agile methodologies will continue to have their place in our work, but we need to be guided by some emerging realities and appreciate that, however fast someone wants to crank out a poll book app or a ballot marking device, we cannot afford to take short-cuts simply for the sake of speed.  Some may accuse me of being a waterfall Luddite in an agile world; however, I believe there has to be some way to mesh these things, even if it means requirements scrums, data modeling sprints, or animated data models.

Cheers GAM|out

Comment

3 Comments

An Independence Holiday Reflection: IP Reform and Innovation in Elections Technology

On this Independence Day I gave some reflection to the intentions of our founding fathers, and how that relates to our processes of elections and the innovations we should strive for to ensure accuracy, transparency, verification, and security.  And as I thought about this more while gazing out at one of the world’s most precious natural resource treasures and typing this post, it occurred to me that innovation in elections systems is largely around the processes and methods more than any discrete apparatus. That’s when the old recovering IP lawyer in me had an “ah ha” moment.   And that’s what this long-winded post is about—something that actually should matter to you, a reader of this forum about our on-going effort to make elections and voting technology critical democracy infrastructure.

You see, in America, innovation has long been catalyzed by intellectual property law, specifically patents.

And as you probably also know, patent law is going through major reform efforts in Congress as you read this.  Now here is what you may have missed, and what dawned on me as I reflected this Fourth of July holiday on the efforts of the TrustTheVote Project and innovations in voting technology: there is a bad ingredient in the current patent reform legislation that threatens not only to undermine the very foundations on which patent law catalyzes innovation, but also to undermine some very basic ideals our founding fathers had in mind as this nation was born.  Bear with me while I unravel this for you; I think it will grab your attention.

So it starts with Members of Congress debating patent reform through the America Invents Act (H.R. 1249).  You see, few may be aware of the role that business method patents (BMPs) play in the political process, especially during elections.  BMPs have been used to protect innovations designed to improve the operation of the political process.  And it is not unreasonable to assume that the TrustTheVote Project itself is working on innovations that should well qualify for patent protection, resulting in patents whose ownership we would assign to the general public.  Weakening the protection for such innovations may in turn reduce the motivation for companies and individuals to continue innovating in these technologies.  And it certainly could impact our work as well.  But this is exactly what Section 18 of H.R. 1249, the America Invents Act of 2011, as currently drafted, would likely do.

There is a long history of inventors using BMPs to protect their innovations related to voting systems.  As such systems have developed, from paper voting, to electronic voting, to on-line voting, companies both large and small have continued to innovate, and to protect their new technologies via the patent system, often through the use of BMPs.  Let’s look at just two major areas.

  1. Electronic Voting Systems - it is estimated that between 20% and 30% of American voters now cast their ballots electronically, chiefly via Direct Recording Electronic (DRE) systems. Yet these systems have encountered many problems related to their ability to record votes accurately, verifiably, and securely.  In an effort to remedy these problems (but largely to no demonstrable gain), companies have developed technologies designed to overcome these shortcomings, and have protected these technologies with a series of patents, many of which are classed as BMPs. Organizations with numerous BMPs related to improving electronic voting systems include large companies such as IBM, Accenture and Pitney Bowes, and smaller specialist companies such as Hart InterCivic, Avante International and Smartmatic.
  2. Internet Voting Systems – in DRE systems, voters typically have to be physically present at a polling station in order to cast their ballots.  The next logical progression is for voters to cast their ballots remotely, for example via the Internet.  For reasons repeatedly explained here and elsewhere, this is just not a good idea given today's "Internet."  But in any event, such ill-advised efforts require a whole new level of network security in order to ensure that votes are recorded both accurately and in a verifiable fashion (both being extremely difficult to do, and it's unclear that any system, patented or not, exists that can do so, but bear with me for the sake of argument).  A search of the patents in this area, however, reveals that companies such as Accenture, Hart InterCivic, Scytl, and Avante have BMPs describing so-called Internet voting.  These BMPs sit alongside their earlier BMPs covering DRE systems, as these companies develop successive generations of voting technology.

In short, companies are continuing to seek patent protection for innovations in this sector, and business methods continue to be a vehicle for doing so.

Section 18 of H.R. 1249, the America Invents Act of 2011, aims to give one special interest—banks—a "get out of jail" card.  As I read it, the provision does this: if you sue a bank for infringement of a business method patent, the bank can stay the court litigation and take your patent to the USPTO for a special post-grant review (PGR) process.  If the bank loses the first round at the office, it has an automatic appeal to the board within the office.  If it loses there, it has another automatic appeal to the Court of Appeals for the Federal Circuit (CAFC), the sole appeals court for patent cases.  This process takes between 4 and 7 years based on the existing reexamination systems at the office.  And the process is special in that the bank can bring in additional forms of prior art not permitted in other reexamination systems.

There is a good reason why the range of prior art that can be used in court to challenge a patent is not available at the office.  A judge, jury, rules of evidence, cross-examination, and other time-tested features of court do not exist at the Patent and Trademark Office.  A patent examiner does not have the experience, procedures, institutional knowledge, or time to ascertain the veracity (or fraudulence) of the prior art.  More importantly, examiners do not have the resources to deal with the increased volume of art.  Worse, a Section 18 proceeding can be conducted regardless of whether the patent has already been deemed valid in a prior proceeding.

And on this Independence Day Holiday, it occurs to me this violates separation of powers and should be unconstitutional.

Before I explain how I can envision this impacting what we're doing, let me state that I will not delve into the debate over BMPs, because it devolves into a religious war, and one in which I, as both a computer scientist and an IP lawyer, have actually shifted viewpoints from one side to the other over the years.  But suffice it to say that there are examples of useful business method patents that would be eliminated by Section 18 of the patent reform legislation winding its way through Congress.  We are all very familiar with one example: SSL.  Indeed, the secure sockets layer is covered by two BMPs.  Everyone in the world, each day, touches this patented innovation.  If Section 18 had been law in 1995, then Visa and Mastercard, with their SET proposal, could have stalled Netscape and SSL in the USPTO for many years.  Microsoft and CommerceNet, with SHTTP, could have done the same.  The world would be worse off with competing security protocols.  Ecommerce itself may not have taken off at all; at the very least its growth would have been stunted.

It is worth noting that since about the year 2000, the USPTO has employed a "second pair of eyes" process to examine BMP applications twice.  Moreover, given the public acrimony over BMPs, the USPTO is very slow to grant BMPs, and the allowance rate is 20% lower than in other art areas.  And recently the Supreme Court, in its Bilski decision, affirmed that business methods are not categorically unpatentable.

Yet, in spite of the acrimony and the higher threshold to get a BMP, many companies large and small innovate and invest in BMPs.  The top 20 owners of BMPs are Fortune 100 companies or respectable startups.  Non-practicing entities comprise a very small portion of the ownership pool of BMPs.  And in considering the innovations resident in the open source elections technology framework we're developing, we too may find ourselves in the middle of the BMP and Section 18 crossfire.

The challenge is that, as a non-profit (pending 501(c)(3)) organization, we cannot and do not engage in the political process of legislation or lobbying.  Yet we're wary of where this is going, and I think you should be too.  You see, policy makers don't often have the time to consume, absorb, and digest the data.  They prefer anecdotes, headline-grabbing stories, one-page summaries, and talking points.

So let me turn to our thinking about BMPs and the impact of Section 18.

As mentioned above, without debating the basis for BMPs we at the TrustTheVote Project have come to accept that they are an essential part of technology IP.  One reason is that the scope for IT innovation far exceeds the scope for inventing new technology, and includes innovation in the use of existing technology for new purposes.  That's been increasingly true for some 20 years, with the scope of the online world coming to encompass so many areas of human activity.  One of the more recent advances is the use of IT innovations for public benefit.  I'll explain that in terms of elections and political activity, but first let me give a general idea and one specific existing example.

In our experience with IT IP, a BMP can be used as a way to make a claim that "X has been used for many things before, but not in the area of Y; here is a way to use X for a particular purpose in the area of Y; this enables a new human activity Z."  Now, I could forgo that claim and limit myself to a claim about a Y-inspired extension of X, which might be a sufficiently significant extension to warrant a patent for a technical innovation; or it might not.

If I limited myself that way, then another party could claim the innovation of using that new method for a particular purpose Z.  So in general, I want to claim both, to protect the right to use X in Y for Z.

Here's a big idea: "protect" in the public benefit world means "anyone can use the method, not limited by a private or for-profit IP holder."  That applies whether or not my extension of prior X is sufficiently innovative by itself.

As an example of this idea, let’s return to SSL, the subject of very well known and high quality BMPs.  When SSL was invented, the use of cryptography for communication security was already well established, including the use of digital certificates to establish (a chain of) trust in the identity of parties communicating.  In fact, there were many examples of cryptographic protocols and communication protocols.  So for X, let's say "use of cryptographic protocols and communication protocols together for communication with security properties."

Now, SSL as a protocol may well have been sufficiently innovative to warrant patents of algorithms. But whether or not that was true, SSL was used for several purposes, including a particular kind of communication in which one party trusts a third to vouch for the second party's identity as being sufficiently established for a financial transaction. That's Y.  Z is "digital commerce" meaning financial transactions performed as part of exchange in which one party pays another party for goods and services – including digital goods and digital services.  Without X used for Y, digital commerce wouldn't exist, and many forms of digital services and digital goods simply would not be provided. With X used for Y, Z is enabled for the first time.  And I view Z -- digital commerce -- as a major public benefit, even if it was primarily for private for-profit commercial transactions.

The public benefit is a larger economy with the addition of digital commerce.

So far so good, but let's revisit the value of the BMP.  If it didn't exist, the holders of patents for X could effectively block Z, or insert themselves as intermediaries into every use of X in Y for Z, or X in A for B -- any use.  For example, IBM holds many patents on cryptographic protocols.  I don't know if those protocols and patents were sufficiently broad to cover the SSL protocol as an algorithm or apparatus.  But if that were so, and BMPs didn't exist, then IBM could have insisted that it be a party to every digital commerce transaction, only allowing transaction services by parties that made payments to IBM on terms dictated by IBM.  Any other parties would be barred from digital commerce.  Of course, that public benefit may be a matter of opinion on which many people would differ.

In elections and politics, public benefit may be clearer.

For a first example, consider technical innovations for online voter registration.  Such innovations might include the use of a "forms wizard" to help people follow complicated rules for filling out voter registration forms; digital means for capturing a signature for the form; digital transmission of the form itself, or its data; and more.  All these techniques have been invented before and used in other areas of human endeavor.  Adapting them for use in voter registration is probably not an adaptation that qualifies as an innovation.  But if one wants to ensure that the public is able to use IT implementations of online voter registration, a BMP can cover the use of forms wizards (or other X) for online voter registration (Y) to enable a more rapid and more widespread ability of citizens to vote (Z).  Many people would definitely regard that as a public benefit.  The BMP protects that benefit when the BMP holder permits anyone to use the business process, barring a patent holder for X (the specific IT technique) from claiming that online VR implementations infringe their patent.

I don't know who, if anyone, holds a patent relevant to the technology of the types of innovation in online VR that I refer to here.  However, I suspect that many would regard it as a public detriment for citizens to have to pay a for-profit company for the right to use an online VR service, or for local or state governments to have to pay for the privilege of operating such a service.

Other examples lie in the activities around political campaigns to form communities of supporters, organize volunteers, raise money, and so on.  The use of social media and other online technology has increased, and I expect it will continue to increase, enabling more citizens to participate more easily in the political process.  As in elections technology, such innovation is often the application of established technology for a new purpose.

BMPs can protect the right of political organizations to use such established technology.  I can easily imagine a PAC or other issues-based political organization building a membership organization that includes online interaction with members, including gaining and retaining credit card information for future contributions to the organization, or directly to a candidate or campaign. If I were a member of such an organization, I might expect to get an email about a new set of candidate reviews for candidates in an upcoming election.  I could go to the organization's web site and read up on candidates.  I could choose to make a donation directly to the candidate's campaign, immediately, with a single click of a "give $100" button in the candidate review.

Suppose that there were a private company with a patent on making payments in digital commerce using a similar method.  Without a BMP for the process of a citizen contributing to a campaign as part of a Web session with a web site of an issues-based voluntary membership association, that patent holder could insist that it be the sole conduit of such contributions.  I suspect most people would view it as a public detriment either to pay a for-profit company for the privilege of a quick and easy campaign contribution, or to use a more cumbersome and error-prone method for free.

Worse, one could imagine selective enforcement of the patent, or selectively preferential licensing agreements, to make the quick and easy contribution method available only to political campaigns that the patent holder favored.

The same selective approach could be applied to any part of the political process.  Back to voter registration, it's possible that a patent holder would choose to license its innovations selectively, only to those local election officials in locales where the majority of unregistered voters are perceived as friendly to the politics of the patent holder.

A selective approach could also be applied to disputes.  For example, a financial transactions company might be able to stop a political campaign from collecting online contributions in a certain manner during the time in which the dispute is being resolved.  If the time frame stretches long enough, it doesn't matter if the campaign wins the dispute—the election will already be over and the opportunity to raise and use funds will be gone.

And these types of scenarios could fit pretty much any use of social media technology, where a patent holder of a purely technical patent could assert the right to constrain the use of the technique in any field of human activity, including elections or politics.

These examples may be fanciful, or not based on a real scenario where an election-relevant or politics-relevant technology-using process is the subject of a BMP that involves a particular use of a particular underlying technology for enabling or automating the process.  But I believe that the general benefit of BMPs would apply to real cases.

This may be a new idea—organizations with a public-benefit motivation wanting to ensure general use of technology-enabled innovations in electoral or political processes, rather than trying to control or reserve or profit from BMPs.  And it is certainly not what BMPs might have been intended for.  But I believe that BMPs could be used—and for all I know are already being used—for electoral or political processes.  It would be a shame, and a public detriment, if BMPs became less useful, either in general or in disputes with a particular class of organizations.  This might be counterintuitive, but as we see the growth of digital democracy, open government, online activism, and the like, it shouldn't come as a surprise that these new forms of technology-enabled human activity also create new uses for pre-existing IP protections that pre-date the existence of these evolving activities.

Setting aside the efficacy of BMPs and the related religious debates, I bet we can all agree that without BMPs, Goliath—IBM in my perhaps fanciful example above—can block the public, especially the little guy.  Section 18 in the patent bill gives banks a new tool, unique to banks, to stop David from getting his idea to market.  And this troubles me, for it moves us toward that proverbial slippery slope.

At the end of the day, Section 18 of H.R. 1249, the America Invents Act of 2011, is, frankly, akin to a government regime not granting a permit to open a business simply because one is from the wrong caste or religion or political party... and that is not the government of this nation, whose independence we celebrate today.  Yet it appears some special interests in patent reform may have a misguided view to the contrary.

Your ball GAM|out

3 Comments

Comment

Voting System (De)certification - A Way Forward? (2 of 2)

Yesterday I wrote about the latest sign of the downward spiral of the broken market in which U.S. local election officials (LEOs) purchase product and support from vendors of proprietary voting system products -- monolithic technology that is the result of years' worth of accretion, and that costs years and millions to test and certify for use -- including a current case where the process didn't catch flaws that may result in a certified product being de-certified and replaced by a newer system, at the LEOs' expense. Ouch! But could you really expect a vendor in this miserable market to give away a new product that it spent years and tens of millions to develop, to every customer of the old product, to whom the vendor had planned to sell upgrades -- just because of flaws in the old product? But the situation is actually worse: LEOs don't actually have the funding to acquire a hypothetical future voting system product in which the vendor was fully open about true costs, including

(a) certification costs, both direct (fees to VSTLs) and indirect (staff time), as well as

(b) development costs, including rigorously designed and documented testing.

Actually, development costs alone are bad enough, but certification costs make it much worse -- as well as creating a huge barrier to entry for anyone foolhardy enough to try to enter the market (or even stay in it!) and make a profit.

A Way Forward?

That double-whammy is why I and my colleagues at OSDV are so passionate about working to reform the certification process, so that individual components can be certified in far less time and for far less money than a mess o'code accreted over decades and including wads of interwoven functionality that might not even need to be certified! And then, of course, these individual components could also be re-certified for bug fixes by re-running a durable test plan that the VSTL created the first time around.  And that, of course, requires common data formats for inter-operation between components -- for example, between a PCOS device and a Tabulator system that combines and cross-checks all the PCOS devices' outputs, in order either to find errors and omissions or to produce a complete election result.

So once again, our appreciation to NIST, the EAC, and IEEE 1622 for actually doing the detailed work of hashing out these common data formats, which are the bedrock of inter-operation, which is the pre-requisite for certification reform, which enables reduction of certification costs, which might result in voting system component products being available at true costs that are affordable to the LEOs who buy and use them.

Yes, that's quite a stretch, from data standards committee work to a less broken market that might be able to deliver to customers at reasonable cost. But to replace a rickety old structure with a new, solid, durable one, you have to start at the bedrock, and that's where we're working now.

-- EJS

PS: Thanks again to Joe Hall for pointing out that the current potential de-certification and mandatory upgrade scenario (described in Part 1) illustrates the untenable nature of a market that would require vendors to pay for expensive testing and certification efforts, and also (as some have suggested) to forgo revenue when otherwise-for-pay upgrades are required because of defects in software.

Comment

Comment

Voting System (De)certification - Another Example of the Broken Market (1 of 2)

Long-time readers will certainly recall our view that the market for U.S. voting systems is fundamentally broken. Recent news provides another illustration of the downward spiral: the likely de-certification of a widely used voting system product from the vendor that owns almost three quarters of the U.S. market. The current stage of the story is that the U.S. Election Assistance Commission is formally investigating the product for serious flaws that led to errors of the kind seen in several places in 2010, and perhaps best documented in Cuyahoga County. (See:  "EAC Initiates Formal Investigation into ES&S Unity 3.2.0.0 Voting System".) The likely end result is the product being de-certified, rendering it no longer legal for use in many states where it is currently deployed. Is this a problem for the vendor? Not really. The successor version of the product is due to emerge from a lengthy testing and certification process fairly soon. Having the current product banned is actually a great tool for migrating customers to the latest product!

But at what cost, and to whom? The vendor will charge the customers (local election officials, or LEOs) for the new product, the same as it would have if the migration were voluntary and the old product version still legal. The LEOs will have to sign and pay for a multi-year service agreement. And they will have the same indirect costs of staff effort (at the expense of other duties like running elections, or getting enough sleep to run an election correctly), and direct costs for shipping, transportation, storage, etc. These are real costs! (Example: I've heard reports of some under-funded election officials opting not to use election equipment that they already have, because they have no funding for the expense of moving it from the warehouse to a testing facility and doing the required pre-election testing.)

Some observers have opined that vendors of flawed voting system products should pay: whether damages, or fines, or doing the migration gratis, or something. But consider this deeper question, from UCB and Princeton's Joe Hall:

Can this market support a regulatory/business model where vendors can't charge for upgrades and have to absorb costs due to flaws that testing and certification didn't find? (And every software product, period, has them).

The funding for a high level of quality assurance has to come from somewhere, and that's not voting system customers right now. Perhaps we're getting to the point where the amount of effort it takes to produce a robust voting system and get it certified -- at the vendor's expense -- creates a cost that customers are not willing or able to pay when the product gets to market.

A good question, and one that illustrates the continuing downward spiral of this broken market. The cost to vendors of certification is large, and you can't really blame a vendor for the sort of overly rapid development, marketing, and sales that leads to the problems being investigated. These folks are in this business to make a profit, for heaven's sake; what else could we expect?

-- EJS

PS - Part Two, coming soon: a way out of the spiral.

Comment

Comment

2011: A Look Ahead; Another Glance Back

As Greg said in his New Year's posting, we've been planning a variety of activities for 2011, and reflecting on what we did in 2010, what remains to do, and what we can do better. But at the risk of boring you with a laundry list, I wanted to provide some additional detail on some of the 2010 activities that Greg mentioned. Many of the items listed below serve to indicate how much of the work in election technology (ours and others') has to get very detail-oriented in order to actually deliver.

Voter Registration

  • Released version 2.0 of the TTV Online Voter Registration tool.
  • Put OVRv2 into production, operated by Open Source Labs and managed by RockTheVote.
  • Under RTV's management, OVR has served well over 200,000 registrants for the 2010 election cycle, nearing the quarter-million total.

Election Management System

  • First-ever open source election management software deployed for use in DC and VA overseas voting projects in November 2010 elections.
  • TTV Election Manager supports DC legacy data formats, VIP standard election data for VA, DC-specific jurisdiction definitions, and first-ever new VA custom jurisdictions for local referenda.
  • First-ever system for computing and proofing an entire state's worth of election data and ballot definitions.

Ballot Design

  • First-ever open source paper ballot design system supporting local and state-specific ballot formats and composition rules for multiple jurisdictions, including DC, VA, and NH.
  • For VA statewide election, over 2,700 locality-specific ballots generated, including first-ever state-law compliant ballots for special classes of non-local UOCAVA voters.
  • First-ever generation of dual-use ballot documents, the same document marked either digitally or physically to become the same legal paper ballot of record.

Overseas Ballot Distribution

  • Fully localized ballots delivered to thousands of UOCAVA voters worldwide
  • Data integration with state voter record databases, ensuring every eligible UOCAVA voter gets their correct ballot
  • Public test of Digital Ballot Return - a controversial activity with many lessons learned on all sides, but we're proud to have supported the D.C. BOEE in a rare example of responsible open public testing that should be the model for any assessment of new election technology.

Open-Source Software License

  • Released the OSDV Public License, or OPL, the first open source license specifically designed to aid state and local governments in acquiring open-source technology.
  • Published the OPL Rationale document, explaining the goals of the OPL and the reasoning behind each element of the OPL as meeting government needs for software licensing.

Public Speaking and Education

As you can see from these highlights -- the tip of the proverbial iceberg -- 2010 was a busy year for us. And 2011 is shaping up to be even busier!

-- EJS

Comment

Comment

Tabulator Troubles in Colorado

More tabulator troubles! In addition to the continuing saga in New York that I wrote about earlier, there is now another tabulator-related situation in Colorado. The news report from Saguache County, CO is about:

a Nov. 5 “retabulation” of votes cast in the Nov. 2 election Friday by Myers and staff, with results reversing the outcome ...

In brief, the situation is exactly about the "tabulation" part of election management, that I have been writing about. To recap:

  • In polling places, there are counting devices that count up votes from ballots, and spit out a list of vote-counts for each candidate in each contest, and each option in each referendum. This list is in the form of a vote-count dataset on some removable storage.
  • At county election HQ, there are counting devices that count up vote-by-mail ballots and provisional ballots, with the same kind of vote-counts.
  • At county election HQ, "tabulation" is the process of aggregating these vote-counts and adding them up, to get county-wide vote totals.

In Saguache, election officials did a tabulation run on election night, but the results didn't look right. Then on the 5th, they did a re-run on the "same ballots," but the results were different, and it appears to some observers that some vote totals may have been overwritten. Then, on the 8th, with another re-try, a result somewhat like the one in NY:

... the disc would not load and sent an error message

What this boils down to for me is that current voting system products' Tabulators are not up to doing some seemingly simple tasks correctly when operated by ordinary election officials. I am sure they work right in testing situations that include vendor staff; but they must also work right in real life with real users. The tasks include (a rough sketch follows the list):

  • Import an election definition that specifies how many counting devices are being used for each precinct, and how many vote-count datasets are expected from them.
  • Import a bunch of vote-count datasets.
  • Cross-check to make sure that all expected vote-count datasets are present, and that there are no unexpected vote-counts.
  • Cross-check each vote-count dataset to make sure it is consistent with the election definition.
  • If everything cross-checks correctly, add up the counts to get totals, and generate some reports.
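To make that a little more concrete, here is a minimal sketch of the cross-check and roll-up steps, using hypothetical tables (EXPECTED_DEVICE for the election definition's device list, VOTE_COUNT for the imported datasets) -- purely illustrative, not any vendor's actual schema:

-- Cross-check 1: devices the election definition expects but that have not reported.
select e.device_id
  from EXPECTED_DEVICE e
 where not exists
       (select 1 from VOTE_COUNT v where v.device_id = e.device_id);

-- Cross-check 2: vote-count datasets from devices the definition does not expect.
select distinct v.device_id
  from VOTE_COUNT v
 where not exists
       (select 1 from EXPECTED_DEVICE e where e.device_id = v.device_id);

-- Only if both checks come back empty: add up the county-wide totals.
select contest, choice, sum(vote_count) as total
  from VOTE_COUNT
 group by contest, choice;

The point is not the particular queries; it's that each step is simple enough to be checked by an observer, which is exactly the property the products in question seem to lack.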

That's not exactly dirt-simple, but it also sounds to me like something that could be implemented in well-designed software that is easy for election officials to use, and easy for observers to understand. And that understanding is critical, because without it, observers may suspect that the election has been compromised, and some election results are wrong. That is a terrible outcome that any election official would work hard to avoid -- but it appears that's what is unfolding in Saguache. Stay tuned ...

-- EJS

PS: Hats off to the Valley Courier's Teresa L. Benns for a truly excellent news article! I have only touched on some of the issues she covered. Her article has some of the best plain-language explanation of complicated election stuff that I have ever read. Please take a minute to at least scan her work. - ejs

Comment

1 Comment

Tabulator Technology Troubles

In my last post, I recounted an incident from Erie County, NY, but deferred to today an account of what the technology troubles were that prevented the routine use of a Tabulator to create county-wide vote totals by combining count data from each of the opscan paper ballot counting devices. The details are worth considering as a counter-example of technology that is not transparent, but should be. As I understand the incident, it wasn't the opscan counting systems that malfunctioned, but rather the portion of the voting system that tabulates the county-wide vote totals. As I described in an earlier post, the ES&S system has no tabulator per se, but rather some aggregation software that is part of the larger body of Election Management System (EMS) software that runs on an ordinary Windows PC. Each opscan device writes data to a USB stick, and election officials aggregate the data by feeding each stick into the EMS. The EMS is supposed to store all the data on the stick, and add up all the opscan machines' vote counts into a vote total for each contest.

Last week, though, when Erie County officials tried to do so, the EMS rejected the data sticks. Election officials had no way to use the sticks to corroborate the vote totals that they had produced by visually examining the election-night paper tapes from the 130 opscan devices. Sensible questions: Did the devices' software err in writing the data to the sticks? If so, might the tapes be incorrect as well? Is the data still there? It turns out that the cause was a bug in the EMS software, not the devices, and in fact the data on the sticks was just fine. With a workaround on the EMS, the data was extracted from the sticks and used as planned. Further, the workaround did not require a bug fix to the software, which would have been illegal. Instead, some careful hand-crafting of EMS data enabled the software to stop choking on the data from the sticks.

Now, I am not feeling 100% great about the need for such hand-crafting, or indeed about the correctness of the totals produced by a voting system operating outside of its tested, ordinary usage. But some canny readers are probably wondering about a simpler question. If the data was on the sticks, why not simply copy the files off the sticks using a typical PC, and examine the contents of the files directly? With 40-odd contests countywide and 100-odd sticks and paper tapes, it's not that much work to just look at them and see whether the numbers on each stick match those on the tapes. Answer: the voting system software is set up to prevent direct examination, that's why! The vote data can only be seen via the software in the EMS. And when that software glitches, you have to wonder about what you're seeing.

This is at least one area where better software design can lead to a higher-confidence system: write-once media for storing each counting device's tallies; use of public standard data formats so that anyone can examine the data; use of human-usable formats so that anyone can understand the data; use of a separate, single-purpose tabulator device that operates autonomously from the rest of the voting system; and publication of the tally data and the tabulator's output data, so that anyone can check the correct results either manually or with their choice of software. At least that's the TrustTheVote approach that we're working out now.

-- EJS

1 Comment