Viewing entries in: Architecture

Shifting the Conversation from “Shoring-up” to “Re-engineering”

This afternoon a bipartisan group of authorities on election administration and cybersecurity presented a Congressional Briefing on the current election security challenges facing federal and state policymakers.  While it was a worthy discussion, I keep having this sinking feeling that we're simply re-arranging furniture on the deck of a large cruise ship steaming toward an iceberg in the dark…

The Technical Challenges Facing iVoting

iVoting faces several technological challenges that must be overcome before it can be implemented.  Most election officials and experts in the field are hesitant or skeptical about implementing iVoting with current Internet and Web technology.  Even when we view iVoting as simply returning a digital absentee ballot, or the digital equivalent of voting by mail, as I explain in this installment of my series, there are still substantial innovations required....

Advancing Election Data Standards: View From the Trenches

Elections data standards are essential to delivering real innovation.  The annual Election Data Standards meeting opened today in Los Angeles, CA.  We thought we'd give you an overview of just what the heck this is about and why it's essential to creating a voting experience that's easy, convenient, and dare we say delightful.  Dry?  Kinda.  But it's a peek at the real, in-the-trenches work we're doing.  Yep.

At the Risk of Running off the Rails

So, we have a phrase we like to use around here, borrowed from the legal academic world.  Used to describe conduct when analyzing a nuance of tort negligence, the phrase is "frolic and detour."  I am taking a bit of a detour and frolicking in an increasingly noisy aspect of explaining the complexity of our work here.  (The detour comes from the fact that as "Development Officer" my charge is ensuring the Foundation and projects are financed, backed, supported, and succeed in adoption.  The frolic is in the form of commentary below about software development methodologies, although I am not currently engaged in or responsible for technical development outside of my contributions in UX/UI design.)  Yet, I won't attempt to deny that this post is also a bit of promotion for our stakeholders -- elections IT officials who expect us to address their needs for formal requirements, specifications, benchmarks, and certification, while embracing the agility and speed of modern development methodologies.  This post was catalyzed by chit-chat at dinner last evening with an energetic technical talent who is jacked-up about the notion of elections technology as open source infrastructure.  Frankly, in 5 years we haven't met anyone who wasn't jacked-up about our cause, and their energy is typically of the form "damn, we can do this quick; let's git 'er done!"  But it is at about this point that the discussion always seems to get a bit sideways.  Let me explain.

I guess I am exposing a bit of old school here, but having had formal training in computer systems science and engineering (years ago), I believe data modeling -- especially for database-backed enterprise apps -- is an absolute priority.  And the stuff of elections systems is serious technology, requiring a significant degree of fault tolerance, integrity and verification assurance, and, perhaps most important, a sound data model.  And modeling takes time and requires documentation, both of which are nearly antithetical to today's pop culture of agile development.

Bear in mind, the TTV Project embraces agile methods for UX/UI development efforts. And there are a number of components in the TTV elections technology framework that do not require extensive up-front data modeling and can be developed purely in an iterative environment.

However, we claim that data modeling is critical for certain enterprise-grade elections applications because (as many seasoned architects have observed): [a] the data itself has meaning and value outside of the app that manipulates it, and [b] scalability requires a good DB design, because you cannot just add in scalability later.  The data model or DB design defines the structure of the database and the relationships between the data sets; it is, in essence, the foundation on which the application(s) are built.  A solid DB design is essential to achieve a scalable application.  Which leads to my lingering question:  How do agile development shops design a database?
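To make "up-front modeling" concrete, here is a minimal sketch -- Python with SQLite, and a deliberately simplified, hypothetical schema, not the actual TTV data model -- of what that foundation looks like: the entities and their relationships are declared once, in one place, before any application code leans on them.

  import sqlite3

  # Hypothetical, simplified election schema -- designed before any app code.
  schema = """
  CREATE TABLE precinct (
      precinct_id INTEGER PRIMARY KEY,
      name        TEXT NOT NULL
  );
  CREATE TABLE contest (
      contest_id INTEGER PRIMARY KEY,
      title      TEXT NOT NULL
  );
  CREATE TABLE ballot_style (
      style_id    INTEGER PRIMARY KEY,
      precinct_id INTEGER NOT NULL REFERENCES precinct(precinct_id)
  );
  -- Many-to-many: which contests appear on which ballot style.
  CREATE TABLE style_contest (
      style_id   INTEGER NOT NULL REFERENCES ballot_style(style_id),
      contest_id INTEGER NOT NULL REFERENCES contest(contest_id),
      PRIMARY KEY (style_id, contest_id)
  );
  """

  conn = sqlite3.connect(":memory:")
  conn.executescript(schema)

Any application that later touches this data -- ballot design, ballot generation, tabulation -- builds on the same declared relationships, which is exactly why the data has meaning and value outside any one app.  Which brings me back to the question of how agile shops get there.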

I've heard the "Well, we start with a story..." approach.  And when I ask those whom I really respect as enterprise software architects with real DB design chops -- architects who also respect and embrace agile methodologies -- they tend to express reservations about the agile mindset being boorishly applied to truly scalable, enterprise-grade relational DB design, the kind that results in a well-performing application and sound data integrity.

Friends, I have no intention of hating on agile principles and lightweight development methods -- they have an important role in today's application software development space, and an important role here at the Foundation.  But at the same time, I want to try to explain why we cannot simply "bang out" new elections apps for ballot marking, tabulation, or ballot design and generation in a series of sprints and scrums.

First, in all candor, I fear this confusion rests in the reality that fewer and fewer developers today have had a complete computer science education, and cannot really claim to be disciplined software engineers or architects.  Many (not all) have simply "hacked" with development tools and taught themselves, because they built a web site or implemented a digital shopping cart for a friend (much like the well-intentioned developer my wife and I met last evening).

Add in the fact that the formality and discipline of compiled code has given way to the rapid-prototyping benefits of interpreted code.  And in the process of this new, modern training in software development (almost exclusively for the sandbox of the web browser as the UX/UI vehicle), what has been forgotten is that data modeling exists not because it creates overhead and delays, but because it removes such impediments.

Look at this another way.  I like to use building analogies -- perhaps because I began my collegiate studies long ago in architectural engineering, before realizing that computer graphics would replace drafting.  There is a reason we spend weeks, sometimes months, traveling past large holes in the ground with towers of re-bar, forms, and concrete pouring, without any clue of what really will stand there once finished.  And yet, later, as the skyscraper takes form, the speed with which it comes together seems to accelerate almost weekly.  Without that foundation carefully laid, the building cannot stand for any extended period of time, let alone bear the dynamic and static weights of its appointments, systems, and occupants.  So too is this the case with complex, highly scalable, fault-tolerant enterprise software -- without the foundation of a solid data model, the application(s) will never be sustainable.

I admit that I have been out of production-grade software development (i.e., in-the-trenches coding and compiling; linking, loading, dealing with lint, and running in debug mode) for years, but I can still climb on the bike and turn the pedals.  The fact is, data flow and data model could not be more different, and the former cannot exist without the latter.  It is well understood, and data modeling has demonstrated many times, that one cannot create a data flow out of nothing.  There has to be a base model as the foundation of one or more data flows, each mapping to its application.  Yet in our discussion, punctuated by a really nice wine and great food, this developer seemed to want to dismiss modeling as something that can be done later... perhaps like refactoring (!?)

I am beginning to believe this fixation of modern developers on "rapid" non-data-model development is misguided, if not dangerous for its latent, time-shifted costs.

Recently, a colleague at another company was involved with the development of a system where no time whatsoever was spent on data model design.  Indeed, the screens started appearing in record time.  The UX/UI was far from complete, but usable.  And the team was cheered as having achieved great "savings" in the development process.  However, when it came time to expand and extend the app with additional requirements, the developers waffled and explained they would have to recode the app in order to meet the new process requirements.  The data was unchanged, but the processes were evolving.  The balance of the project ground to a halt: the first team was dismissed amid arguments about why up-front requirements planning should have been done, while management figured out whom to hire to solve it.

I read somewhere of another development project where the work was getting done in 2-week cycles.  They were about 4 cycles away from finishing when a task called "concurrency" appeared on the tracker schedule for the penultimate cycle.  The project subsequently imploded, because all of the code had to be refactored (a core entity actually turned out to be two entities).  It turns out that no upfront modeling led to this sequence of events; but, unbelievably, the (agile) development firm working on the project spun this as a "positive outcome," explaining: "Hey, it's a good thing we caught this a month before go-live."  Really?  Why wasn't it caught before that pungent smell of freshly cut code started wafting through the lab?

Spin doctoring notwithstanding, the scary thing to me is that performance and concurrency problems caused by a failure to understand the data are being caught far too late in the Agile development process, which makes it difficult, if not impossible, to make real improvements.  In fact, I fear that many agile developers operate on the misguided principle that all data models should be:

create table DATA
 (key INTEGER,
 stuff BLOB);

Actually, we shouldn't joke about this.  That idea comes from a scary reality: a DBA (database administrator) friend tells of a development team he is interacting with on an outsourced State I.T. project that has decided to migrate a legacy non-Oracle application to Oracle using precisely this approach.  Data that had been stored as records in old ISAM-type files will be stored in Oracle as byte sequences in blobs, with an added, generated surrogate unique primary key.  When he asked what the point of that approach was, no one at the development shop could give him a reasonable answer other than "in the time frame we have, it works."  It raises the question: what do you call an Oracle database where all the data in it is invisible to Oracle itself, and cannot be accessed and manipulated directly using SQL?  Or, said differently, would you call a set of numbered binary records a "database," or just "a collection of numbered binary records?"
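To see what is lost, here is a minimal sketch (Python with SQLite, and a made-up record layout, purely for illustration): once the records are opaque blobs, the database engine can no longer answer even trivial questions, and the real schema lives only in application code.

  import sqlite3

  conn = sqlite3.connect(":memory:")
  conn.execute("CREATE TABLE data (key INTEGER PRIMARY KEY, stuff BLOB)")
  conn.execute("INSERT INTO data (stuff) VALUES (?)",
               (b"SMITH|JANE|WARD 4",))  # hypothetical legacy record layout

  # SQL cannot ask "which records are in WARD 4?" -- every query degenerates
  # to "fetch everything and decode it by hand in application code."
  for key, stuff in conn.execute("SELECT key, stuff FROM data"):
      fields = stuff.decode().split("|")
      if fields[2] == "WARD 4":
          print(key, fields)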

In another example of the challenges of agile development in a database-driven app world, a DBA colleague describes being brought in on an emergency contract basis to an Agile project under development on top of Oracle, to deal with "performance problems" in the database.   Turns out the developers were using Hibernate and apparently relied on it to create their tables on an as-needed basis, simply adding a table or a column in response to incoming user requirements and not worrying about the data model until it crawled out of the code and attacked them.
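For readers unfamiliar with that pattern, here is a rough sketch of the equivalent practice in Python with SQLAlchemy (the project in question used Hibernate, and the class and column names below are hypothetical): the schema simply accretes, column by column, as user stories arrive, with no one reviewing the model as a whole.

  from sqlalchemy import Column, Integer, String, create_engine
  from sqlalchemy.orm import declarative_base

  Base = declarative_base()

  class Voter(Base):
      __tablename__ = "voters"
      id = Column(Integer, primary_key=True)
      name = Column(String)
      # Sprint 3 story: "admins need a status flag" -- just add a column...
      status = Column(String)
      # ...and so on, sprint after sprint, with no overall model review.

  # Tables materialize directly from the code, on demand.
  engine = create_engine("sqlite:///:memory:")
  Base.metadata.create_all(engine)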

This sort of approach to app development is what I am beginning to see as "hit and run."  Sure, it has worked so far in the web-app world of start-ups: get it up and running as fast as possible, then exit quickly and quietly before anyone can identify you as having triggered the meltdown once scale and performance start to matter.

After chatting with this developer last evening (and listening to many others over recent months lament that we're simply moving too slowly), I am starting to think of Agile development as a methodology of "do anything rather than nothing, regardless of whether it's right."  And this may be to support the perception of rapid progress: "Look, we developed X components/screens/modules in the past week."  Whether any of this code will stand up to production performance environments is to be determined later.

Another Agile principle is incremental development and delivery.  It's easy for a developer to strip out a piece of poorly performing code and replace it with a chunk that offers better or different capabilities.  Unfortunately, you just cannot do this in a database.  For example: you cannot throw away old data in old tables and simply create new, empty tables.
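A minimal sketch of the difference (again Python with SQLite, with a hypothetical table): replacing slow code is delete-and-rewrite, but changing a live schema is a migration, because the existing rows have to survive the change.

  import sqlite3

  conn = sqlite3.connect(":memory:")
  conn.execute("CREATE TABLE voter (id INTEGER PRIMARY KEY, name TEXT)")
  conn.execute("INSERT INTO voter (name) VALUES ('JANE SMITH')")

  # You cannot drop the table and start fresh -- the data must survive.
  # A schema change means altering in place and backfilling existing rows.
  conn.execute("ALTER TABLE voter ADD COLUMN ward INTEGER")
  conn.execute("UPDATE voter SET ward = 0 WHERE ward IS NULL")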

The TrustTheVote Project continues to need the kind of talent this person exhibited last evening at dinner.  But her zeal (and obvious passion for the cause of open source in elections) aside, and at the risk of running off the (Ruby) rails here, we simply cannot afford to have these problems happen with the TrustTheVote Project.

Agile methodologies will continue to have their place in our work, but we need to be guided by some emerging realities, and appreciate that however fast someone wants to crank out a poll book app or a ballot marking device, we cannot afford to take short-cuts simply for the sake of speed.  Some may accuse me of being a waterfall Luddite in an agile world; however, I believe there has to be some way to mesh these things, even if it means requirements scrums, data modeling sprints, or animated data models.

Cheers GAM|out

D.C. Reality Check – The Opportunities and Challenges of Transparency

Gentle Readers: This is a long article/posting.  Under any other circumstance it would be just too long.

There has been much written regarding the public evaluation and testing of the District of Columbia’s Overseas “Digital Vote-by-Mail” Service (the District’s label).  And there has been an equal amount of comment and speculation about the technology supplied to the District from the OSDV Foundation’s TrustTheVote Project, and our role in the development of the D.C. service.  Although we’ve been mostly silent over the past couple of weeks, enough has now been determined that we can speak to all readers (media sources included) about the project from our side of the effort.

The coverage has been extensive, with over 4-dozen stories reaching over 370 outlets, not including syndication.  We believe it’s important to offer a single, contiguous commentary to provide the OSDV Foundation’s point of view, as a complement to those of the many media outlets that have been covering the project.

0. The Working Relationship: D.C. BoEE & TrustTheVote Project

Only geeks start lists with item “0,” but in this case it’s meant to suggest something “condition-precedent” to understanding anything about our work to put certain components of our open source elections technology framework into production in D.C. elections.  Given the misunderstanding of the mechanics of this relationship, we want readers to understand 6 points about this collaboration with the District of Columbia's Board of Elections & Ethics (BoEE) and the D.C. I.T. organization:

  1. Role: We acted in the capacity of a technology provider – somewhat similar to a software vendor, but with the critical difference of being a non-profit R&D organization.  Just as has been the case with other, more conventional technology providers to D.C., there was generally a transom between the OSDV Foundation’s TTV Project and the I.T. arm of the District of Columbia.
  2. Influence: We had very little (if any) influence over anything construed as policy, process, or procedure.
  3. Access: We had no access to, or participation in, D.C.’s IT organization and specifically its data center operations (including any physical entry or server log-in for any reason); this was for policy and procedural reasons.
  4. Advice: We were free to make recommendations and suggestions, and provide instructions and guidelines for server configurations, application deployment, and the like.
  5. Collaboration: We collaborated with the BoEE on the service design, and provided our input on issues, opportunities, challenges, and concerns, including a design review meeting of security experts at Google in Mountain View, CA early on.
  6. Advocacy: We advocated for the public review, cautioning that the digital ballot return aspect should be restricted to qualified overseas “UOCAVA” voters; but at all times, the BoEE and the D.C. I.T. organization “called the shots” on their program.

And to go on record with an obvious but important point: we did not have any access to the ballot server, marked ballots, handling of voter data, or any control over any services for the same.  And no live data was used for testing.

Finally, we provided D.C. with several software components of our TTV Elections Technology Framework, made available under our OSDV Public License, an open source license for royalty-free use of software by government organizations.  As is typical of nearly any deployment we have done or will do, the preexisting software did not fit seamlessly with D.C. election I.T. systems practices, and we received a “development grant” to make code extensions and enhancements to these software components, in order for them to comprise a D.C.-specific system for blank ballot download and an experimental digital ballot return mechanism (see Section 7 below).

The technology we delivered had two critically different elements and values.  The first, the “main body of technology,” included the election data management, ballot design, and voter user interface components for online distribution of blank ballots to overseas voters.  With this in hand, the BoEE has acquired a finished, MOVE Act-compliant blank ballot delivery system, plus significant components of an innovative new elections management system that they own outright, including the source code and the right to modify and extend the system.

For this system, the BoEE obtained the pre-existing technology without cost; and for the D.C.-specific extensions, they paid a fraction of what any elections organization would pay for a standard commercial election management system with a multi-year right-to-use license and annual license fees.

D.C.’s acquired system also stands in contrast to the more than 20 other States that are piloting digital ballot delivery systems with DoD funding, but only for one-time trial use.  Unlike D.C., if those States want to continue using their systems, they will have to find funding to pay for ongoing software licenses, hosting, data center support, and the like.  There is no doubt that the D.C. project has saved the District a significant amount of money over what it might have had to spend for ongoing support of overseas and military voters.

That noted, the other (second) element of the system – digital return of ballots – was an experimental extension to the base system, tested prior to possible use in this year’s November election.  The experiment failed in testing to achieve the level of integrity necessary to take it into the November election, and this experimental component has been eliminated from the system used this year.  The balance of this long article discusses why that is the case, what we saw from our point of view, and what we learned from this otherwise successful exercise.

1. Network Penetration and Vulnerabilities

There were two types of intrusions as a result of an assessment orchestrated by a team at the University of Michigan, led by Dr. Alex Halderman, probing the D.C. network that had been made available for public inspection.  The first was at the network operations level.  During the time that the Michigan team was testing the network and probing for vulnerabilities, they witnessed what appeared to be intrusion attempts originating from machines abroad, in headline-generating countries such as China and Iran.  We anticipate soon learning from the D.C. IT Operations leaders what network security events actually transpired, because a detailed review is underway.  And more to that point: these possible network vulnerabilities, while important for the District IT operations to understand, were unrelated to the actual application software that was deployed for the public test, which involved a mock election, mock ballots, and fictitious voter identities provided to testers.

2. Server Penetration and Vulnerabilities

The second type of intrusion was directly on the District’s (let’s call it) “ballot server,” through a vulnerability in the software deployed on that server. That software included: the Red Hat Linux server operating system; the Apache Web server with standard add-ons; the add-on for the Rails application framework; the Ruby-on-Rails application software for the ballot delivery and return system; and some 3rd-party library software, both to supplement the application software and the Apache software.

The TrustTheVote Project provided 6 technology assets (see Section 7 below) to the BoEE project, plus a list of requirements for "deployment;" that is, the process of combining the application software with the other elements listed above, in order to create a working 3-tier application running on 3 servers: a web proxy server, an application server, and a database server.  One of those assets was a Web application for providing users with the correct attestation document and the correct blank ballot, based on their registration records.  That was the "download" portion of the BoEE service, similar to the FVAP solutions that other states are using this year on a try-it-once basis.

3. Application Vulnerability

Another one of those technology assets was an "upload" component, which performed fairly typical Web application functions for file upload, local file management, and file storage – mostly relying on a 3rd-party library for these functions.  The key D.C.-specific function was to encrypt each uploaded ballot file to preserve ballot secrecy.  This was done using the GPG file encryption program, with a command shell to execute GPG with a very particular set of inputs.  One of those inputs was the name of the uploaded file.

And here was the sticking point.  Except for this file-encryption command, the library software largely performed the local file management functions.  This included the very important function of renaming the uploaded file to avoid giving users the ability to define file names on the server.  Problem: during deployment, a new version of this library software package was installed, in which the file name checks were not performed as expected by the application software.  Result: carefully crafted file names, inserted into the shell command, gave attackers the ability to execute pretty much any shell command, with the userID and privileges of the application itself.

Just as the application requires the ability to rename, move, encrypt, and save files, the injected commands could exercise those same abilities.  And this is the painfully ironic point: the main application-specific data security function (file encryption), by incorrectly relying on a library, exposed those ballot files (and the rest of the application) to external tampering.
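To illustrate the class of bug (the actual service was a Ruby-on-Rails application; this is only a sketch, in Python, with hypothetical names – not the D.C. code): the vulnerable pattern is a shell command assembled from an attacker-influenced file name, and the corresponding defense is never letting a shell interpret that name.

  import subprocess

  def encrypt_ballot_vulnerable(filename):
      # DANGER: the file name is spliced into a shell command line.  An
      # uploaded name like "ballot.pdf; rm -rf /" runs the attacker's
      # command with the application's own userID and privileges.
      subprocess.run("gpg --encrypt -r election-board " + filename,
                     shell=True)

  def encrypt_ballot_safer(filename):
      # Argument-list form: the name is a single argv entry, never parsed
      # by a shell.  (Renaming uploads to server-generated names, as the
      # library was assumed to do, is a second, independent safeguard.)
      subprocess.run(["gpg", "--encrypt", "-r", "election-board", filename],
                     check=True)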

4. Consequences

The Michigan team was creative in their demonstration of the results of attacking a vulnerability in what Halderman calls a "brittle design" – a fair critique, common to nearly every Web application deployed using application frameworks and application servers.  In such a design, the application and all of its code operate as a particular userID on the server.  No matter how much a deployment constrains the abilities of that user and the code running as that user, the code, by definition, has to be able to use the data that the application manages.

Therefore, if there is a “chink” in any of the pieces of the collective armor (e.g., the server, its operating system, web server, application platform, application software, or libraries), or in the way they fit together, then that “chink” can turn use into abuse.  That abuse applies to any and all of the data managed by the application, as well as the file storage used by the application.  As the Michigan team demonstrated, this general rule also applies specifically when the application data includes ballot files.

5. Mea Culpa

Let’s be clear: the goof we made – “our bad” – in the application development was not anticipating a different version of the 3rd-party library, and not locking in the specific version that did perform the file name checking we assumed was there to prevent exactly this type of vulnerability.  And in fact, we learned 4 valuable lessons from this stumble:

  1. Factoring Time:  Overly compressed schedules will almost certainly ensure that a failure point is triggered.  This project suffered from a series of cycle-time issues in getting stuff requisitioned, provisioned, and configured, plus other intervening issues for the BoEE, including their Primary election, all of which further compressed the time frame and left very little time to stage and conduct the entire exercise;
  2. Transparency vs. Scrutiny:  The desired public transparency put everyone involved in the highly concentrated light of public scrutiny, and the margins of otherwise tolerable error allowed during a typical test phase were nonexistent in this setting – even the slightest oversight, of the kind typically caught in a normal testing phase, was treated as an intolerable fault, as if the Pilot were already in production;
  3. (Web) Application Design:  Web applications for high-value, high-risk data require substantial work to avoid brittleness.  Thankfully, nothing in the TrustTheVote Elections Technology Framework will require an Internet-connected Web application or service – so the 3rd lesson is how much of a relief that is for us going forward; and
  4. No Immunity from Mistake: Even the most experienced professionals are not immune from mistake or misstep, especially when working under very skeptical public scrutiny and a highly compressed time schedule – our development team, despite a combined total of 7 decades of experience, included.

So, we learned some valuable lessons from this exercise. We still believe in the public transparency mandate, and fully accept responsibility for the goof in the application development and release engineering process.

Now, there is more to say about some wholly disconnected issues regarding other discovered network vulnerabilities, completely beyond our control (see #0 above), but we’ll save comment on that until after the D.C. Office of the CTO completes their review of the Michigan intrusion exercise.   Next, we turn attention to some outcomes.

6. Outcomes

Let's pull back up to the 30-thousand-foot level and consider what the discussion has been about (leaving aside foreign hackers).  This test revealed a security weakness of a Web application framework: how there can be flaws in application-specific extensions to routine Web functions like file upload, including flaws that can put those functions and files at risk.  Combine that with the use of Web applications for uploading files that are ballots.  Then the discussion turns on whether it is possible (or prudent) to try to field any Web application software, or even any other form of software, that transfers marked ballots over the Internet.  We expect that discussion to continue vigorously, including efforts that we’d be happy to see towards a legislative ruling on the notion, such as Ohio’s decision to ban digital ballot transfer for overseas voting or North Carolina’s recent enthusiastic embrace of it.

However, public examination, testing, and the related discussions and media coverage, were key objectives of this project.  Rancorous as that dialogue may have become, we think it’s better than the dueling monologues that we witnessed at the NIST conference on overseas digital voting (reported here earlier).

But this is an important discussion, because it bears on an important question about the use of the Internet, which could range from (a) universal Internet voting as practiced in other countries (which nearly everyone in this discussion, including the OSDV Foundation, agrees is a terrible idea for the U.S.), to (b) the type of limited-scope usage of the Internet that may be needed only for overseas and military voters who really have time-to-vote challenges, or (c) limited only to ballot distribution.  For some, the distinction is irrelevant.  For others, it could be highly relevant.  For many, it is a perilous slippery slope.  It's just barely possible that worked examples and discussion could actually lead to sorting out this issue.

The community certainly does have some worked examples this year, not just the D.C. effort, and not just DoD’s FVAP pilots, but also other i-Voting efforts in West Virginia and elsewhere.  And thankfully, we hear rumors that NIST will be fostering more discussion with a follow-up conference in early 2011 to discuss what may have been learned from these efforts in 2010.  (We look forward to that, although our focus returns to open source elections technology that has nothing to do with the Internet!)

7. Our Technology Contributions

Finally, for the record, below we catalog the technology we contributed to the District of Columbia’s Overseas “Digital Vote-by-Mail” service (again, their label).  If warranted, we can expand on this another day.  The assets included:

  1. Three components of the open source TrustTheVote (TTV) Project Elections Technology Framework: [A] the Election Manager, [B] the Ballot Design Studio, and [C] the Ballot Generator.
  2. We augmented the TTV Election Manager and TTV Ballot Design Studio to implement D.C.-specific features for election definition, ballot design, and ballot marking.
  3. We extended some earlier work we’ve done in voter record management to accommodate the subset of D.C. voter records to be used in the D.C. service, including the import of D.C.-specific limited-scope voter records into an application-specific database.
  4. We added a Web application user experience layer on top of that, so that voters can identify themselves as matching a voter database record and obtain their correct ballot (the application and logic leading up to the blank ballot "download" function referred to above), and to provide users with content about how to complete the ballot and return it via postal or express mail services.
  5. We added a database extension to import ballot files (created by the TTV Ballot Generator), using a D.C.-specific method to connect them to the voter records in order to provide the right D.C.-specific ballot to each user.
  6. We added the upload capability to the web application, so that users could choose the option of uploading a completed ballot PDF; this capability also included the server-side logic to encrypt the files on arrival.

All of these items, including the existing open-source TTV technology components listed in 7.1 above, together with the several other off-the-shelf open-source operating system and application software packages listed in Section 2 above, were integrated by D.C.’s IT group to comprise the “test system” that we’ve discussed in this article.

In closing, needless to say (but we say it anyway for the record): while items 7.1–7.5 can certainly be used to provide a complete solution for MOVE Act-compliant digital blank ballot distribution, item 7.6 is not being used for any purpose, in any real election, any time soon.

One final point worth re-emphasizing: real election jurisdiction value from an open source solution.....

The components listed in 7.1–7.5 above provide a sound, on-going, production-ready operating component of the District’s elections administration and management, for a fraction of the cost of the limited alternative commercial solutions.  They ensure MOVE Act compliance, and they do not require any digital ballot return.  And the District owns 100% of the source code, which is fully transparent and open source.  For the Foundation in general, and the TrustTheVote Project in particular, this portion of the project is an incontrovertible success of our non-profit charter, and we believe a first of its kind.

And that is our view of D.C.‘s project to develop their “Digital Vote-by-Mail” service, and test it along with the digital ballot return function.  Thanks for plowing through it with us.

How to Trust a Voting Machine

[Today's guest post is from election technology expert Doug Jones, who is now revealed as also being an encyclopedia of U.S. elections history. Doug's remarks below were in a discussion about how to effectively use post-election ballot-count audits as a means to gain trust in the correct operation of voting machines -- particularly timely, given the news and comment about hacking India's voting machines. Doug pointed out that in the U.S., we've had similar voting-machine trust issues for many years. -- ejs] Lever machines have always (as used in the US) contained one feature intended for auditing: the public and protective counters, used to record the total number of activations of the machine.  Thus, they are slightly auditable.  They are less auditable than DRE machines built to 1990 standards, because they retain nothing comparable to an event log and because they do not explicitly count undervotes -- allowing election officials to claim, post-election, that the reason Sam got no votes was that people abstained rather than vote for him.  (Whereas, in fact, there might have been a bit of pencil lead jammed in the counters to prevent votes for Sam from registering.)

One of the best legal opinions about mechanical voting machines was a dissenting opinion by Horatio Rogers, a Rhode Island supreme court judge, in 1897.  He was writing about the McTammany voting machine, one that recorded votes by punching holes in a paper tape out of view of the voter.  I quote:

It is common knowledge that human machines and mechanisms get out of order and fail to work, in all sorts of unforseen ways. Ordinarily the person using a machine can see a result.  Thus, a bank clerk, performing a check with figures, sees the holes; an officer of the law, using a gibbet by pressing a button, sees the result accomplished that he sought; and so on ad infinitum. But a voter on this voting machine has no knowledge through his senses that he has accomplished a result.  The most that can be said is, if the machine worked as intended, then he has made his holes and voted.  It does not seem to me that this is enough.

I think Horatio Rogers' opinion applies equally to the majority of mechanical and DRE machines that have been built in the century since he published it.

-- Doug Jones

Mandatory disclaimer:  The opinions expressed above are mine!  The various institutions with which I am affiliated don't necessarily agree.  These include the U of Iowa, and the EAC TGDC. - dj

Yes: security is hard

I came across this article: "NIST-certified USB Flash drives with hardware encryption cracked."  The money quote:

"The real question, however, remains unanswered – how could USB Flash drives that exhibit such a serious security hole be given one of the highest certificates for crypto devices? Even more importantly, perhaps – what is the value of a certification that fails to detect such holes?" (from "NIST-certified USB Flash drives with hardware encryption cracked.".)

I was quite intrigued by this article, given that we talk blithely about using encrypted, write-once media to transfer information between various components of a voting system.  I haven't yet followed up with folks who know more about this than I do, but I have a hard time understanding exactly what encrypted, write-once media are, or how they work or don't work.

You should draw your own conclusions about the significance of the linked article.  I am actually not sure who "H-Security" is, what their particular angle or grindable axe might be, or whether the security hole they report is big news or old hat among the cognoscenti.  Stay tuned.
