To our elections official stakeholders, Chief Technology Officer John Sebes covers a point that seems to be popping up in discussions more and more. There seems to be some confusion about what "open source" means in the context of software used for election administration or voting. That's understandable, because some election I.T. folks, and some current vendors, may not be familiar with the prior usage of the term "open source" -- especially since it is now used in so many different ways to describe (variously) people, code, legal agreements, and more. So, John hopes to get our Stakeholders back to basics on this.
Heartbleed is the latest high-profile consumer Internet security issue, only a few weeks after the “Goto Fail” incident. Both are recently discovered weaknesses in the way that browsers and Web sites interact. In both cases and others, I’ve seen several comments that connect these security issues with Internet voting. But because Heartbleed is pretty darn wicked, I can’t not share my thoughts on how it connects to the work we do in the TrustTheVote project – despite the fact that i-voting is not part of it. (In fact, we have our hands full fixing the many technology gaps in the types of elections that we already have today and will continue to have for the foreseeable future.)
First off, my thanks to security colleague Matt Bishop, who offered an excellent rant (his term, not mine!) on Heartbleed, what we can learn from it, and the connection to open source. The net-net is familiar: computers, software, and networks are fundamentally fallible; there will always be bugs and vulnerabilities, and that's about as non-negotiable as the law of gravity.
Here is my take on how that observation affects elections, and specifically the choice that many, many U.S. election officials have made (and which we support), that elections should be based on durable paper ballots that can be routinely audited as a cross check on potential errors in automated ballot counting. It goes like this:
- Dang it, too many paper ballots with too many contests, to count manually.
- We’ll have to use computers to count the paper ballots.
- Dang it, computers and software are inherently untrustworthy.
- Soooo …. we’ll use sound statistical auditing methods to manually check the paper ballots, in order to check the work of the machines and detect their malfunctions.
This follows the lessons of the post-hanging-chads era:
- Dang it, too many paper ballots with too many contests, to count manually.
- We’ll have to use computers to directly record votes, and ditch the paper ballots.
- Dang it, computers and software are inherently untrustworthy.
- Oops, I guess we need the paper ballots after all.
I think that these sequences are very familiar to most readers here, but it's worth a reminder now and then from experts on the 3rd point -- particularly when the perennial topic of i-voting comes up -- because there, the sequence is so similar yet so different:
- Dang it, voters too far away for us to get their paper ballots in time to count them.
- We’ll have to use computers and networks to receive digital ballots.
- Dang it, computers and software and networks are inherently untrustworthy.
- Soooo …. Oops.
The TrustTheVote Project Core Team has been hard at work on the Alpha version of VoteStream, our election results reporting technology. They recently wrapped up a prototype phase funded by the Knight Foundation, and then forged ahead a bit, incorporating data from additional counties, provided by participating state or local election officials after the official wrap-up.
Along the way, there have been a series of postings here that together tell a story about the VoteStream prototype project. They start with a basic description of the project in Towards Standardized Election Results Data Reporting and Election Results Reload: the Time is Right. Then there was a series of posts about the project’s assumptions about data, about software (part one and part two), and about standards and converters (part one and part two).
Of course, the information wouldn't be complete without a description of the open-source software prototype itself, provided in Not Just Election Night: VoteStream.
Actually, the project was as much about data, standards, and tools as about software. On the data front, there is a general introduction to a major part of the project's work in "data wrangling" in VoteStream: Data-Wrangling of Election Results Data. After that were more posts on data wrangling, quite deep in the data-head shed -- but still important, because each one is about the work required to take real election data and real election result data from disparate counties across the country, and fit them into a common data format and common online user experience. The deep data-heads can find quite a bit of detail in three postings about data wrangling, in Ramsey County, MN, in Travis County, TX, and in Los Angeles County, CA.
Today, there is a VoteStream project web site with VoteStream itself and the latest set of multi-county election results, but also with some additional explanatory material, including the election results data for each of these counties. Of course, you can get that from the VoteStream API or data feed, but there may be some interest in the actual source data. For more on those developments, stay tuned!
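For readers curious what consuming such a results feed might look like in practice, here is a minimal sketch in Python. Everything in it is hypothetical: the payload structure and field names are invented for illustration and do not reflect the actual VoteStream API or data format.

```python
import json

# Hypothetical election-results payload; the schema is invented for
# this example and is not the actual VoteStream data format.
sample_feed = """
{
  "jurisdiction": "Example County",
  "contests": [
    {"name": "Governor",
     "results": [{"candidate": "A", "votes": 1200},
                 {"candidate": "B", "votes": 900}]}
  ]
}
"""

def contest_totals(feed_json):
    """Sum the reported votes for each contest in the feed."""
    data = json.loads(feed_json)
    return {c["name"]: sum(r["votes"] for r in c["results"])
            for c in data["contests"]}

print(contest_totals(sample_feed))  # {'Governor': 2100}
```

The point of a common format is exactly this: once the structure is predictable, a few lines of code can aggregate or visualize results from any participating county.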
Today I'm continuing with the second of a 3-part series about what we at the TrustTheVote Project are hoping to prove in our Election Night Reporting System project. As I wrote earlier, we have assumptions in three areas, one of which is software. I'll try to put into a nutshell a question that we're working on an answer to:
If you were able to get the raw election results data available in a wonderful format, what types of useful Apps and services could you develop?
OK, that was not exactly the shortest question, and in order to understand what "wonderful format" means, you'd have to read my previous post on Assumptions About Data. But instead, maybe you'd like to take a minute to look at some of the work from our previous phase of ENRS work, where we focused on two seemingly unrelated aspects of ENRS technology:
- The user experience (UX) of a Web application that local election officials could provide to help ordinary folks visualize and navigate complex election results information.
- A web services API that would enable other folks' systems (not elections officials') to receive and use the data in a manner that's sufficiently flexible for a variety of other services, ranging from professional data mining to handy mobile apps.
They're related because the end results embodied a set of assumptions about available data.
Now we're seeing that this type of data is available, and we're trying to prove with software prototyping that many people (not just elections organizations, and not just the TrustTheVote Project) could do cool things with that data.
There's a bit more to say -- or rather, to show and tell -- that should fit in one post, so I'll conclude next time.
PS: Oh, there is one more small thing: we've had a bit of an "Ah-ha" here in the Core Team, prodded by our peeps on the Project Outreach team. This data, and the apps and services that can leverage it for all kinds of purposes, have uses far beyond the night of an election. And we mentioned that once before, but the ah-ha is that what we're working on is not just about election night results... it's about all kinds of election results reporting, anytime, anywhere. And that means ENRS is really not that good of a code name or acronym. Watch as "ENRS" morphs into "E2RP" for our internal project name -- Election Results Reporting Platform.
Now that we are a ways into our "Election Night Reporting System" project, we want to start sharing some of what we are learning. We had talked about a dedicated Wiki or some such, but our time was better spent digging into the assignment graciously supported by the Knight Foundation Prototype Fund. Perhaps the best place to start is a summary of what we've been saying within the ENRS team about what we're trying to accomplish. First, we're toying with this silly internal project code name, "ENRS", and we don't expect it to hang around forever. Our biggest gripe is that what we're trying to do extends way beyond the night of elections, but more about that later.
Our ENRS project is based on a few assumptions, or perhaps one could say some hypotheses that we hope to prove. "Prove" is probably a strong word. It might be better to say that we expect that our assumptions will be valid, but with practical limitations that we'll discover.
The assumptions are fundamentally about three related topics:
- The nature and detail of election results data;
- The types of software and services that one could build to leverage that data for public transparency; and
- Perhaps most critically, the ability for data and software to interact in a standard way that could be adopted broadly.
As we go along in the project, we hope to say more about the assumptions in each of these areas.
But it is the goal of feasible broad adoption of standards that is really the most important part. There's a huge amount of latent value (in terms of transparency and accountability) to be had from aggregating and analyzing election results data at scale. But most of that data is effectively locked up, at present, in thousands of little lockboxes of proprietary and/or legacy data formats.
It's not as though most local election officials -- the folks who are the source of election results data, as they conduct elections and the process of tallying ballots -- want to keep the data locked up, nor to impede others' activities in aggregating results data across counties and states, and analyzing it. Rather, most local election officials just don't have the means to "get the data out" in a way that supports such activities.
We believe that the time is right to create the technology to do just that, and enable election officials to use the technology quickly and easily. And this prototype phase of ENRS is the beginning.
Lastly, we have many people to thank, starting with Chris Barr and the Knight Foundation for its grant to support this prototype project. Further, the current work is based on a previous design phase. Our thanks to our interactive design team led by DDO, and the Travis County, TX Elections Team who provided valuable input and feedback during that earlier phase of work, without which the current project wouldn't be possible.
I'm still feeling a bit stunned by recent events: the IRS has finally put us at the starting point that we had reasonably hoped to be at about 5 years ago. Since then, election tech dysfunction hasn't gone away; U.S. election officials have less funding than ever to run elections; there are more requirements than ever for the use of technology in election-land; there are more public expectations than ever of the operational transparency of "open government," certainly including elections; and the for-profit tech sector does not offer election officials what they need. So there's more to do than we ever expected, and less time to do it in. For today, I want to re-state a focus on "open data" as the part of "open source" that's used by "open gov" to provide "big data" for public transparency. Actually, I don't have anything new to say, having re-read previous posts:
It's still the same. Information wants to be free, and in election land, there is lots of it that we need to see, in order to "trust but verify" that our elections are all that we hope them to be. I'm very happy that we now have a larger scope to work in, to deliver the open tech that's needed.
I am pleased to announce to our readers that the IRS has granted our 7-year-old organization full, unbridled tax exempt status under section 501(c)(3) of the Internal Revenue Code as a public charity. This brings to a close an application review that consumed over 6 years -- one of the longest for a public benefits non-profit organization. Our Chief Development Officer, Gregory Miller, has already offered his insight this morning, but I want to offer a couple of thoughts from my viewpoint (which I know he shares). By now, you may have seen the WIRED Magazine article that was published this morning. Others here will surely offer some additional comment of their own in separate posts. But it does set the context for my brief remarks here.
First, to be sure, this is a milestone in our existence, because the Foundation's fundraising efforts and corresponding work on behalf of elections officials and their jurisdictions nationwide have been largely on hold since we filed our original IRS Form 1023 application back in February 2007.
The Foundation has managed to remain active through what self-funding we could afford, and through generous grants from individuals and collaborating organizations that continued to support the “TrustTheVote™ Project” despite our "pending" status.
A heartfelt "thank you" to Mitch Kapor, Heather Smith and Rock the Vote, Alec Totic, Matt Mullenweg, Pito Salas, the Gregory Miller family and the E. John Sebes family (to name a few of those who so believed in us early on as to offer their generous support). The same thanks goes to those who wished to remain anonymous in their support.
In addition to our being set free to move full speed ahead on our charter, I think this is interesting news for another reason: this project, which has a clear charitable cause with a compelling public benefit, was caught up in an IRS review perhaps mostly for having the wrong words in its corporate name.
Our case became entangled in the so-called "Bolo-Gate" scandal at the IRS Exempt Division. And we unintentionally became a poster child for be-on-the-lookout reviews as they applied to entities involved in open source technology.
In sum and substance, our case required 6 years and 4 months for the IRS to decide. The Service ultimately dragged us into our final administrative remedy, the "conference-of-right" we participated in last November, following their "intent to deny" letter in March of last year. Then it took the IRS another 220 days to finally decide the case, albeit in our favor, but not before we had a] filed close to 260 pages of interrogatory responses, of which 182 were under affidavit; b] developed nearly 1,600 pages of total content; and c] run up a total bill for legal and accounting fees over those years in excess of $100,000.
We’ve definitely learned some things about how to handle a tax exempt application process for an organization trying to provide public benefit in the form of software technology, although frankly, we have no intentions or interest in ever preparing another.
But there is a story yet to be told about what it took for us to achieve our 501(c)(3) standing—a status that every single attorney, CPA, or tax expert who reviewed our case over the years believed we deserved. That noted, we are very grateful to our outside tax counsel team at Caplin Drysdale led by Marc Owen, who helped us press our case.
I am also deeply relieved that we need not raise a legal defense fund, but instead can finally start turning dollars toward the real mission: developing accurate, transparent, verifiable, and more secure elections technology for public benefit rather than commercial gain. It's not lost on us, nor should it be on you, that the money we had to pay our lawyers and accountants could have been spent advancing the substantive cause of the TrustTheVote Project.
So, now it's time to focus ahead, get to work, and raise awareness of the TrustTheVote Project and the improvements it can bring to public elections.
We're a legitimate, legally recognized 501(c)(3) tax exempt public benefits corporation. And with that, you will begin to see marked changes in our web sites and our activities. Stay tuned. We're still happily reeling a bit from the result, but wrapping our heads around what we need to do now that we have the designation we fought 6 years to obtain, in order to fund the work our beneficiaries -- elections jurisdictions nationwide -- so deserve.
Please join me in acknowledging this major step and consider supporting our work going forward. After all, now it really can be tax deductible (see your accountant and lawyer for details).
Best Regards, Christine M. Santoro Secretary, General Counsel
Today is a bit of a historical point for us: we can publicly announce the news of the IRS finally granting our tax exempt status. The digital age is wreaking havoc, however, on the PR and news processes. In fact, we knew about this nearly 2 weeks ago, but due to a number of legal and procedural issues and a story we were being interviewed for, we were on hold in making this important announcement. And we're still struggling to get this out on the wires (mostly due to a change of our PR agency at the most inopportune moment).
I have to observe that, notwithstanding a paper-chase of near epic proportions with the IRS in granting what we know our charter deserves in order to foster the good work we intend, at the end of the day, 501(c)(3) status is a gift from the government. And we cannot lose sight of that.
So, for the ultimate outcome we are deeply grateful; please make no mistake about that. The ways and means of getting there were exhausting... emotionally, financially, and intellectually. And I notice that the WIRED article makes a showcase of a remark I made, about being "angry," in one of the many interviews and exchanges leading up to that story.
I am (or was) angry at the process, because 6 years to ask and re-ask us many of the same questions, and to perform what I humbly believe at some point amounted to intellectual navel gazing, was crazy. I can't help but feel like we were being bled. I fear there are many other valuable public benefit efforts, involving intangible assets and striving for the ability to raise public funds to do public good, that are caught up in this same struggle.
What's sad is that it took the guidance and expertise (and lots of money that could have been spent on delivering on our mission) of high-powered Washington, D.C. lawyers to negotiate this to a successful conclusion. That's sad because the vast majority of projects cannot afford to do that. Had we not been so resolute in our determination, and willing to risk our own financial stability to see this through, the TrustTheVote Project would have withered and died in the prosecution of our tax exempt status over 6 years and 4 months.
Specifically, it took the expertise and experience of Caplin Drysdale lawyers Michael Durham and Marc Owen himself (who actually ran the IRS Tax Exempt Division for 10 years). If you can find a way to afford them, you can do no better.
There is so much that could be shared about what it took and what we learned from issues of technology licensing, to nuances of what constitutes public benefit in terms of IRS regulations -- not just what seems obvious. Perhaps we'll do so another time. I note for instance that attorney Michael Durham was a computer science major and software engineer before becoming a tax lawyer. I too have a very similar combination background of computer science and intellectual property law, and it turned out to be hugely helpful to have this interdisciplinary view -- just odd that such would be critical to a tax exempt determination case.
However, in summary, I was taught at a very young age, and through several life lessons, that only patience and perseverance empower prevailing. I guess it's just the way I, and all of us on this project, are wired.
Cheers GAM | out
Today, I am presenting at the annual Elections Verification Conference in Atlanta, GA, and my panel is discussing the good, the bad, and the ugly about the digital poll book (often referred to as the "e-pollbook"). For our casual readers, the digital poll book or "DPB" is -- as you might assume -- a digital relative of the paper poll book... that pile of print-outs containing the names of registered voters for a given precinct wherein they are registered to vote. For our domain-savvy readers, the issues to be discussed today are on the application, sometimes overloaded application, of DPBs and their related issues of reliability, security and verifiability. So as I head into this, I wanted to echo some thoughts here about DPBs as we are addressing them at the TrustTheVote Project.
We've been hearing much lately about State and local election officials' appetite (or infatuation) for digital poll books. We've been discussing various models and requirements (or objectives), while developing the core of the TrustTheVote Digital Poll Book. But in several of these discussions, we’ve noticed that only two out of three basic purposes of poll books of any type (paper or digital, online or offline) seem to be well understood. And we think the gap shows why physical custody is so important—especially so for digital poll books.
The first two obvious purposes of a poll book are (1) to check in a voter as a prerequisite to obtaining a ballot, and (2) to prevent a voter from having a second go at checking in and obtaining a ballot. That's fine for meeting the "Eligibility" and "Non-duplication" requirements for in-person voting.
But then there is the increasingly popular absentee voting, where the role of poll books seems less well understood. In our humble opinion, those in-person polling-place poll books are also critical for absentee and provisional voting. Bear in mind, those "delayed-cast" ballots can't be evaluated until after the post-election poll-book-intake process is complete.
To explain why, let's consider one fairly typical approach to absentee evaluation. The poll book intake process results in an update to the voter record of every voter who voted in person. Then, the voter record system is used as one part of absentee and provisional ballot processing. Before each ballot may be separated from its affidavit, the reviewer must check the voter identity on the affidavit, and then find the corresponding voter record. If the voter record indicates that the voter cast their ballot in person, then the absentee or provisional ballot must not be counted.
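The evaluation logic just described can be sketched in a few lines. This is a simplified illustration of the process above, not any jurisdiction's actual system; the record layout and function name are invented for the example.

```python
# Toy voter records as they might look after the post-election
# poll-book intake process has updated who voted in person.
voter_records = {
    "V001": {"name": "Alice", "voted_in_person": True},
    "V002": {"name": "Bob",   "voted_in_person": False},
}

def may_count_absentee(voter_id, records):
    """Decide whether an absentee/provisional ballot may be counted:
    only if a matching voter record exists and shows no in-person vote."""
    record = records.get(voter_id)
    if record is None:
        return False  # no corresponding voter record: do not count
    return not record["voted_in_person"]

assert may_count_absentee("V001", voter_records) is False  # voted in person
assert may_count_absentee("V002", voter_records) is True   # absentee counts
```

Note the ordering dependency the sketch makes explicit: the decision is only meaningful after poll-book intake is complete, which is exactly why those "delayed-cast" ballots must wait.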
So far, that's a story about poll books that should be fairly well understood, but there is an interesting twist when it comes to digital poll books (DPB).
The general principle for DPB operation is that it should follow the process used with paper poll books (though other useful features may be added). With paper poll books, both the medium (paper) and the message (who voted) are inseparable, and remain in the custody of election staff (LEOs and volunteers) throughout the entire life cycle of the poll book.
With the DPB, however, things are trickier. The medium (e.g., a tablet computer) and the message (the data that's managed by the tablet, and that represents who voted) can be separated, although they should not be.
Why not? Well, we can hope that the medium remains in the appropriate physical custody, just as paper poll books do. But if the message (the data) leaves the tablet, and/or becomes accessible to others, then we have potential problems with accuracy of the message. It's essential that the DPB data remain under the control of election staff, and that the data gathered during the DPB intake process is exactly the data that election staff recorded in the polling place. Otherwise, double voting may be possible, or some valid absentee or provisional ballots may be erroneously rejected. Similarly, the poll book data used in the polling place must be exactly as previously prepared, or legitimate voters might be barred.
That's why digital poll books must be carefully designed for use by election staff in a way that doesn't endanger the integrity of the data. And this is an example of the devil in the details that's so common for innovative election technology.
Those devilish details derail some nifty ideas, like one we heard of recently: a simple and inexpensive iPad app that provides the digital poll book UI based on poll book data downloaded (via 4G wireless network) from “cloud storage” where an election official previously put it in a simple CSV file; and where the end-of-day poll book data was put back into the cloud storage for later download by election officials.
Marvelous simplicity, right? Oh heck, I'm sure some grant-funded project could build that right away. But it turns out that is wholly unacceptable in terms of chain of custody of data that accurate vote counts depend on. You wouldn't put the actual vote data in the cloud that way, and poll book data is no less critical to election integrity.
A Side Note: This is also an example of the challenge we often face from well-intentioned innovators of the digital democracy movement who insist that we’re making a mountain out of a molehill in our efforts. They argue that this stuff is way easier and ripe for all of the “kewl” digital innovations at our fingertips today. Sure, there are plenty of very well designed innovations and combinations of ubiquitous technology that have driven the social web and now the emerging utility web. And we’re leveraging and designing around elements that make sense here—for instance the powerful new touch interfaces driving today’s mobile digital devices. But there is far more to it, than a sexy interface with a 4G connection. Oops, I digress to a tangential gripe.
This nifty example of well-intentioned innovation illustrates why the majority of technology work in a digital poll book solution is actually in (1) the data integration (to and from the voter record system); (2) the data management (to and from each individual digital poll book); and (3) the data integrity (maintaining the same control present in paper poll books).
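On the data-integrity point, one common technique -- offered here purely as an illustrative sketch, not as the TrustTheVote design -- is to record a cryptographic digest of the poll book data when polls close, so the intake process can verify the data was not altered in transit:

```python
import hashlib
import json

def pollbook_digest(records):
    """SHA-256 digest over a canonical JSON serialization of the
    poll book data; any change to the data changes the digest."""
    canonical = json.dumps(records, sort_keys=True).encode()
    return hashlib.sha256(canonical).hexdigest()

# At the close of polls, election staff record the digest.
closing_records = [{"voter_id": "V001", "checked_in": True}]
digest_at_close = pollbook_digest(closing_records)

# At intake, staff recompute the digest over the received data.
received_records = [{"voter_id": "V001", "checked_in": True}]
assert pollbook_digest(received_records) == digest_at_close  # untampered

# Any alteration en route is detectable.
tampered = [{"voter_id": "V001", "checked_in": False}]
assert pollbook_digest(tampered) != digest_at_close
```

A digest detects tampering but doesn't prevent it, which is why physical custody of the device and its data still matters.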
Without a doubt, the voter's user experience, as well as the election poll worker or official’s user experience, is very important (note pic above)—and we're gathering plenty of requirements and feedback based on our current work. But before the TTV Digital Poll Book is fully baked, we need to do equal justice to those devilish details, in ways that meet the varying requirements of various States and localities.
Thoughts? Your ball (er, ballot?) GAM | out
OSDV's own Anne O'Flaherty presented last week at a National Institute of Standards and Technology (NIST) workshop on common data formats for election data interchange. As readers will know, we did a pile of work with the Virginia State Board of Elections (SBE) this past year. Anne led that project, and her presentation was about it, for an audience of data standards folks. But today I wanted to comment on a question from the audience.
Q: Was there any trouble with the open source nature of the software that SBE adopted? And the cloud deployment? Don't government IT people often have problems with that?
A: No problems at all! SBE specifically required open source software when they applied for the Federal grant for this project, and specifically wanted cloud deployments.
But the complete answer explains why SBE made those choices, and why their thinking differs from that of other government IT people who have problems with open source or cloud. I can only provide my personal reflections on this, of course, but I think that they are instructive.
Common Misconception on Open Source
On open source, government IT people often have the wrong problem, thinking that open source means whacked together by volunteers, and you have to deploy and support it yourself. Often true, but not in every case. In this case, the software was available under an open source license that very carefully and specifically addressed the usual concerns of government adopters. And in this case, SBE had an application hosting provider company that they contract with to do cloud deployments of web applications, and provide service and support.
Common Miscalculation on Cloud Hosting
On cloud, there is a similar misunderstanding, very well illustrated by I.T. procurement options in the VA state government.
- One option for deploying a new application is to work with the large system integrator to whom VA has outsourced their data center operations for some years now. That is, VA is responsible for the facilities and procurement, and the SI is responsible for deploying application software, supporting the servers and networks, etc.
- Then there is the cloud option. In that option, it is the hosting company that is responsible for facilities and hardware, and also does everything the SI does in the first approach.
That's "cloud" -- your provider has the physical infrastructure, and you don't. Once a government IT group has already outsourced data center operations to a for-profit company, then that is really the only difference.
Oh, wait, there is an important difference -- cloud providers nowadays are significantly less expensive than is typical of the cost structure defined years ago during a government procurement process for data center outsourcing.
Here's a Good Fit
With those mis-conceptions cleared up, consider the opportunity.
- The cloud is preferable for cost and/or service.
- The cloud provider is happy to deploy and maintain either or both of
- commercial software from a vendor,
- an open-source application from a public software repository.
- The open-source application has a license (without fee required) that neatly takes care of a number of numbing details of license law particular to governments.
That's not all there is to it, of course, and cloud deployment isn't right for everything -- for example, the voter record database really does need to be in the state's datacenter under the direct control of state I.T. people. But in the case of the open-source applications SBE deployed in 2012, both open-source and cloud proved to be a good fit.
Many thanks to the engaged audience for OSDVer Anne O'Flaherty's presentation yesterday at the National Institute of Standards and Technology (NIST), which hosted a workshop on Common Data Formats (CDFs) and standards for data interchange of election data. We had plenty to say, based on our 2012 work with the Virginia State Board of Elections (SBE), because that collaboration depends critically on CDFs. Anne and colleagues did a rather surprising amount of data wrangling over many weeks to get things all hooked up right, and the lessons learned are important for continuing work in the standards bodies, both NIST and the IEEE group working on CDF standards.
As requested by the attendees, here are online versions of the poster and the slides for the presentation "Bringing Transparency to Voter Registration and Absentee Voting."
"Why is There a Voting Tech Logjam?" -- that's a good question! A full answer has several aspects, but one of them is the acitivty (or in-activity) at the Federal level, that leads to very limited options in election tech. For a nice pithy explanation of that aspect, check out the current issue of the newsletter of the National Conference of State Legislators, on page 4. One really important theme addressed here is the opportunity for state lawmakers to make their decisions about what standards to use, to enable the state's local election officials make their decisions about what technology to make or adopt -- including purchase, in-house build, and (of course) adoption and adaptation of open-source election technology.
In this New Year, there are so many new opportunities for election tech work that our collective TrustTheVote head is spinning. But this week, anyway, we're focused on next steps in our online voter registration (OVR) work -- planning sessions last week, meetings with state election officials this week, and, I hope as a result, a specific plan of action for what we will call "Rocky 4". To refresh readers' memory, Rocky is the OVR system that spans several organizations:
- At OSDV, we developed and maintain the Rocky core software;
- RockTheVote adopted it and continues to adopt extensions to it;
- RockTheVote also adapts the Rocky technology to its operational environment (more on that below, with private-label and API);
- Open Source Labs operates Rocky's production system, and a build and test environment for new software releases;
- Several NGOs that are RockTheVote partners also use Rocky as their own OVR system, essentially working with RTV as a public service (no fees!) provider of OVR as an open-source application-as-a-service;
- For a growing list of states that do OVR, Rocky integrates with the state OVR system, to deliver to it the users that RTV and these various other NGOs have connected to online as a result of outreach efforts.
With that recap in mind, I want to highlight some of the accomplishments that this collective of organizations achieved in 2012, which paved the way for more cool stuff in 2013.
- All told, this group effort resulted in over a million -- 1,058,994 -- voter registration applications completed.
- Dozens of partner organizations used Rocky to register their constituents, with the largest and most active being Long Distance Voter.
- We launched a private-label capability in Rocky (more below) that was used for the first time this summer, and the top 3 out of 10 private-label partners registered about 84,000 voters in the first-time use of this new Rocky feature, in a period of about 12 weeks.
- We launched an API in Rocky (more below), and the early adopter organizations registered about 20,000 voters.
That's what I call solid work, with innovative election technology delivering substantial public benefit.
Lastly, to set the stage for upcoming news about what 2013 holds, let me briefly explain 2 of the new technologies in 2012, because they're the basis for work in 2013. Now, from the very beginning of Rocky over 3 years ago, there was a feature called "partner support" where a 3rd party organization could do a little co-branding in the Rocky application, get a URL that they could use to direct their users to Rocky (where the users would see the 3rd party org's logo), and all the resulting registration activity's stats would be available to the 3rd party org.
The Rocky API - But suppose that you're in an organization that has not just its own web site, but a substantial in-house web application? Suppose that you want your web application to do the user interaction (UI)? Well, the Rocky Application Programming Interface (API) is for just that. Your application does all the UI stuff, and when it's time to create a PDF for the voter to download, print, sign, and mail, your web app calls the Rocky API to request that, and gets the results back. (There's an analogous workflow for integrating with state OVR systems for paperless online registration.) The Rocky backend does all the database work, PDF generation, state integration, stats, and reporting, and the API also allows you to pull back stats if you don't want to manually use the Partners' web interface of Rocky.
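To make the shape of that workflow concrete, here is a minimal sketch in Python. To be clear, the endpoint URL and field names below are hypothetical illustrations, not Rocky's actual API; the point is only the division of labor the text describes: your web app collects the voter's answers, packages them for the backend, and pulls the printable PDF's location out of the reply.

```python
import json

# Hypothetical endpoint -- Rocky's real API paths and fields may differ.
ROCKY_API_URL = "https://example.org/api/v1/registrations"

def build_registration_request(voter_fields, partner_id):
    """Package the data your own UI collected into a JSON request body
    that your web app would POST to the Rocky backend."""
    return json.dumps({
        "partner_id": partner_id,   # identifies your organization for stats
        "voter": voter_fields,      # name, address, date of birth, etc.
        "format": "pdf",            # ask the backend for a printable form
    })

def handle_registration_response(response_body):
    """Pull the PDF download link out of the backend's JSON reply, so your
    app can hand it to the voter to download, print, sign, and mail."""
    reply = json.loads(response_body)
    return reply.get("pdf_url")
```

The same round-trip pattern would cover the stats-retrieval calls mentioned above: your app issues a request, the backend does the database work, and your app consumes structured results instead of scraping a web interface.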
Rocky Private Label - But suppose instead that you want something like that, but you don't actually want to run your own web application. Instead, you want a version of Rocky that's customized to look like a web property of your organization, even though it is operated by RockTheVote. That's what the private-label feature set is for. To get an idea of what it looks like, check out University of CA Student Association's private-label UI on Rocky, here.
That's the quick run-down on what we accomplished with Rocky in 2012, and some of the enabling technology for that. I didn't talk much about integration with state OVR systems, because enhancements to the 2012 "training wheels" are part of what we're up to now -- so more on that to come RSN.
And on behalf of all my colleagues in the TrustTheVote Project and at the OSDV Foundation, I want to thank RockTheVote, Open Source Labs, all the RTV partners, and last but not least several staff at state election offices, for making 2012 a very productive year in the OVR part of OSDV's work.
Despite today's blog docket being for RockTheVote, I just can't resist pointing out a recurring type of technology-triggered election dysfunction that is happening again, and is 100% preventable using election technology that we have already developed. Here's the scoop: in St. Lucie County, Florida, the LEOs are having trouble coming up with a county-wide grand total of votes, because their adding machine (for totting up the subtotals from dozens of voting machines) leaves plenty of room for human error. The full details are a bit complex in terms of handling of data sticks and error messages, but I've been told that in early voting in 94 precincts, 40 precincts weren't counted at all, and 54 were counted twice. Thank goodness someone noticed afterwards! (Well, 108 precincts totaled out of 94 might have been a tip-off.) Sure, human error was involved, but it is not a great situation where software allows this human error to get through.
We're only talking about software that adds up columns of numbers here! A much better solution would be one where the software refuses to add in any sub-total more than once, and refuses to identify as a finished total anything where there is a sub-total missing. Of course! And I am sure that the vendor of St. Lucie's GEMS system has a fix for this problem in some later version of the software or some successor product. But that's just not relevant if an election official doesn't have the time, budget, support contract, or procurement authority to test the better upgrade, and buy it if it works satisfactorily!
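The two cross-checks just described are simple enough to sketch in a few lines. This is not our actual tabulation software, just an illustration of the rule: never add the same precinct's subtotal twice, and never declare a final total while any precinct is missing.

```python
def tabulate(expected_precincts, subtotals):
    """expected_precincts: the precinct IDs that must report.
    subtotals: (precinct_id, vote_count) pairs, e.g. read off data sticks.
    Returns a grand total only if every precinct reported exactly once."""
    seen = {}
    for precinct, count in subtotals:
        if precinct in seen:
            # Cross-check 1: refuse to add any subtotal more than once.
            raise ValueError("duplicate subtotal for precinct %s" % precinct)
        seen[precinct] = count
    missing = set(expected_precincts) - set(seen)
    if missing:
        # Cross-check 2: refuse to call anything a finished total
        # while a subtotal is still missing.
        raise ValueError("missing subtotals for precincts: %s"
                         % sorted(missing))
    return sum(seen.values())
```

With 94 expected precincts, a batch of sticks that doubles 54 precincts and omits 40 would fail loudly at the first duplicate, instead of quietly producing a "total" over 108 precinct reports.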
What's sad is that it is completely preventable by using an alternative adding machine like the one we developed last year (OK, shameless plug) -- which of course does all these cross-checks. The LEOs would need to translate that vendor-proprietary subtotal data into a standard format -- and I know some volunteer programmers who I bet would do that for them. They'd need to use an ordinary PC to run the open source tabulation software -- and I know people who would set it up for them as a public service. And they'd have to spend less than half an hour using the system to get their totals, and comparing them to the totals that their GEMS system provided.
And maybe, in order for it to be kosher, it would have to be a "pilot effort" with oversight by the EAC; we've already discussed that with them and understand that the resource requirements are modest. I bet we could find a FL philanthropist who would underwrite the costs without a 2nd thought other than how small the cost was compared to the public benefit of the result - that is, avoiding one more day of delay in a series that's causing a State to not be done with the election, more than a week after election day.
It's just one example of the many possible election integrity benefits that can be demonstrated using technology that, so far at any rate, only non-commercial technologists have been willing to develop for governments to use to do their job correctly -- in this case, producing timely and accurate election results.
Much as I admire everybody at the New York Times, I have to disagree with Nick Bilton on his piece Disruptions: Casting a Ballot by Smartphone. I have to say I don't blame him though, especially given the broad range of coverage of the many kinds of election dysfunction that occurred and are still occurring now during state canvassing....
I've spent a fair bit of time over the last few days digesting a broad range of media responses to last week's election's operation, much of it reaction to President Obama's "we've got to fix that" comment in his acceptance speech. There's a lot of complaining about the long lines, for example, demands for explanation of them, or ideas for preventing them in the future -- and similar for the difficulty that some states and counties face for finishing the process of counting the ballots. It's a healthy discussion for the most part, but one that makes me sad because it mostly misses the main point: the root cause of most election dysfunction. I can explain that briefly from my viewpoint, and back that up with several recent events. The plain unvarnished truth is that U.S. local election officials, taken all together as the collective group that operates U.S. federal and state elections, simply do not have the resources and infrastructure to conduct elections that
- have large turnout and close margins, preceded by much voter registration activity;
- are performed with transparency that supports public trust in the integrity of the election being accessible, fair, and accurate.
There are longstanding gaps in the resources needed, ranging from ongoing budget for sufficient staff, to inadequate technology for election administration, voting, counting, and reporting.
Of course in any given election, there are local elections operations that proceed smoothly, with adequate resources and physical and technical infrastructure. But we've seen again and again, that in every "big" election, there is a shifting cast of distressed states or localities (and a few regulars), where administrative snafus, technology glitches, resource limits, and other factors get magnified as a result of high participation and close margins. Recent remarks by Broward County, FL, election officials -- among those with the most experience in these matters -- really crystallized it for me. When asked about the cause of the long lines, their response (my paraphrase) is that when the election is important, people are very interested in the election, and show up in large numbers to vote.
That may sound like a trivial or obvious response, but consider it just a moment more. Another way of saying it is that their resources, infrastructure, and practices have been designed to be sufficient only for the majority of elections that have less than 50% turnout and few if any state or federal contests that are close. When those "normal parameters" are exceeded, the whole machinery of elections starts grinding down to a snail's pace. The result: an election that is, or appears to be, not what we expect in terms of being visibly fair, accessible, accurate, and therefore trustworthy.
In other words, we just haven't given our thousands of localities of election officials what they really need to collectively conduct a larger-than-usual, hotly contested election, with the excellence that they are required to deliver, but are not able to. Election excellence is, as much as any of several other important factors, a matter of resources and infrastructure. If we could somehow fill this gap in infrastructure, and provide sufficient funding and staff to use it, then there would be enormous public benefits: elections that are high-integrity and demonstrably trustworthy, despite being large-scale and close.
That's my opinion anyway, but let me try to back it up with some specific and recent observations about specific parts of the infrastructure gap, and then how each might be bridged.
- One type of infrastructure is voter record systems. This year in Ohio, the state voter record system poorly served many LEOs who searched for but didn't find many many registered absentee voters to whom they should have mailed absentee ballots. The result was a quarter million voters forced into provisional voting -- where unlike casting a ballot in a polling place, there is no guarantee that the ballot will be counted -- and many long days of effort for LEOs to sort through them all. If the early, absentee, and election night presidential voting in Ohio had been closer, we would still be waiting to hear from Ohio.
- Another type of infrastructure is pollbooks -- both paper and electronic -- and the systems that prepare them for an election. As usual in any big election, we have lots of media anecdotes about people who had been on the voter rolls, but weren't on election day (that includes me, by the way). Every one of these instances slows down the line, causes provisional voting (which also takes extra time compared to regular voting), and contributes to long lines.
- Then there are the voting machines. For the set of places where voting depends on electronic voting machines, there are always some places where the machines don't start, take too long to get started, break, or don't work right. By now you've probably seen the viral youtube video of the touch screen that just wouldn't record the right vote. That's just emblematic of the larger situation of unreliable, aging voting systems, used by LEOs who are stuck with what they've got, and no funding to try to get anything better. The result: late poll opening, insufficient machines, long lines.
- And for some types of voting machines -- those that are completely paperless -- there is simply no way to do a recount, if one is required.
- In other places, paper ballots and optical scanners are the norm, but they have problems too. This year in Florida, some ballots were huge -- six pages in many cases. The older scanning machines physically couldn't handle the increased volume. That's bad but not terrible; at least people can vote. However, there are still integrity requirements -- for example, voters need to put their unscanned ballots in an emergency ballot box, rather than entrust a marked ballot to a poll worker. But those crazy huge ballots, combined with the frequent scanner malfunctions, created overstuffed emergency ballot boxes, and poll workers trying to improvise a way to store them. Result: more delays in the time each voter required, and a real threat to the secret ballot and to every ballot being counted.
Really, I could go on about more of the infrastructure elements that in this election had many examples of dysfunction, but I expect that you've seen plenty already. So why, you ask, is the infrastructure so inadequate to the task of a big, complicated, close election conducted with accessibility, accuracy, security, transparency, and earning public trust? Isn't there something better?
The sad answer, for the most part, is not at present. Thought leaders among local election officials -- in Los Angeles and Austin just to name a couple -- are on record that current voting system offerings just don't meet their needs. And the vendors of these systems don't have the ability to innovate and meet those needs. The vendors are struggling to keep up a decent business, and don't see the type of large market with ample budgets that would be a business justification for new systems and the burdensome regulatory process to get them to market.
In other cases, most notably with voter records systems, there simply aren't products anymore, and many localities and states are stuck with expensive-to-maintain legacy systems that were built years ago by big system integrators, that have no flexibility to adapt to changes in election administration, law, or regulation, and that are too expensive to replace.
So much complaining! Can't we do anything about it? Yes. Every one of those and other parts of election infrastructure breakdowns or gaps can be improved, and could, if taken together, provide immense public benefit if state and local election officials could use those improvements. But where can they come from? Especially if the current market hasn't provided, despite a decade of efforts and much federal funding? Longtime readers know the answer: by election technology development that is outside of the current market, breaks the mold, and leverages recent changes in information technology, and the business of information technology. Our blog in the coming weeks will have several examples of what we've done to help, and what we're planning next.
But for today, let me be brief with one example, and details on it later. We've worked with the state of Virginia to build one part of new infrastructure for voter registration, voter record lookup, and reporting, that meets existing needs and offers needed additions that the older systems don't have. The VA state board of elections (SBE) doesn't pay any licensing fees to use this technology -- that's part of what open source is about. They don't have to acquire the software and deploy it in their datacenter, and pay additional (and expensive) fees to their legacy datacenter operator, a government systems integrator. They don't have to go back to the vendor of the old system to pay for expensive but small and important upgrades in functionality to meet new election laws or regulations.
Instead, the SBE contracts with a cloud services provider, who can -- for a fraction of the costs in a legacy in-house government datacenter operated by a GSI -- obtain the open-source software, integrate it with the hosting provider's standard hosting systems, test, deploy, operate, and monitor the system. And the SBE can also contract with anyone they choose, to create new extensions to the system, with competition for who can provide the best service to create them. The public benefits because people anywhere and anytime can check if they are registered to vote, or should get an absentee ballot, and not wait like in Ohio until election day to find out that they are one in a quarter million people with a problem.
And then the finale, of course, is that other states can also adopt this new voter records public portal, by doing a similar engagement with that same cloud hosting provider, or any other provider of their choice that supports similar cloud technology. Virginia's investment in this new election technology is fine for Virginia, but can also be leveraged by other states and localities.
After many months of work on this and other new election technologies put into practical use, we have many more stories to tell, and more detail to provide. But I think that if you follow along and see the steps so far, you may just see a path towards these election infrastructure gaps getting bridged, and flexibly enough to stay bridged. It's not a short path, but the benefits could be great: elections where LEOs have the infrastructure to work with excellence in demanding situations, and can tangibly show the public that they can trust the election as having been accessible to all who are eligible to vote, performed with integrity, and yielding an accurate result.
Slate Magazine posted an article this week, which in sum and substance suggests that trade secret law makes it impossible to independently verify that voting machines are working correctly. In short, we say, "Really, and is this a recent revelation?" Of course, those who have followed the TrustTheVote Project know that we've been suggesting this in so many words for years. I appreciate that author David Levine refers to elections technology as "critical infrastructure." We've been suggesting the concept of "critical democracy infrastructure" for years.
To be sure, I'm gratified to see this article appear, particularly as we head to what appears to be the closest presidential election since 2000. The article is totally worth a read, but here is an excerpt worth highlighting from Levine's essay:
The risk of the theft (known in trade secret parlance as misappropriation) of trade secrets—generally defined as information that derives economic value from not being known by competitors, like the formula for Coca-Cola—is a serious issue. But should the “special sauce” found in voting machines really be treated the same way as Coca-Cola’s recipe? Do we want the source code that tells the machine how to register, count, and tabulate votes to be a trade secret such that the public cannot verify that an election has been conducted accurately and fairly without resorting to (ironically) paper verification? Can we trust the private vendors when they assure us that the votes will be assigned to the right candidate and won’t be double-counted or simply disappear, and that the machines can’t be hacked?
Well, we all know (as he concludes) that all of the above have either been demonstrated to be a risk or have actually transpired. The challenge is that the otherwise legitimate use of trade secret law ensures that the public has no way to independently verify that voting machinery is properly functioning, as was discussed in this Scientific American article from last January (also cited by Levine.)
Of course, what Levine is apparently not aware of (probably our bad) is that there is an alternative approach on the horizon, regardless of whether the government ever determines a way to "change the rules" for commercial vendors of proprietary voting technology with regard to ensuring independent verifiability.
As a recovering IP lawyer, I'll add one more thing we've discussed within the TrustTheVote Project and the Foundation for years: this is a reason that patents -- including business method patents -- are arguably helpful. Patents are about disclosure and publication; trade secrets are, by definition, not. Of course, to be sure, a patent alone would not be sufficient because within the intricacies of a patent prosecution there is an allowance that only requires partial disclosure of software source code. Of course, "partial disclosure" must meet a test of sufficiency for one "reasonably skilled in the art" to "independently produce the subject matter of the invention." And therein lies the wonderful mushy grounds on which to argue a host of issues if put to the test. But ironically, the intention of partial code disclosure is to protect trade secrets while still facilitating a patent prosecution.
That aside, I also note that in the face of all the nonsense floating about in the blogosphere and other mainstream media -- whether charges that Romney's ownership interest in voting machinery companies is a pathway to steal an election, or suggestions of a conspiracy by a Soros-connected, Spanish-based voting technology company to deliver tampered tallies -- Levine's article is a breath of fresh air deserving the attention ridiculously lavished on these latest urban myths.
Strap in... T-12 days. I fear a nail biter from all view points.
On the eve of 2012 we so need to check in here and let you know we're still fighting the good fight and have been totally distracted by a bunch of activities. There is much to catch you up on and we'll start doing that in the ensuing days, but for now we simply wanted to check in and wish everyone a peaceful and prosperous new year. And of course, we intend that to "prosper" is to enrich yourself in any number of ways, not simply financially, but intellectually, physically, and spiritually as well... however you choose to do so ;-)
Looking back while looking ahead, as this afternoon before the new year urges us all to do, we are thankful for the great headway we made in 2011 (and we'll have much more to say about those accomplishments separately), and we are energized (and resting up) for the exciting and intense election year ahead. And that brings me to two thoughts I want to share as we approach the celebration of this New Year's Eve 2011.
1. A Near #FAIL
First, if there was one effort or project that approached "#fail" for us this year it was our intended work to produce a new open data, open source elections night reporting system for Travis County, TX, Orange County, CA and others. We were "provisionally chosen" by Travis County pending our ability to shore up a gap in the required funding to complete some jurisdiction specific capabilities.
We approached prospective backers in addition to our current ones, but unfortunately we could not get everyone on board quickly enough -- we were trying to do so on the eve of their budgetary commitments being finalized for other 2012 election year funding, mostly around voter enfranchisement (more on that in a moment). We were also short answers to two questions from Travis County -- answers that could well have dramatically reduced the remaining funding gap and allowed us to accelerate toward final selection and be ready in time for 2012.
For unexplained reasons, Travis County has fallen silent: they have not answered our questions, responded to our inquiries, or continued to advance our discussions. We fear that something has happened in their procurement process and they simply haven't gotten around to the courtesy of letting us know. This is frustrating because we've been left in a state of purgatory -- really unable to determine where and how to allocate resources until this is resolved. The buck stops with me (Gregory) on this point, as I should've pushed harder for answers from both sides: Travis on the technical issues and our Backers on the funding question.
I say this was a "near #fail" because it clearly is unresolved: we know Orange County, as well as other jurisdictions, and media channels such as the AP remain quite keen on our design, the capabilities for mobile delivery, the open data, and of course the open source alternative to expensive (on a total cost of ownership or "TCO" basis) proprietary black-box solutions. Moreover, the election night reporting system is a "not insignificant" component to our open source elections technology framework, and its design and development will continue. And perhaps we'll get some clarity on Travis County, close the funding gap, and get that service launched in time for next Fall's election frenzy. Stay tuned.
So, that is but one of several distractions that allowed this vital blog to sit idle for the last half of summer and all of the Fall. We'll share more about the other distractions in upcoming posts as we get underway with 2012. But I have a closing comment about the 2012 election season in this final evening of 2011.
2. The 2012 Battles on the Front-lines of Democracy Will Start at the Polling Place
Millions of additional Americans will be required to present photo ID when they arrive at the polls in four states next year. Kansas, Rhode Island, Tennessee and Texas will require voters to prove their identities, bringing the total number of States to 30 that require some form of voter identification, this according to the National Conference of State Legislatures.
This is an issue that has reached the boiling point and, we predict, will set off a storm of lawsuits (and they are happening already). It ranks very close to redistricting in terms of its impact on voter enfranchisement, according to one side of the argument. Opponents also argue that such regulations impose an unfair barrier to those who are less likely to have photo IDs, including the poor and the elderly. The proponents stand steadfast that the real issue is voter fraud and this is the best way to address it. Of course, the trouble with that argument is that a five-year U.S. DoJ probe spanning two different administrations found little discernible evidence (53 cases) of widespread voter fraud. And yet, there are also reasonable arguments suggesting that regardless of voter fraud, there seems to be no difficulty in our elderly, disabled, or poor obtaining ID cards (where required) in order to obtain Medicare, Medicaid, and food stamps.
To be clear: the Foundation has no opinion on the matter of voter ID. We see arguments on both sides. Our focus is simply this: any voter identification process must be fair, not burdensome, transparent, and uniformly applied. We're far more vested in how to make technology to facilitate friction-free access to the polling place that produces a verifiable, audit-ready, and accountable paper trail for all votes. We do believe that implementing voter ID as a means to restrict the vote is troublesome... as troublesome as preventing voter ID in order to passively enable those who are not entitled as a matter of citizenship to cast a ballot.
Regardless of how you come down on this issue, we believe it is where the 2012 election season's battles over enfranchising or disenfranchising voters will begin.
And with that, we say: 2012, bring it. We're ready. Be there: it's going to be an interesting experience. Here we go. Cheers, Greg
In a recent posting, I recalled the old-fashioned traditional proprietary-IT-think of vendors leveraging their proprietary data for their customers, and contrasted that with election technology where the data is public. In the "open data" approach, you do not need to have integrated reporting features as part of a voting system or election management system. Instead, you can choose your own reporting system, hook it up to your open database of election data, and mine that data for whatever reports you want. And if you need help, only a few days of a reporting-systems consultant can get you set up quite quickly. The same applies to what we used to call "ad hoc querying" in the olden enterprise IT days, and now might be "data mining". Well, every report is the result of doing one or more database queries, and formatting the results. When you can do ad hoc creation of new report templates, then an ad hoc query is really just a new report. With the open-data approach, there is no need to buy any additional "modules" from a voting system vendor in order to be able to do querying, reporting, or data mining. Instead, you have ready access to the data with whatever purpose-built tools you choose.
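A toy illustration of that point: once election results live in an open database, a "report" is just a query that any off-the-shelf tool can run. The schema below is invented for the example (real CDF-based data would be richer), and SQLite stands in for whatever database a jurisdiction actually uses.

```python
import sqlite3

def turnout_report(rows):
    """rows: (precinct, ballots_cast, registered_voters) tuples, as might be
    exported from an open election data store.  Returns turnout percentage
    per precinct -- an "ad hoc report" that needed no vendor module."""
    db = sqlite3.connect(":memory:")  # stand-in for the open database
    db.execute(
        "CREATE TABLE results (precinct TEXT, ballots INTEGER, registered INTEGER)"
    )
    db.executemany("INSERT INTO results VALUES (?, ?, ?)", rows)
    # The report is nothing more than a query plus formatting.
    return db.execute(
        "SELECT precinct, ROUND(100.0 * ballots / registered, 1) "
        "FROM results ORDER BY precinct"
    ).fetchall()
```

Swap in a different SELECT and you have a different report; that is the whole argument for open data over bought-and-paid-for "reporting modules."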
Today, I want to underline that point as applied to mobility, that is, the use of apps on mobile devices (tablets, smart phones, etc.) to access useful information in a quick and handy on-the-go small-screen form factor. Nowadays, lots of folks want "an app for that" and election officials would like to be able to provide. But the options are not so good. A proprietary system vendor may have an app, but it might not be what you had in mind; and you can't alter it. You might get a friendly government System Integrator to crack open your proprietary voting system data and write some apps for you, but that is not a cheap route, either.
What, in contrast, is the open route? It might seem a detour to get you where you want to go, but consider this. With open data, there is no constraint on how you use it, or what you use it with. If you use an election management system that has a Web services API, you can publish all that data to the whole world in a way that anyone's software can access it-- including mobile apps-- including all the data, not just what happens to be available in proprietary product's Web interface. That's not just open-source and "open data" but also "complete data."
Then for some basic apps, you can get friendly open-gov techies to make something simple but effective for starters, and make the app open source. From there on out, it is up to the ingenuity of the tens of thousands of mobile app tinkerers and good government groups (for an example, read about one of them here, and then try the app yourself) to come up with great ideas about how to present the data -- and the more options there are, the more election data, the public's data, gets used for the public good.
I hope that that picture sounds more appealing than closed systems. But to re-wind to Proprietary Election Technology Vendors' (PETV) offerings to Local Election Officials (LEO), consider this dialogue as the alternative to "open data, complete data."
LEO: I'd like to get an election data management solution with flexible reporting, ad hoc querying, a management dashboard, a nifty graphical public Web interface, and some mobile apps.
PETV: Sure, we can provide it. We have most of that off the shelf, and we can do some customization work and professional services to tailor it to your needs. Just guessing from what you asked for, that will be $X for the software license, $Y per year for support, $Z for the customization work, and we'll need to talk about yearly support for the custom stuff.
LEO: Hmmm. Too much for me. Bummer.
PETV: Well, maybe we can cut you a special deal, especially if you lower your sights on that customization stuff.
LEO: Hmmm. Then I'm not really getting all I asked for, but I am getting something I can afford. ... But will you all crack open your product's database with a Web services API so that anybody can write a mobile app for it, for any mobile device in the world?
PETV: Wow! That would be some major customization. I think you'll find our mobile app is just fine.
LEO: What about cracking open the database so I can use my choice of reporting tools?
PETV: Ah, no, actually, and I think you'll find our reporting features are really great.
I'll stop the dialogue here (it is getting painful to listen to) and stop altogether for today, leaving the reader to contrast it with the open-data, complete-data approach of an open election data management system: core functions and features, basic reporting, basic mobility, and above all the openness for anyone to data-mine or mobilize the election data that is, in fact, the people's information.
During some recent election technology adoption discussions, I've realized how some standard proprietary-IT-think has affected acquisitions of election technology. And it is a mindset that I used to have too, back when I was in the enterprise IT infrastructure business. Back then, the normal thing was to have a core technology with some primary value, a road map of a couple of major extensions to the core technology, and a product roadmap for adding functions and features. Of course we wanted our customers to want more of our stuff as time went by, and we wanted to support our pricing model with customer options for this growing set of features.
And one more-or-less knee-jerk response was an expanding feature set for "reporting." The idea was familiar: the vendor lets you, the customer, use their software; the software builds up a valuable base of information (a proprietary information base) about its history of use and what it can tell you about your IT usage; so the software should be able to prepare reports for you that contain various kinds of juicy information nuggets. And the big assumption was that only that software had the smarts to do so.
And that went double for the cases where a few "reports" were small enough in scope, but used commonly enough, that it was better to present a handful of them as graphics on a single administrative screen. Thus the "management dashboard" -- a new spin on higher product value.
Rewind to the present day, and I find it curious that this mindset is still around, including among adopters of election technology. But in election-land, there is a huge missing concept here: inside of election technology, the data is not proprietary, not specific to a vendor. Sure, a closed-system vendor may make the data format(s) proprietary, but the data of elections, contests, candidates, ballots, voters, vote totals -- all that and more is by rights public data.
Now, here is the "open" factor: In an open system, all that public data is freely available. Anyone, or anyone's code, can access the data. Take the example of the TTV Election Manager and TTV Tabulator working to consolidate vote counts. The Election Manager's database is an ordinary database with a public schema. If an election official wants some specific reports generated, it is only one option to ask for Election Manager or Tabulator features to slice and dice the data and prepare nifty tables and graphics. And it is tempting to want that in the same Web application interface of the Election Manager. That temptation is underlined because existing proprietary EMSs do have the "you can only get reports from me" concept -- though seemingly without being able to please all users with one set of limited reporting features.
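What does a "public schema" buy you? As a toy illustration, the snippet below builds a small in-memory table of per-precinct vote totals and consolidates them with an ordinary SQL query. The table and column names are my own assumptions for the sketch, not the actual TTV Election Manager schema:

```python
import sqlite3

# A toy stand-in for an election management database with a
# publicly documented schema (table/column names are invented).
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE vote_totals (
    contest TEXT, candidate TEXT, precinct TEXT, votes INTEGER)""")
conn.executemany(
    "INSERT INTO vote_totals VALUES (?, ?, ?, ?)",
    [("Mayor", "A. Smith", "P-1", 300),
     ("Mayor", "A. Smith", "P-2", 250),
     ("Mayor", "B. Jones", "P-1", 275),
     ("Mayor", "B. Jones", "P-2", 225)])

# Because the schema is public, any tool can consolidate counts
# directly -- no vendor-specific "reporting feature" in the way.
rows = conn.execute("""SELECT candidate, SUM(votes)
                       FROM vote_totals
                       WHERE contest = 'Mayor'
                       GROUP BY candidate
                       ORDER BY candidate""").fetchall()
```

Nothing here is specific to one vendor's product; that is exactly the point of an open, documented schema.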
But a better option is to recognize that all the data is there already, sitting in a publicly documented database which can be accessed directly by any purpose-built reporting system. Get the reporting system of your choice -- there are tons of them, ranging from the grand-daddy of them all, Crystal Reports (now offered by software giant SAP), to the reporting offerings of the venerable open-source GNU project. Hook up the reporting system to your database of election data (yes, that can be a real election management database in the picture above), and design and generate reports to your heart's content. And even better: a purpose-built reporting package probably has many more handy features than either a product manager or a customer of a voting system product would think of.
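The "bring your own reporting tool" idea can be as simple as this sketch: read rows out of any database with a documented schema and emit a turnout report in a neutral format like CSV, which every reporting package can ingest. The table, columns, and figures are invented for illustration:

```python
import csv
import io
import sqlite3

# Invented sample data standing in for a documented election database.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE turnout (
    precinct TEXT, registered INTEGER, ballots_cast INTEGER)""")
conn.executemany("INSERT INTO turnout VALUES (?, ?, ?)",
                 [("P-1", 1000, 620), ("P-2", 800, 512)])

# Write a simple per-precinct turnout report as CSV -- a format any
# off-the-shelf reporting tool (or spreadsheet) can consume.
buf = io.StringIO()
writer = csv.writer(buf)
writer.writerow(["precinct", "registered", "ballots_cast", "turnout_pct"])
for precinct, registered, cast in conn.execute(
        "SELECT * FROM turnout ORDER BY precinct"):
    writer.writerow([precinct, registered, cast,
                     round(100 * cast / registered, 1)])
report = buf.getvalue()
```

A real deployment would point a full reporting package at the live database instead; the principle -- open data in, your choice of tool out -- is the same.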
And that's the power of "open data": using the best tool for each job -- an election data management system to manage election data, a voting system to collect votes, and a reporting system to generate a wide variety of customizable reports. And that power creates options and trade-offs, which are essential in funding-constrained U.S. election-land. It's tempting to want one vendor to have a completely integrated product of everything, but it may well be more cost-effective -- and ultimately more useful -- to have a collection of packages, each of which gives you the best bang for the buck for each task you need automated.
PS: Next time on "Detours" -- mobile computing as another example of a detour from traditional proprietary-IT-think in election-land.