In a previous post I said that our ENRS project is basically an effort to investigate a set of assumptions about how the reporting of election results can be transformed with innovations right at the source -- in the hands of the local election officials who manage the elections that create the data. One of those assumptions is that we -- and I am talking about election technologists as a broad community, not only the TrustTheVote Project -- can create election data standards that matter in five ways:
- Flexible enough to encompass data coming from a variety of elections organizations nationwide.
- Structured to accommodate raw source data from a variety of legacy and/or proprietary systems, so that it can feasibly be translated or converted into a standard, common data format.
- Able to express the most basic results data simply: how many votes each candidate received.
- Able to express not just winners-and-losers data but nearly all of the relevant information that election officials currently have yet don't widely publish (e.g., data on participation and performance).
- Flexible enough to express detailed breakdowns of the raw data into precinct-level views, including all of the relevant information beyond winners and losers.
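To make that a bit more concrete, here is a minimal sketch of what one normalized results record might look like, and how software could answer basic questions against it. Every field name and number here is a hypothetical illustration, not the actual standard format under development:

```python
# Hypothetical sketch of a normalized results record in a common data
# format. Field names and values are invented for illustration only.
contest = {
    "contest_id": "us-house-district-1",
    "jurisdiction": "Example County",
    # The most basic results data: how many votes each candidate received.
    "candidates": {"Smith": 4200, "Jones": 3800},
    # Beyond winners and losers: participation and performance data.
    "registered_voters": 12000,
    "ballots_cast": 8100,
    # Precinct-level breakdown of the same contest.
    "precincts": {
        "P-01": {"Smith": 2500, "Jones": 1500, "ballots_cast": 4050},
        "P-02": {"Smith": 1700, "Jones": 2300, "ballots_cast": 4050},
    },
}

def winner(record):
    """Candidate with the most votes in the jurisdiction-wide totals."""
    return max(record["candidates"], key=record["candidates"].get)

def turnout(record):
    """Participation rate: ballots cast divided by registered voters."""
    return record["ballots_cast"] / record["registered_voters"]

print(winner(contest))             # Smith
print(round(turnout(contest), 3))  # 0.675
```

The point of a structure like this is that the winners-and-losers numbers and the participation data live in one record, so the same tools can serve both the casual reader and the data-hungry analyst.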
Hmm. It took a bunch of words to spell that out, and for everyone but election geeks it may look daunting. To simplify, here are three important things we're doing to begin proving out those assumptions.
- We're collecting real election results data from a single election (November 2012) from a number of different jurisdictions across the country, together with supporting information about election jurisdictions' structure, geospatial data, registration, participation, and more.
- We're learning about the underlying structure of this data in its native form, by collaborating with the local elections organizations that know it best.
- We're normalizing the data, rendering it in a standard data format, and using software to crunch that data, in order to present it in a digestible way to regular folks who aren't "data geeks."
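As a sketch of that normalization step, suppose two jurisdictions publish the same kind of results in two different native layouts. A converter per source maps each into one common record shape, and from there a single piece of software can crunch everything the same way. Both input layouts and all field names below are invented for illustration; no real jurisdiction's export is being described:

```python
# Hypothetical illustration of normalizing results from two different
# native formats into one common record shape.

# "Jurisdiction A" exports semicolon-delimited lines: precinct;candidate;votes
raw_a = "P-01;Smith;120\nP-01;Jones;95\nP-02;Smith;80"

# "Jurisdiction B" exports a nested structure keyed by candidate.
raw_b = {"Smith": {"P-10": 300}, "Jones": {"P-10": 310, "P-11": 40}}

def normalize_a(text):
    """Convert jurisdiction A's delimited text into common records."""
    records = []
    for line in text.splitlines():
        precinct, candidate, votes = line.split(";")
        records.append({"precinct": precinct,
                        "candidate": candidate,
                        "votes": int(votes)})
    return records

def normalize_b(data):
    """Convert jurisdiction B's nested export into the same shape."""
    return [{"precinct": p, "candidate": c, "votes": v}
            for c, precincts in data.items()
            for p, v in precincts.items()]

def totals(records):
    """Once data is in the common format, one routine crunches it all --
    here, jurisdiction-wide vote totals per candidate."""
    out = {}
    for r in records:
        out[r["candidate"]] = out.get(r["candidate"], 0) + r["votes"]
    return out

print(totals(normalize_a(raw_a)))  # {'Smith': 200, 'Jones': 95}
print(totals(normalize_b(raw_b)))  # {'Smith': 300, 'Jones': 350}
```

The design point is the one in the list above: the per-source converters carry all the knowledge about legacy formats, while everything downstream -- aggregation, presentation for regular folks -- only ever sees the common format.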
And all of that rests on one set of assumptions we're working from: that each of these activities is feasible and can bear fruit in an exploratory project. Steady as she goes; so far, so good.