Evaluation in the Post-Data Age: What Evaluators Can Learn from the 2012 Presidential Election

Stop me if you’ve heard this one before.  An evaluator uses data to assess the effectiveness of a program, arrives at a well-reasoned but disappointing conclusion, and finds that the conclusion is not embraced—perhaps ignored or even rejected—by those with a stake in the program.

People—even evaluators—have difficulty accepting new information if it contradicts their beliefs, desires, or interests.  It’s unavoidable.  When faced with empirical evidence, however, most people will open their minds.  At least that has been my experience.

During the presidential election, reluctance to embrace empirical evidence was virtually universal.  I began to wonder—had we entered the post-data age?

The human race creates an astonishing amount of data—2.5 quintillion bytes every day.  In the last two years alone, we created 90% of all the data in human history.

In that time, I suspect that we also engaged in more denial and distortion of data than in all human history.
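Those volume figures invite a quick back-of-the-envelope check.  The sketch below assumes, unrealistically, that the 2.5-quintillion-bytes-per-day rate held constant over the two years, so treat it as an order-of-magnitude illustration rather than a real estimate.

    # Rough arithmetic on the figures above; the constant daily rate is an
    # assumption that understates growth, so this is only an order-of-magnitude check.
    bytes_per_day = 2.5e18                     # 2.5 quintillion bytes
    last_two_years = bytes_per_day * 2 * 365   # bytes created in the last two years
    all_time_implied = last_two_years / 0.9    # if that is 90% of everything ever created
    print(f"Last two years: {last_two_years:.1e} bytes")
    print(f"Implied all-time total: {all_time_implied:.1e} bytes")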

The election was a particularly bad time for data and the people who love them—but there was a bright spot.

On election day I boarded a plane for London (after voting, of course).  Although I had no access to news reports during the flight, I already knew the result—President Obama had about an 84% chance of winning reelection.  When I stepped off the plane, I learned he had indeed won.  No surprise.

How could I be so certain of the result when the election was widely described as too close to call?  I read the FiveThirtyEight blog, that’s how.  By using data—every available, well-implemented poll—and a strong statistical model, Nate Silver was able to produce a highly credible estimate of the likelihood that one or the other candidate would win.

Most importantly, the estimate did not depend on the analysts’—or anyone’s—desires regarding the outcome of the election.
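To make the basic idea concrete, here is a minimal, purely illustrative sketch of poll aggregation: pooling many polls yields a steadier estimate than any single poll does.  The poll numbers below are invented, the inverse-variance weighting and normal error model are my own simplifying assumptions, and Silver’s actual FiveThirtyEight model was far more elaborate (house effects, state-level correlations, economic fundamentals).

    from math import sqrt
    from statistics import NormalDist

    # Hypothetical national polls: (candidate A share, candidate B share, sample size).
    # These figures are invented for illustration; they are not real 2012 polls.
    polls = [
        (0.50, 0.47, 1200),
        (0.49, 0.48, 900),
        (0.51, 0.46, 1500),
        (0.48, 0.48, 800),
    ]

    # Weight each poll by the inverse of its rough sampling variance,
    # so larger, less noisy polls count for more.
    weights, margins = [], []
    for a, b, n in polls:
        margins.append(a - b)
        weights.append(n / (a * (1 - a) + b * (1 - b)))

    pooled_margin = sum(w * m for w, m in zip(weights, margins)) / sum(weights)
    pooled_se = sqrt(1 / sum(weights))

    # Assuming a normal error model, estimate the chance that A's true lead is positive.
    p_a_leads = 1 - NormalDist(mu=pooled_margin, sigma=pooled_se).cdf(0)
    print(f"Pooled margin: {pooled_margin:+.3f}; P(A leads): {p_a_leads:.0%}")

The particular formula is not the point; the point is that the pooled estimate, and its stated uncertainty, come from the data rather than from anyone’s preferred story.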

Although this first-rate work was available to all, television and print news coverage was dominated by unsophisticated analysis of poll data.  How often was the result of a single poll—one data point—presented in a provocative way and its implications debated for as long as breath and column inches could sustain?

Isn’t this the way that we interpret evaluations?

News agencies were looking for the story.  The advocates for each candidate were telling their stories.  Nothing wrong with that.  But when stories shape the particular bits of data that are presented to the public, rather than all of the data being used to shape the story, I fear that the post-data age is already upon us.

Are evaluators expected to do the same when they are asked to tell a program’s story?

It has become acceptable to use data poorly or opportunistically while asserting that our conclusions are data-driven.  All the while, much stronger conclusions based on better data and data analysis are all around us.

Do evaluators promote similar behavior when we insist that all forms of evaluation can improve data-driven decision making?

The New York Times reported that on election night one commentator, with a sizable stake in the outcome, was unable to accept that actual voting data were valid because they contradicted the story he wanted to tell.

He was already living in the post-data age.  Are we?


Filed under Commentary, Evaluation, Evaluation Quality, Program Evaluation

6 responses to “Evaluation in the Post-Data Age: What Evaluators Can Learn from the 2012 Presidential Election”

  1. I think there are theories that try to address these issues, e.g., Fuzzy-Trace Theory: “Intuition is adaptive and even advanced, but nevertheless imperfect in systematic ways.” https://vlab2.gsb.columbia.edu/decisionsciences.columbia.edu/uploads/File/Articles/reyna1.pdf

  2. Nice post. Question: Did I miss the data age, or did we just skip from the pre-data age to the post-data age? Is the difference just that we have data now and did not before?

    • Chris,

      We certainly have more data now than in the past, but that just adds insult to injury. In the post-data age, we use data to convince ourselves and others of what we want to be true rather than use data to discover what may be true. Data are weapons, not tools.

  3. One lesson for evaluation that could be learned from this election is that, to get good data for predicting and making decisions about the future, we need to look beyond the surface data and statistics. We need to understand the deep contextual forces involved and how they affect what we see on the surface.

    In this presidential election, Allan Lichtman successfully predicted the winner of the popular vote for the 8th straight time. And he made his prediction in 2010, before the Republican nominee was chosen. How? NPR reported, “…he happened to be sitting down at dinner next to a geophysicist. And they were talking about how earthquakes form, that they’re driven by these faults that are deep under the surface of the Earth. And they said: Is it possible elections work exactly the same way? ….”

    For whatever reason, the major media often tend to report things as unknowable matters of opinion and endless debate when, like you said, “All the while, much stronger conclusions based on better data and data analysis are all around us.”


    • Context is everything. We can account for it in many ways reasonably well, and in many more ways poorly. There are incentives to do so poorly–it’s not merely a lack of technical understanding. I fear these incentives are insensitive to our negative assessment of them.

  4. Pingback: Things worth reading | kimberly bowman
