Category Archives: Commentary

Obama’s Inaugural Address Calls for More Evaluation


Today was historic and I was moved by its import. As I was soaking in the moment, one part of President Obama's inaugural address caught my attention. There has been a great deal of discussion in the evaluation community about how an Obama administration will influence the field. He advocates a strong role for government and for nonprofit organizations that serve the social good, but the economy is weak and tax dollars are short. An oft-repeated question was whether he would push for more evaluation or less. He seems to have provided an answer in his inaugural address:

"The question we ask today is not whether our government is too big or too small, but whether it works – whether it helps families find jobs at a decent wage, care they can afford, a retirement that is dignified. Where the answer is yes, we intend to move forward. Where the answer is no, programs will end. And those of us who manage the public's dollars will be held to account – to spend wisely, reform bad habits, and do our business in the light of day – because only then can we restore the vital trust between a people and their government."

We have yet to learn Obama's full vision for evaluation, especially the form it will take and how it will be used to improve government. But his statement seems to put him squarely in step with the bipartisan trend that emerged in the 1990s and has resulted in more, and more rigorous, evaluation. President Clinton took perhaps the first great strides in this direction, mandating evaluations of social programs in an effort to promote accountability and transparency. President Bush went further when many of the agencies under his charge developed a detailed (and controversial) working definition of evaluation as scientifically based research. What will be Obama's next step? Only time will tell.


Filed under Commentary, Evaluation, Program Evaluation

Theory Building and Theory-Based Evaluation


When we are convinced of something, we believe it. But when we believe something, we may not have been convinced. That is, we do not come by all our beliefs through conscious acts of deliberation. It’s a good thing, too, for if we examined the beliefs underlying our every action we wouldn’t get anything done.

When we design or evaluate programs, however, the beliefs underlying these actions do merit close examination. They are our rationale, our foothold in the invisible; they are what endow our struggle to change the world with possibility. Continue reading


Filed under Commentary, Design, Evaluation, Program Design, Program Evaluation, Research

Should We Fear Subjectivity?


Like many this summer, I found myself a bit perplexed by the way Olympic athletes in many sports received scores. It was not so much the scoring systems per se that had me flummoxed, although they were far from simple. Rather it was realizing that, while the systems for scoring gymnastics, ice skating, boxing, and sailing had been overhauled over the past few years in an effort to remedy troubling flaws, the complaint that these scores are subjective — and by extension unfair — lingered.

This dissatisfaction reflects an unwritten rule that applies to our efforts to evaluate the quality or merit of any human endeavor: if the evaluation is to be perceived as fair, it must demonstrate that it is not subjective. But is this a useful rule? Before we can wrestle with that question, we need to consider what we mean by subjective and why we feel compelled to avoid it. Continue reading


Filed under Commentary, Evaluation, Program Evaluation

Asking Questions, Getting Answers


The late satirical author Douglas Adams spun a yarn about a society determined to discover the meaning of life. After millennia, its people had developed a computer so powerful it could provide the answer. Gathering around on that long-anticipated day, the people waited for the computer to reveal the answer. It was 42. Puzzled and more than a little angry, the people wanted to know how this could be. The computer responded that the answer to the big question of the meaning of life, the universe and everything was most definitely 42, but, as it unfortunately turned out, neither the computer nor the people knew exactly what the question was.

A similar fate can befall evaluations, more than one of which has produced a precise answer to a question never framed or a question framed so vaguely as to be useless. It is easy enough to avoid this fate when you realize that, at their most basic level, evaluations address only three big questions: Can it work? Did it work? Will it work again? We call them "Can," "Did," and "Will" for short. Of course, we can ask other questions, but they tend to be in support of or in response to the big three. What good is asking, for example, "How does it work?" before you believe that it can, did, or will? Continue reading


Filed under Commentary, Elements of Evaluation, Research

Randomized Trials: Old School, New Trend


To my mind, surfing hit its peak in the 1950s when relatively light longboards first became available.

Enthusiastic longboarders still ride the waves, of course, but their numbers have dwindled as shorter, more maneuverable boards became more fashionable. Happily, longboards are now making a comeback, mostly because they possess a property that shortboards do not: stability. With a stable board, novices can quickly experience the thrill of the sport and experts can show off skills like nose walks, drag turns, and tandem riding that are unthinkable on today's light-as-air shortboards.

The new longboards are different — and, I think, better — because their designs take advantage of modern materials and are more affordable and easier to handle than their predecessors. It just goes to show that everything old becomes new again, and with renewed interest comes the opportunity for improvement.

The same can be said for randomized trials (RTs). They were introduced to the wider field of the social sciences in the 1930s, about the time that surfing was being introduced outside of Hawaii. RTs became popular through the 1950s, at least in concept, because they can be challenging and expensive to implement in practice. During the '60s, '70s, and '80s, RTs were supplanted by simpler and cheaper types of evaluation. But a small and dedicated cadre of evaluators stuck with RTs because of a property that no other form of evaluation has: strong internal validity. RTs make it possible to ascertain with a high degree of certainty — higher than any other type of evaluation — whether a program made a difference. Continue reading


Filed under Commentary, Evaluation, Program Evaluation, Research