Category Archives: Evaluation

What the Hell is Quality?

In Zen and the Art of Motorcycle Maintenance, an exasperated Robert Pirsig famously asked, “What the hell is quality?” and expended a great deal of energy trying to work out an answer.  As I find myself considering the meaning of quality evaluation, the theme of the upcoming 2010 Conference of the American Evaluation Association, it feels like déjà vu all over again.  There are countless definitions of quality floating about (for a short list, see Garvin, 1984), but arguably few if any examples of the concept being applied to modern evaluation practice.  So what the hell is quality evaluation?  And will I need to work out an answer for myself?

Luckily there is some agreement out there.  Quality is usually thought of as an amalgam of multiple criteria, and it is judged by comparing the characteristics of an actual product or service against those criteria.

Isn’t this exactly what evaluators are trained to do?

Yes.  And judging quality in this way poses some practical problems that will be familiar to evaluators:

Who devises the criteria?
Evaluations serve many, often competing, interests.  Funders, clients, direct stakeholders, and professional peers make the short list.  All have something to say about what makes an evaluation high quality, but they do not have equal clout.  Some are influential because they have market power (they pay for evaluation services).  Others are influential because they have standing in the profession (they are considered experts or thought leaders).  Some are influential because they have both (funders), and others lack influence because they have neither (direct stakeholders).  More on this in a future blog.

Who makes the comparison?
Quality criteria may be devised by one group and then used by another to judge quality.  For example, funders may establish criteria and then hire independent evaluators (professional peers) who use the criteria to judge the quality of evaluations.  This is what happens when evaluation proposals are reviewed and ongoing evaluations are monitored.  More on this in a future blog.

How is the comparison made?
Comparisons can be made in any number of ways, but (imperfectly) we can lump them into two approaches—the explicit, cerebral, and systematic approach, and the implicit, intuitive, and inconsistent approach.  Individuals tend to judge quality in the latter fashion.  It is not a bad way to go about things, especially when considering everyday purchases (a pair of sneakers or a tuna fish sandwich).  When considering evaluation, however, it would seem best to judge quality in the former fashion.  But is it?  More on this in a future blog.
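To make the contrast concrete, here is a toy sketch of what the explicit approach boils down to: a short list of criteria, a weight and a score for each, and a simple tally.  The criteria, weights, and scores below are entirely made up for illustration (sketched in Python); they are not a rubric I am endorsing.

    # A hypothetical, explicit quality judgment: weighted criteria and a tally.
    # The criteria, weights, and scores are invented for illustration only.
    criteria = {
        "methodological rigor":    {"weight": 0.4, "score": 4},  # scored on a 1-5 scale
        "stakeholder involvement": {"weight": 0.3, "score": 3},
        "clarity of reporting":    {"weight": 0.3, "score": 5},
    }

    overall = sum(c["weight"] * c["score"] for c in criteria.values())
    print(f"Weighted quality score: {overall:.1f} out of 5")  # prints 4.0 out of 5

The intuitive judge, of course, skips all of this and simply renders a verdict.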

So what the hell is quality?  This is where I propose an answer that I hope is simple yet covers most of the relevant issues facing our profession.  Quality evaluation comprises three distinct things—each important on its own, but only in combination do they reflect quality.  They are:

Standards
When the criteria used to judge quality come from those with professional standing, the criteria describe an evaluation that meets professional standards.  Standards focus on technical and nontechnical attributes of an evaluation that are under the direct control of the evaluator.  Perhaps the two best examples of this are the Program Evaluation Standards and the Program Evaluations Metaevaluation Checklist.

Satisfaction
When the criteria used to judge quality come from those with market power, the criteria describe an evaluation that would satisfy paying customers.  Satisfaction focuses on whether expectations—reasonable or unreasonable, documented in a contract or not—are met by the evaluator.  Collectively, these expectations define the demand for evaluation in the marketplace.

Empowerment
When the criteria used to judge quality come from direct stakeholders with neither professional standing nor market power, the criteria change the power dynamic of the evaluation.  Empowerment evaluation and participatory evaluation are perhaps the two best examples of evaluation approaches that look to those served by programs to help define a quality evaluation.

Standards, satisfaction, and empowerment are related, but they are not interchangeable.  One can be dissatisfied with an evaluation that exceeds professional standards, or empowered by an evaluation with which funders were not satisfied.  I will argue that the quality of an evaluation should be measured against all three sets of criteria.  Is that feasible?  Desirable?  That is what I will hash out over the next few weeks. 

5 Comments

Filed under Evaluation, Evaluation Quality

The Laws of Evaluation Quality

It has been a while since I blogged, but I was inspired to give it another go by Evaluation 2010, the upcoming annual conference of the American Evaluation Association (November 10-13 in San Antonio, Texas).  The conference theme is Evaluation Quality, something I think about constantly.  There is a great deal packed into those two words, and my blog will be dedicated to unpacking them as we lead up to the November AEA conference.  To kick off that effort, I present a few lighthearted “Laws of Evaluation Quality” that I have stumbled upon over the years.  They poke fun at many of the serious issues I will consider in the upcoming months, the very issues that make ensuring the quality of an evaluation a challenge.  Enjoy.

Stakeholder’s First Law of Evaluation Quality
The quality of an evaluation is directly proportional to the number of positive findings it contains.

Corollary to Stakeholder’s First Law
A program evaluation is an evaluation that supports my program.

The Converse of Stakeholder’s First Law
The number of flaws in an evaluation’s research design increases without limit with the number of null or negative findings it contains.

Corollary to the Converse of Stakeholder’s First Law
Everyone is a methodologist when their dreams are crushed.

Academic’s First Law of Evaluation Quality
Evaluations are done well if and only if they cite my work.

Corollary to Academic’s First Law
My evaluations are always done well.

Academic’s Lemma
The ideal ratio of publications to evaluations is undefined.

Student’s First Law of Evaluation Quality
The quality of any given evaluation is wholly dependent on who is teaching the class.

Student’s Razor
Evaluation theories should not be multiplied beyond necessity.

Student’s Reality
Evaluation theories will be multiplied far beyond necessity in every written paper, graduate seminar, evaluation practicum, and evening of drinking.

Evaluator’s Conjecture
The quality of any evaluation is perfectly predicted by the brevity of the client’s initial description of the program.

Evaluator’s Paradox
The longer it takes a grant writer to contact an evaluator, the more closely the proposed evaluation approaches a work of fiction and the more likely it will be funded.

Evaluator’s Order Statistic
Evaluation is always the last item on the meeting agenda unless you are being fired.

Funder’s Principle of Same Boated-ness
During the proposal process, the quality of a program is suspect.  Upon acceptance, it is evidence of the funder’s social impact.

Corollary to Funder’s Principle
Good evaluations don’t rock the boat.

Funder’s Paradox
When funders request an evaluation that is rigorous, sophisticated, or scientific, they are less likely to read it yet more likely to believe it—regardless of its actual quality.

7 Comments

Filed under Evaluation, Evaluation Quality

Fruitility (or Why Evaluations Showing “No Effects” Are a Good Thing)

The mythical character Sisyphus was punished by the gods for his cleverness.  As mythological crimes go, cleverness hardly rates, and his punishment was lenient — all he had to do was place a large boulder on top of a hill and then he could be on his way.

The first time Sisyphus rolled the boulder to the hilltop I imagine he was intrigued as he watched it roll back down on its own.  Clever Sisyphus confidently tried again, but the gods, intent on condemning him to an eternity of mindless labor, had used their magic to ensure that the rock always rolled back down.

Could there be a better way to punish the clever?

Perhaps not. Nonetheless, my money is on Sisyphus because sometimes the only way to get it right is to get it wrong. A lot.

This is the principle of fruitful futility, or, as I call it, fruitility. Continue reading

Leave a comment

Filed under Commentary, Evaluation, Program Evaluation, Research

It’s a Gift to Be Simple

Theory-based evaluation acknowledges that, intentionally or not, all programs depend on the beliefs influential stakeholders have about the causes and consequences of effective social action. These beliefs are what we call theories, and they guide us when we design, implement, and evaluate programs.

Theories live (imperfectly) in our minds. When we want to clarify them for ourselves or communicate them to others, we represent them as some combination of words and pictures. A popular representation is the ubiquitous logic model, which typically takes the form of box-and-arrow diagrams or relational matrices.
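If it helps to make that concrete, a bare-bones logic model can even be jotted down as a simple data structure rather than a diagram.  The program and its elements below are invented purely for illustration (a hypothetical tutoring program, sketched in Python):

    # A minimal, hypothetical logic model: four stages, each feeding the next.
    logic_model = {
        "inputs":     ["funding", "tutors", "classroom space"],
        "activities": ["after-school tutoring sessions"],
        "outputs":    ["students tutored", "tutoring hours delivered"],
        "outcomes":   ["improved reading scores", "higher graduation rates"],
    }

    # The arrows of a box-and-arrow diagram are just the ordering of the stages.
    stages = list(logic_model)
    for earlier, later in zip(stages, stages[1:]):
        print(f"{earlier} -> {later}")

Even this crude sketch captures the essential claim of any logic model: that the stages are linked, in order, by cause and effect.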

The common wisdom is that developing a logic model helps program staff and evaluators develop a better understanding of a program, which in turn leads to more effective action.

Not to put too fine a point on it, this last statement is a representation of a theory of logic models. I represented the theory with words, which have their limits, yet another form of representation might reveal, hide, or distort different aspects of the theory. In this case, my theory is simple and my representation is simple, so you quickly get the gist of my meaning. Simplicity has its virtues.

It also has its perils. A chief criticism of logic models is that they fail to promote effective action because they are far too simple to represent the complexity inherent in a program, its participants, or its social value. This criticism has become more vigorous over time and deserves attention. In considering it, however, I find myself drawn to the other side of the argument, not because I am especially wedded to logic models, but rather to defend the virtues of simplicity. Continue reading

Leave a comment

Filed under Commentary, Evaluation, Program Evaluation

Data-Free Evaluation

George Bernard Shaw quipped, “If all economists were laid end to end, they would not reach a conclusion.”  However, economists should not be singled out on this account — there is an equal share of controversy awaiting anyone who uses theories to solve social problems.  While there is a great deal of theory-based research in the social sciences, it tends to be more theory than research, and with the universe of ideas dwarfing the available body of empirical evidence, there is little if any agreement on how to achieve practical results.  This was summed up well by another master of the quip, Mark Twain, who observed that the fascinating thing about science is how “one gets such wholesale returns of conjecture out of such a trifling investment of fact.”

Recently, economists have been in the hot seat because of the stimulus package.  However, it is the policymakers who depended on economic advice who are sweating because they were the ones who engaged in what I like to call data-free evaluation.  This is the awkward art of judging the merit of untried or untested programs. Whether it takes the form of a president staunching an unprecedented financial crisis, funding agencies reviewing proposals for new initiatives, or individuals deciding whether to avail themselves of unfamiliar services, data-free evaluation is more the rule than the exception in the world of policies and programs. Continue reading

3 Comments

Filed under Commentary, Design, Evaluation, Program Design, Program Evaluation, Research

The Most Difficult Part of Science

I recently participated in a panel discussion at the annual meeting of the California Postsecondary Education Commission (CPEC) for recipients of Improving Teacher Quality Grants.  We were discussing the practical challenges of conducting what has been dubbed scientifically-based research (SBR).  While there is some debate over what types of research should fall under this heading, SBR almost always includes randomized trials (experiments) and quasi-experiments (close approximations to experiments) that are used to establish whether a program made a difference. 

SBR is a hot topic because it has found favor with a number of influential funding organizations.  Perhaps the most famous example is the US Department of Education, which vigorously advocates SBR and at times has made it a requirement for funding.  The push for SBR is part of a larger, longer-term trend in which funders have been seeking greater certainty about the social utility of programs they fund.

However, SBR is not the only way to evaluate whether a program made a difference, and not all evaluations set out to do so (as is the case with needs assessment and formative evaluation).  At the same time, not all evaluators want to or can conduct randomized trials.  Consequently, the push for SBR has sparked considerable debate in the evaluation community. Continue reading

1 Comment

Filed under Commentary, Evaluation, Program Evaluation

Obama’s Inaugural Address Calls for More Evaluation

Today was historic and I was moved by its import.  As I was soaking in the moment, one part of President Obama’s inaugural address caught my attention.  There has been a great deal of discussion in the evaluation community about how an Obama administration will influence the field.  He advocates a strong role for government and nonprofit organizations that serve the social good, but the economy is weak and tax dollars short.  An oft-repeated question was whether he would push for more evaluation or less.  He seems to have provided an answer in his inaugural address:

“The question we ask today is not whether our government is too big or too small, but whether it works – whether it helps families find jobs at a decent wage, care they can afford, a retirement that is dignified. Where the answer is yes, we intend to move forward. Where the answer is no, programs will end. And those of us who manage the public’s dollars will be held to account – to spend wisely, reform bad habits, and do our business in the light of day – because only then can we restore the vital trust between a people and their government.”

We have yet to learn Obama’s full vision for evaluation, especially the form it will take and how it will be used to improve government.  But his statement seems to put him squarely in step with the bipartisan trend that emerged in the 1990s and has resulted in more, and more rigorous, evaluation.  President Clinton took perhaps the first great strides in this direction, mandating evaluations of social programs in an effort to promote accountability and transparency.  President Bush went further when many of the agencies under his charge developed a detailed (and controversial) working definition of evaluation as scientifically-based research.  What will be Obama’s next step?  Only time will tell.

2 Comments

Filed under Commentary, Evaluation, Program Evaluation

Theory Building and Theory-Based Evaluation

When we are convinced of something, we believe it. But when we believe something, we may not have been convinced. That is, we do not come by all our beliefs through conscious acts of deliberation. It’s a good thing, too, for if we examined the beliefs underlying our every action we wouldn’t get anything done.

When we design or evaluate programs, however, the beliefs underlying these actions do merit close examination. They are our rationale, our foothold in the invisible; they are what endow our struggle to change the world with possibility. Continue reading

2 Comments

Filed under Commentary, Design, Evaluation, Program Design, Program Evaluation, Research

Should We Fear Subjectivity?

Like many this summer, I found myself a bit perplexed by the way Olympic athletes in many sports received scores. It was not so much the scoring systems per se that had me flummoxed, although they were far from simple. Rather it was realizing that, while the systems for scoring gymnastics, ice skating, boxing, and sailing had been overhauled over the past few years in an effort to remedy troubling flaws, the complaint that these scores are subjective — and by extension unfair — lingered.

This dissatisfaction reflects an unwritten rule that applies to our efforts to evaluate the quality or merit of any human endeavor: if the evaluation is to be perceived as fair, it must demonstrate that it is not subjective. But is this a useful rule? Before we can wrestle with that question, we need to consider what we mean by subjective and why we feel compelled to avoid it. Continue reading

Leave a comment

Filed under Commentary, Evaluation, Program Evaluation

Randomized Trials: Old School, New Trend

To my mind, surfing hit its peak in the 1950s when relatively light longboards first became available.

Enthusiastic longboarders still ride the waves, of course, but their numbers have dwindled as shorter, more maneuverable boards became more fashionable. Happily, longboards are now making a comeback, mostly because they possess a property that shortboards do not: stability. With a stable board, novices can quickly experience the thrill of the sport and experts can show off skills like nose walks, drag turns, and tandem riding that are unthinkable using today’s light-as-air shortboards.

The new longboards are different — and, I think, better — because their designs take advantage of modern materials and are more affordable and easier to handle than their predecessors. It just goes to show that everything old becomes new again, and with renewed interest comes the opportunity for improvement.

The same can be said for randomized trials (RTs). They were introduced to the wider field of social sciences in the 1930s, about the time that surfing was being introduced outside of Hawaii. RTs became popular through the 1950s, at least in concept, since in practice they can be challenging and expensive to implement. During the 60s, 70s and 80s, RTs were supplanted by simpler and cheaper types of evaluation. But a small and dedicated cadre of evaluators stuck with RTs because of a property that no other form of evaluation has: strong internal validity. RTs make it possible to ascertain with a high degree of certainty — higher than any other type of evaluation — whether a program made a difference. Continue reading

2 Comments

Filed under Commentary, Evaluation, Program Evaluation, Research