Evaluator, Watch Your Language

As I was reading a number of evaluation reports recently, the oddity of evaluation jargon struck me.  It isn’t that we have unusual technical terms—all fields do—but that we use everyday words in unusual ways.  It is as if we speak in a code that only another evaluator can decipher.

I jotted down five words and phrases that we all use when we speak and write about evaluation.  On the surface, their meanings seem perfectly clear.  However, they can be used for good or ill.  How are you using them?

(1) Suggest

As in: The data suggest that the program was effective.

Pros: Suggest is often used to avoid words such as prove and demonstrate—a softening of “this is so” to “this seems likely.”  Appropriate qualification of evaluation results is desirable.

Cons: Suggest is sometimes used to inflate weak evidence.  Any evaluation—strong or weak—can be said to suggest something about the effectiveness of a program. Claiming that weak evidence suggests a conclusion overstates the case.

Of special note:  Data, evaluations, findings, and the like cannot suggest anything.  Authors suggest, and they are responsible for their claims.

(2) Mixed Methods

As in: Our client requested a mixed-methods evaluation.

Pros: Those who focus on mixed methods have developed thoughtful ways of integrating qualitative and quantitative methods.  Thoughtful is desirable.

Cons: All evaluations use some combination of qualitative and quantitative methods, so any evaluation can claim to use—thoughtfully or not—a mixed-methods approach.  A request for a mixed-methods evaluation can mean that clients are seeking an elusive middle ground—a place where qualitative methods tell the program’s story in a way that insiders find convincing and quantitative methods tell the program’s story in a way that outsiders find convincing.  The middle ground frequently does not exist.

(3) Know

As in: We know from the literature that teachers are the most important school-time factor influencing student achievement.

Pros: None.

Cons: The word know implies that claims to the contrary are unfounded.  This shuts down discussion on topics for which there is almost always some debate.  One could argue that the weight of evidence is overwhelming, the consensus in the field is X, or we hold this belief as a given.  Claiming that we know, with rare exception, overstates the case.

(4) Nonetheless [we can believe the results]

As in: The evaluation has flaws; nonetheless, it reaches important conclusions.

Pros: If the phrase is followed by a rationale (…because of the following reasons…), this turn of phrase might indicate something quite important.

Cons: All evaluations have flaws, and it is the duty of evaluators to bring them to the attention of readers.  If the reader is then asked to ignore the flaws, without being given a reason, it is at best confusing and at worst misleading.

(5) Validated Measure

As in: We used the XYZ assessment, a previously validated measure.

Pros: None.

Cons: Validity is not a characteristic of a measure. A measure is valid for a particular group of people for a particular purpose in a particular context at a specific point in time.  This means that evaluators must make the case that all of the measures that they used were appropriate in the context of the evaluation.

The Bottom Line

I am guilty of sometimes using bad language.  We all are.  But language matters, even in casual conversations among knowledgeable peers.  Bad language leads to bad thinking, as my mother always said.  So I will endeavor to watch my language and make her proud.  I hope you will too.


Filed under Evaluation, Evaluation Quality, Program Evaluation

Conference Blog: The Harvard Social Enterprise Conference (Day 2)

What follows is a second series of short posts written while I attended the Social Enterprise Conference (#SECON12).  The conference (February 25-26) was presented by the Harvard Business School and the Harvard Kennedy School.

I spent much of the day attending a session entitled ActionStorm: A Workshop on Designing Actionable Innovations.  Suzi Sosa, Executive Director of the Dell Social Innovation Challenge, did a great job of introducing a design thinking process for those developing new social enterprises.  I plan to blog more about design thinking in a future post.

The approach presented in the workshop combined basic design thinking activities (mind mapping, logic modeling, and empathy mapping) that I believe can be of value to program evaluators as well as program designers.

I wonder, however, how well these methods fit the world of grant-funded programs.  Increasingly, the guidelines for grant proposals put forth by funding agencies specify the core elements of a program’s design.  It is common for funders to specify the minimum number of contact hours, desired length of service, and required service delivery methods.  When this is the case, designers may have little latitude to innovate, closing off opportunities to improve quality and efficiency.

Evaluation moment #4: Suzi constantly challenged us to specify how, where, and why we would measure the impact of the social enterprises we were discussing.  It was nice to see someone advocating for evaluation “baked into” program designs.  The participants were receptive, but they seemed somewhat daunted by the challenge of measuring impact.

Next, I attended Taking Education Digital: The Impact of Sharing Knowledge.  Chris Dede (Harvard Graduate School of Education) moderated.  I have always found his writing insightful and thought-provoking, and he did not disappoint today.  He provided a clear, compelling call for using technology to transform education.

His line of reasoning, as I understand it, is this: the educational system, as currently structured, lacks the capacity to meet federal and state mandates to increase (1) the quality of education delivered to students and (2) desired high school and college graduation rates.  Technology can play a transformational role by increasing the quality and capacity of the educational system.

Steve Carson followed by describing his work with MIT OpenCourseWare, which illustrated very nicely the distinction between innovation and transformation.  MIT OpenCourseWare was, at first, a humble idea–use the web to make it easier for MIT students and faculty to share learning-related materials.  Useful, but not innovative (as the word is typically used).

It turned out that the OpenCourseWare materials were being used by a much larger, more diverse group of formal and informal learners for wonderful, unanticipated educational purposes.  So without intending to, MIT had created a technology with none of the trappings of innovation yet tremendous potential to be transformational.

The moral of the story: social impact can be achieved in unexpected ways, and in cultures that value innovation, the most unexpected way is to do something unexceptional exceptionally well.

Next, Chris Sprague (OpenStudy) discussed his social learning start-up.  OpenStudy connects students to each other–so far 150,000 from 170 countries–in ways that promote learning.  Think of it as a worldwide study hall.

Social anything is hot in the tech world, but this is more than Facebook dressed in a scholar’s robe.  The intent is to create meaningful interactions around learning, tap expertise, and spark discussions that build understanding.  Think about how much you can learn about a subject simply by having a cup of coffee with an expert.  Imagine how much more you could learn if you were connected to more experts and did not need to sit next to them in a cafe to communicate.

The Pitch for Change took place in the afternoon.  It was the culmination of a process in which young social entrepreneurs give “elevator pitches” describing new ventures.  Those with the best pitches are selected to move on to the next round, where they make another pitch.

To my eyes, the final round combined the most harrowing elements of job interviews and Roman gladiatorial games–one person enters the arena, fights for survival for three minutes, and then looks to the crowd for thumbs up or down (see the picture at the top of this entry).  Of course, they don’t use thumbs–that would be too BC (before connectivity).  Instead, they use smartphones to vote via the web.

At the end, the winners were given big checks (literally, the checks were big; the dollar amounts, not so much).

But winners receive more than a little seed capital.  The top two winners are fast-tracked to the semifinal round of the 2013 Echoing Green Fellowship, the top four winners are fast-tracked to the semifinal round of the 2012 Dell Social Challenge, and the project that best makes use of technology to solve a social or environmental problem wins the Dell Technology Award.  Not bad for a few minutes in the arena.

Afterward, Dr. Judith Rodin, President of the Rockefeller Foundation, made the afternoon keynote speech, which focused on innovation.  She is a very good speaker and the audience was eager to hear about the virtues of new ideas.  It went over well.

Evaluation moment #5: Dr. Rodin made the case for measuring social impact.  She described it as essential to traditional philanthropy and more recent efforts around social impact investing.  She noted that Rockefeller is developing its capacity in this area; however, evaluation remains a tough nut to crack.

The last session of the day was fantastic–and not just because an evaluator was on the panel.  It was entitled If at First You Don’t Succeed: The Importance of Prototyping and Iteration in Poverty Alleviation.  Prototyping is not just a subject of interest for me; it is a way of life.

Mike North (ReAllocate) discussed how he leverages volunteers–individuals and corporations–to prototype useful, innovative products.  In particular, he described his ongoing efforts to prototype an affordable corrective brace for children in developing countries who are born with clubfoot.  You can learn more about it in this video.

Timothy Prestero (Design that Matters) walked us through the process he used to prototype the Firefly.  About 60% of newborns in developing countries suffer from jaundice, and about 10% of these go on to suffer brain damage or another disability.  The treatment is simple–exposure to blue light.  Firefly is the light source.

What is so hard about designing a lamp that shines a blue light?  Human behavior.

For example, hospital workers often put more than one baby in the same phototherapy device, which promotes infectious disease.  Consequently, Firefly needed to be designed in such a way that only one baby could be treated at a time.  It also needed to be inexpensive in order to address the root cause of the problem behavior–too few devices in hospitals.  Understanding these behaviors, and designing with them in mind, requires lengthy prototyping.

Molly Kinder described her work at Development Innovation Ventures (DIV), a part of USAID.  DIV provides financial and other support to innovative projects selected through a competitive process.  In many ways, it looks more like a new-style venture fund than part of a government agency.  And DIV rigorously evaluates the impact of the projects it supports.

Evaluation moment #6: Wow, here is a new-style funder routinely doing high-quality evaluations–including but not limited to randomized control trials–in order to scale projects strategically.

Shawn Powers, from the Jameel Poverty Action Lab at MIT (J-PAL), talked about J-PAL’s efforts to conduct randomized trials in developing countries.  Not surprisingly, I am a big fan of J-PAL, which is dedicated to finding effective ways of improving the lives of the poor and bringing them to scale.

Looking back on the day:  The tight connection between design and evaluation was a prominent theme.  While exploring the theme, the discussion often turned to how evaluation can help social enterprises scale up.  It seems to me that we first need to scale up rigorous evaluation of social enterprises.  The J-PAL model is a good one, but it isn’t possible for academic institutions to scale up fast enough, or far enough, to meet the need.  So what do we do?


Filed under Conference Blog, Design, Evaluation, Program Design, Program Evaluation

Conference Blog: The Harvard Social Enterprise Conference (Day 1)

What follows is a series of short posts written while I attended the Social Enterprise Conference (#SECON12).  The conference (February 25-26) was presented by the Harvard Business School and the Harvard Kennedy School.

What is a social enterprise?

The concept of a social enterprise is messy.  By various definitions, it can include:

  • A for-profit company that seeks to benefit society;
  • a nonprofit organization that uses business-like methods;
  • a foundation that employs market investing principles; and
  • a government agency that leverages the work of private-sector partners.

The concept of a social enterprise is disruptive. It blurs the lines separating organizations that do good for stakeholders, do well for shareholders, and do right by constituents.

The concept of a social enterprise is inspiring.  It can foster flexible, creative solutions to our most pressing problems.

The concept of a social enterprise is dangerous.  It can attach the patina of altruism to organizations motivated solely by profits.

The concept of a social enterprise is catching fire.  The evaluation community needs to learn how it fits into this increasingly common type of organization.

The conference started with a young entrepreneurs keynote panel that was moderated by Daniel Epstein (Unreasonable Institute).

Kavita Shukla of Fenugreen discussed the product she invented.  Amazing.  It is a piece of paper permeated with organic, biodegradable herbs.  So what?  It keeps produce fresh 2-4 times longer.  The potential social and financial impact of the product—especially in parts of the world where food is in short supply and refrigeration scarce—is tremendous. Watch a TED talk about it here.

Next, Taylor Conroy (Destroy Normal Consulting) discussed his fundraising platform that allows people to raise $10,000 in three hours for projects like building schools in developing countries.  Sound crazy?  Check it out here and decide for yourself.

Finally, Lauren Bush (FEED Projects) discussed how she has used the sale of FEED bags and other fashion items to provide over 60 million meals for children in need around the world.

Evaluation moment #1: The panelists were asked how they measured the social impact of their enterprises.  Disappointingly, they do not seem to be doing so in a systematic way beyond counting units of service provided or number of products sold—a focus on outputs, not outcomes.

The first session I attended had the provocative title Social Enterprise: Myth or Reality?: Measuring Social Impact and Attracting Capital. Jim Bildner did an outstanding job as moderator.  Panelists included Kimberlee Cornett (Kresge Foundation), Clara Miller (F. B. Heron Foundation), Margaret McKenna (Harvard Kennedy School), and David Wood (Hauser Center for Nonprofit Organizations).

The discussion addressed three questions.

Q: What is social enterprise?

A: It apparently can be anything, but it should be something that is more precisely defined.

Q: How are foundations and financial investors getting involved?

A: By making loans and taking equity stakes in social enterprises.  That promotes social impact through the enterprise and generates more cash to invest in other social enterprises.

Evaluation moment #2: Q: How can the social impact of enterprises be measured?

A: It isn’t.  One panelist suggested that measuring social impact is such a tough nut to crack that, if someone could figure out how, it would make for a fantastic new social enterprise.  I was both shocked and flattered, given I have been doing just that for decades.  Why were there no evaluators on this panel?


Ami Dalal and Jo-Ann Tan of Acumen Fund conducted a “bootcamp” on the approach their firm uses to make social investments.  They focused on methods of due diligence and valuation (that is, how they attach a dollar value to a social enterprise).

I found their approach to measuring the economic impact of their investments very interesting—perhaps evaluators would benefit from learning more about it.  There are details at their website.

Evaluation moment #3

When the topic of measuring the social impact of their investments came up, the presenters provided the most direct answer I have heard so far.  They always measure outputs—those are easy to measure and can indicate if something is going wrong.  In some cases they also measure outcomes (impacts) using randomized control trials.  Given the cost, they do this infrequently.

Looking back on the day

A social enterprise that measures social impact but does not measure financial success would be considered ridiculous.  Yet a social enterprise that measures financial success but does not measure social impact is not.  Why?


Filed under Conference Blog, Evaluation, Program Evaluation

Toward a Taxonomy of Wicked Problems

Program designers and evaluators have become keenly interested in wicked problems.  More precisely, we are witnessing a second wave of interest—one that holds new promise for the design of social, educational, environmental, and cultural programs.

The concept of wicked problems was first introduced in the late 1960s by Horst Rittel, then at UC Berkeley.  It became a popular subject for authors in many disciplines, and writing on the subject grew through the 1970s and into the early 1980s (the first wave).  At that point, writing on the subject slowed until the late 1990s when the popularity of the subject again grew (the second wave).

Here are the results of a Google ngram analysis that illustrates the two waves of interest (click the image to enlarge).

Rittel contrasted wicked problems with tame problems.  Various authors, including Rittel, have described the tame-wicked dichotomy in different ways.  Most are based on the 10 characteristics of wicked problems that Rittel introduced in the early 1970s.  Briefly…

Tame problems can be solved in isolation by an expert—the problems are relatively easy to define, the range of possible solutions can be fully enumerated in advance, stakeholders hold shared values related to the problems and possible solutions, and techniques exist to solve the problems as well as measure the success of implemented solutions.

Wicked problems are better addressed collectively by diverse groups—the problems are difficult to define, few if any possible solutions are known in advance, stakeholders disagree about underlying values, and we can neither solve the problems (in the sense that they can be eliminated) nor measure the success of implemented solutions.

In much of the writing that emerged during the first wave of interest, the tame-wicked dichotomy was the central theme.  It was argued that most problems of interest to policymakers are wicked, which limited the utility of the rational, quantitative, stepwise thinking that dominated policy planning, operations research, and management science at the time.  A new sort of thinking was needed.

In the writing that has emerged in the second wave, that new sort of thinking has been given many names—systems thinking, design thinking, complexity thinking, and developmental thinking, to name a few.  Each, supposedly, can tame what would otherwise be wicked.

Perhaps.

The arguments for “better ways of thinking” are weakened by the assumption that wicked and tame represent a dichotomy.  If most social problems met all 10 of Rittel’s criteria, we would be doomed.  We aren’t.

Social problems are more or less wicked, each in its own way.  Understanding how a problem is wicked, I believe, is what will enable us to think more effectively about social problems and to tame them more completely.

Consider two superficially similar examples that are wicked in different ways.

Contagious disease: We understand the biological mechanisms that would allow us to put an end to many contagious diseases.  In this sense, these diseases are tame problems.  However, we have not been able to eradicate all contagious diseases that we understand well.  The reason, in part, is that many people hold values that conflict with solutions that are, on a biological level, known to be effective.  For example, popular fear of vaccines may undermine the effectiveness of mass vaccination, or the behavioral changes needed to reduce infection rates may clash with local cultures.  In cases such as this, contagious diseases pose wicked problems because of conflicting values.  The design of programs to eradicate these diseases would need to take this source of wickedness into account, perhaps by including strong stakeholder engagement efforts or public education campaigns.

Cancer: We do not fully understand the biological mechanisms that would allow us to prevent and cure many forms of cancer.  At the same time, the behaviors that might reduce the risk of these cancers (such as healthy diet, regular exercise, not smoking, and avoiding exposure to certain chemicals) conflict with values that many people hold (such as the importance of personal freedom, desire for comfort and convenience, and the need to earn a living in certain industrial settings). In these cases, cancer poses wicked problems for two reasons—our lack of understanding and conflicting values.  This may or may not make it “more” wicked than eradicating well-understood contagious diseases; that is difficult to assess.  But it certainly makes it wicked in a different way, and the design of programs to end cancer would need to take that difference into account and address both sources of wickedness.

The two examples above are wicked problems, but they are wicked for different reasons.  Those reasons have important implications for program designers.  My interest over the next few months is to flesh out a more comprehensive taxonomy of wickedness and to unpack its design implications.  Stay tuned.


Filed under Design, Program Design

EVALentine Day (February 15)

Happy EVALentine Day!  On 2/15, share your love of evaluation with others.  Teach someone about evaluation.  Learn something about evaluation.  Let the world know that evaluation has the power to change the world.


Filed under Evaluation

Should the Pie Chart Be Retired?

The ability to create and interpret visual representations has been an important part of the human experience since we began drawing on cave walls at Chauvet.

Today, that ability—what I call visualcy—has even greater importance.  We use visuals to discover how the world works, communicate our discoveries, plan efforts to improve the world, and document the success of our efforts.

In short, visualcy affects every aspect of program design and evaluation.

The evolution of our common visual language, sadly, has been shaped by the default settings of popular software, the norms of the conference room, and the desire to attract attention.  It is not a language constructed to advance our greater purposes.  In fact, much of our common language works against our greater purposes.

An example of a counterproductive element of our visual language is the pie chart.

Consider this curious example from the New York Times Magazine (1/15/2012).

This pie chart has a humble purpose—summarize reader responses to an article on obesity in the US.  It failed that purpose stunningly.  Here are some reasons why.

(1) Three-dimensionality reduces accuracy: Not only are 3-D graphs harder to read accurately, but popular software can construct them inaccurately.  The problem—for eye and machine—arises from the translation of values in 1-D or 2-D space into values in 3-D space.  This is a substantial problem with pie charts (imagine computing the area of a pie slice while taking its 3-D perspective into account) as well as other types of graph.  Read Stephanie Evergreen’s blog post on the perils of 3-D to see a good example.

(2) Pie charts impede comparisons: People have trouble comparing pie slices by eye.  Think you can?  Here is a simple pie chart I constructed from the data in the NYT Magazine graph.  Which slice is larger—the orange or the blue?
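Part of the difficulty is geometric: a small difference in percentage becomes a small difference in arc angle, which the eye judges poorly, especially when the slices are not adjacent.  A minimal sketch, using hypothetical values (not the actual NYT data):

```python
# Why nearly equal pie slices are hard to compare by eye:
# small percentage gaps translate into small differences in arc angle.
# These values are hypothetical, chosen only for illustration.
shares = {"orange": 26.0, "blue": 28.0, "gray": 46.0}

total = sum(shares.values())
angles = {name: 360.0 * value / total for name, value in shares.items()}

gap = angles["blue"] - angles["orange"]
print(f"orange slice: {angles['orange']:.1f} degrees")
print(f"blue slice:   {angles['blue']:.1f} degrees")
print(f"gap the eye must detect: {gap:.1f} degrees")
```

A two-point gap amounts to only about seven degrees of arc out of 360, which is very hard to judge by eye.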

This is much clearer.

Note that the Y axis ranges from 0% to 100%.  That is what makes the bar chart a substitute for the pie chart.  Sometimes the Y axis is truncated innocently to save column inches or intentionally to create a false impression, like this:

Differences are exaggerated and large values seem to be closer to 100% than they really are.  Don’t do this.
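The distortion is easy to quantify.  A minimal sketch, assuming hypothetical bar values of 60% and 70% and a truncated axis that starts at 50%:

```python
# How truncating the Y axis exaggerates the difference between two bars.
# These values are hypothetical, chosen only for illustration.
a, b = 60.0, 70.0   # the percentages being plotted
baseline = 50.0     # where the truncated axis starts

# On a full 0-100 axis, the drawn bar heights are the values themselves.
full_axis_ratio = a / b                            # about 0.86: bars look similar

# On the truncated axis, heights are measured from the 50% baseline.
truncated_ratio = (a - baseline) / (b - baseline)  # 0.50: one bar looks half the other

print(f"full axis:      shorter bar is {full_axis_ratio:.0%} the height of the taller")
print(f"truncated axis: shorter bar is {truncated_ratio:.0%} the height of the taller")
```

The same ten-point difference reads as a modest gap on the full axis but as a two-to-one gap on the truncated one, and the taller bar appears to nearly reach the top of the chart.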

(3) The visual theme is distracting: I suspect the NYT Magazine graph is intended to look like some sort of food.  Pieces of a pie? Cake? Cheese?  It doesn’t work.  This does.

Unless you are evaluating the Pillsbury Bake-Off, however, it is probably not an appropriate theme.

(4) Visual differentiators add noise: Graphs must often differentiate elements. A classic example is differentiating treatment and control group averages using bars of different colors.  In the NYT Magazine pie chart, the poor choice of busy patterns makes it very difficult to differentiate one piece of the pie from another.  The visual chaos is reminiscent of the results of a “poll” of Iraqi voters presented by the Daily Show in which a very large number of parties purportedly held almost equal levels of support.

(5) Data labels add more noise: Data labels can increase clarity.  In this case, however, the swarm of curved arrows connecting labels to pieces of the pie adds to the visual chaos.  Even this tangle of labels is better because readers instantly understand that Iraq received a disproportionate amount of the aid provided to many countries.

Do you think I made up these reasons?   Then read this report by RAND that investigated graph comprehension using experimental methods.  Here is a snippet from the abstract:

We investigated whether the type of data display (bar chart, pie chart, or table) or adding a gratuitous third dimension (shading to give the illusion of depth) affects the accuracy of answers of questions about the data. We conducted a randomized experiment with 897 members of the American Life Panel, a nationally representative US web survey panel. We found that displaying data in a table lead [sic] to more accurate answers than the choice of bar charts or pie charts. Adding a gratuitous third dimension had no effect on the accuracy of the answers for the bar chart and a small but significant negative effect for the pie chart.

There you have it—empirical evidence that it is time to retire the pie chart.

Alas, I doubt that the NYT Magazine, infographic designers, data viz junkies, or anyone with a reporting deadline will do that.  As every evaluator knows, it is far easier to present empirical evidence than respond to it.


Filed under Design, Evaluation, Visualcy

A National Holiday and US Postage Stamp for Evaluation

Evaluation is an invisible giant, a profession that impacts society on a grand scale yet remains unseen.  I want to change that.  But what can one person do to raise awareness about evaluation?

Go big.  And you can help.

A National Evaluation Holiday:  With the power vested in me by the greeting card industry, I am declaring…

February 15 is EVALentine’s Day!

This is a day when people around the world share their love of evaluation with each other.  Send a card, an email, or a copy of Guiding Principles for Evaluators to those near and dear, far and wide, internal and external.  Get the word out.  If the idea catches on, imagine how much exposure evaluation would receive.

A US Postage Stamp:  With the power vested in me by stamps.com, I have issued a US postage stamp for EVALentine’s Day.  Other holidays get stamps, why not ours?  The stamp I designed is based on the famous 1973 Valentine’s Day love stamp by Robert Indiana.  Now you can show your love of evaluation on the outside of an EVALentine card as well as the inside.

Here is the best part.

To kick off EVALentine’s Day, I will send an EVALentine’s card and a ready-to-use EVAL stamp to anyone, anywhere in the world.  For free.  Really.

Here is what you need to do.

(1) Visit the Gargani + Company website in the month of February.

(2) Click the Contact link in the upper right corner of the homepage.

(3) Send an email with EVALENTINE in the subject line and a SNAIL MAIL ADDRESS in the body.

(4) NOTE:  This offer is only valid for emails received during the month of February, 2012.

Don’t be left out on EVALentine’s Day.  Drop me an email and get the word out!


Filed under Evaluation, Program Evaluation

The Future of Evaluation: 10 Predictions

Before January comes to a close, I thought I would make a few predictions.  Ten to be exact.  That’s what blogs do in the new year, after all.

Rather than make predictions about what will happen this year—in which case I would surely be caught out—I make predictions about what will happen over the next ten years.  It’s safer that way, and more fun as I can set my imagination free.

My predictions are not based on my ideal future.  I believe that some of my predictions, if they came to pass, would present serious challenges to the field (and to me).  Rather, I take trends that I have noticed and push them out to their logical—perhaps extreme—conclusions.

In the next ten years…

(1) Most evaluations will be internal.

The growth of internal evaluation, especially in corporations adopting environmental and social missions, will continue.  Eventually, internal evaluation will overshadow external evaluation.  The job responsibilities of internal evaluators will expand and routinely include organizational development, strategic planning, and program design.  Advances in online data collection and real-time reporting will increase the transparency of internal evaluation, reducing the utility of external consultants.

(2) Evaluation reports will become obsolete.

After-the-fact reports will disappear entirely.  Results will be generated and shared automatically—in real time—with links to the raw data and documentation explaining methods, samples, and other technical matters.  A new class of predictive reports, preports, will emerge.  Preports will suggest specific adjustments to program operations that anticipate demographic shifts, economic shocks, and social trends.

(3) Evaluations will abandon data collection in favor of data mining.

Tremendous amounts of data are being collected in our day-to-day lives and stored digitally.  It will become routine for evaluators to access and integrate these data.  Standards will be established specifying the type, format, security, and quality of “core data” that are routinely collected from existing sources.  As in medicine, core data will represent most of the outcome and process measures that are used in evaluations.

(4) A national registry of evaluations will be created.

Evaluators will begin to record their studies in a central, open-access registry as a requirement of funding.  The registry will document research questions, methods, contextual factors, and intended purposes prior to the start of an evaluation.  Results will be entered or linked at the end of the evaluation.  The stated purpose of the database will be to improve evaluation synthesis, meta-analysis, meta-evaluation, policy planning, and local program design.  It will be the subject of prolonged debate.

(5) Evaluations will be conducted in more open ways.

Evaluations will no longer be conducted in silos.  Evaluations will be public activities that are discussed and debated before, during, and after they are conducted.  Social media, wikis, and websites will be re-imagined as virtual evaluation research centers in which like-minded stakeholders collaborate informally across organizations, geographies, and socioeconomic strata.

(6) The RFP will RIP.

The purpose of an RFP is to help someone choose the best service at the lowest price.  RFPs will no longer serve this purpose well because most evaluations will be internal (see 1 above), information about how evaluators conduct their work will be widely available (see 5 above), and relevant data will be immediately accessible (see 3 above).  Internal evaluators will simply drop their data—quantitative and qualitative—into competing analysis and reporting apps, and then choose the ones that best meet their needs.

(7) Evaluation theories (plural) will disappear.

Over the past 20 years, there has been a proliferation of theories intended to guide evaluation practice.  Over the next ten years, there will be a convergence of theories until one comprehensive, contingent, context-sensitive theory emerges.  All evaluators—quantitative and qualitative; process-oriented and outcome-oriented; empowerment and traditional—will be able to use the theory in ways that guide and improve their practice.

(8) The demand for evaluators will continue to grow.

The demand for evaluators has been growing steadily over the past 20 to 30 years.  Over the next ten years, the demand will continue to rise, driven by the growth of internal evaluation (see 1 above) and the ready availability of data (see 3 above).

(9) The number of training programs in evaluation will increase.

There is a shortage of evaluation training programs in colleges and universities.  The shortage is driven largely by how colleges and universities are organized around disciplines.  Evaluation is typically found as a specialty within many disciplines in the same institution.  That disciplinary structure will soften, and the number of evaluation-specific centers and training programs in academia will grow.

(10) The term evaluation will go out of favor.

The term evaluation sets the process of understanding a program apart from the process of managing a program.  Good evaluators have always worked to improve understanding and management.  When they do, they have sometimes been criticized for doing more than determining the merit of a program.  To more accurately describe what good evaluators do, evaluation will become known by a new name, such as social impact management.

…all we have to do now is wait ten years and see if I am right.


Tragic Graphic: The Wall Street Journal Lies with Statistics?

Believe it or not, the Wall Street Journal provides another example of an inaccurate circular graph.  This time the error so closely parallels an example from Darrell Huff’s classic How to Lie with Statistics that I find myself wondering—intentional deception or innocent blunder?

The image above comes from Huff’s book.  The moneybag on the left represents the average weekly salary of carpenters in the fictional country of Rotundia.  The bag on the right, the average weekly salary of carpenters in the US.

Based on the graph, how much more do carpenters in the US earn?  Twice?  Three times?  Four times?  More?

The correct answer is that they earn twice as much, but the graph gives the impression that the difference is greater than that.  The heights of the bags are proportionally correct but their areas are not.  Because we tend to focus on the areas of shapes, graphics like this can easily mislead readers.

Misleading the reader, of course, was Huff’s intention.  As he put it:

…I want you to infer something, to come away with an exaggerated impression, but I don’t want to be caught at my tricks.

What were the intentions of the Wall Street Journal this Saturday (1/21/2012) when it previewed Charles Murray’s new book Coming Apart?

In the published preview, Murray made a highly qualified claim—the median family income across 14 of the most elite places to live rose from $84,000 in 1960 to $163,000 in 2000, after adjusting incomes to reflect today’s purchasing power.

Those cumbersome qualifications take the oomph right out of the claim.  It was too long to be a provocative sound bite, so the Journal refashioned it into a provocative sight bite.  Wow, those incomes really grew!

But not as much as the graph suggests.  The text states that the median income just about doubled.  The picture indicates that it quadrupled.  It’s Huff’s moneybag trick—even down to the relative proportion of the incomes!

Here is a comparison of the inaccurate graph with an accurate version I constructed.  The accurate graph is far less provocative.

As a rule, the areas of circles are difficult for people to compare by eye.  In fact, using the area of any two-dimensional shape to represent one-dimensional data is probably a bad idea.  Not only do interpretations vary depending on the shape that is used, but they vary depending on the relative placement of the shapes.
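The arithmetic behind the distortion is easy to check.  Here is a minimal Python sketch using the article’s dollar figures (the variable names are mine); it shows why scaling a circle’s radius by the value ratio quadruples the apparent difference, and what the accurate radius should be:

```python
import math

# Median family incomes from the article (adjusted for purchasing power).
income_1960 = 84_000
income_2000 = 163_000

ratio = income_2000 / income_1960  # about 1.94: incomes roughly doubled

# Misleading approach: scale the RADIUS by the value ratio.
# Area grows with the square of the radius, so the bigger circle
# covers ratio**2 (about 3.8x) as much ink -- the moneybag trick.
naive_area_ratio = ratio ** 2

# Accurate approach: scale the AREA by the value ratio, which means
# scaling the radius by the square root of the ratio (about 1.39x).
accurate_radius_ratio = math.sqrt(ratio)

print(f"value ratio:           {ratio:.2f}")
print(f"misleading area ratio: {naive_area_ratio:.2f}")
print(f"accurate radius ratio: {accurate_radius_ratio:.2f}")
```

A reader’s eye compares areas, so the honest design choice is to encode the data in area (radius grows with the square root of the value)—or better yet, to use lengths, such as bars, which people compare far more accurately.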

To illustrate these points, here are six alternative representations of Murray’s data.  Which, if any, are lies?


The African Evaluation Association Conference Comes to a Close (#4)

From Tarek Azzam in Accra, Ghana: I have had the opportunity to attend many conferences in the US, Canada, Europe, and Australia.  All were informative and invigorating in their own way, but the AfrEA conference was different.  The issues facing the African continent are immense.  Yet I was continually uplifted by the determination, skill, and caring of the people working to make a difference.  It reminded me that evaluation can be more than an academic exercise or bureaucratic requirement.  Evaluation can be a fundamental tool for development that carries with it our future aspirations for democracy, equity, and human rights.

This is exemplified by the evaluation efforts of Slum Dwellers International (SDI), an organization in which evaluations are carried out by and for the people living in 35 different slums across the world.  SDI mobilizes its members through the practice of evaluation—they gather data through interviews, surveys, and other methods, and use that information to negotiate directly with governments to improve the conditions of their communities.  SDI has over 4 million members who have created a culture in which evaluation knowledge is power.

I was intrigued by the World Bank’s evaluation capacity building efforts.  Along with other partnering organizations, it is working on a new initiative to establish evaluation training centers across the African continent and other developing regions.  These will be called CLEAR centers (Regional Centers for Learning on Evaluation and Results), and the partners eventually hope to develop them into degree-granting programs offering MAs and perhaps even PhDs in monitoring and evaluation.  There appears to be support for the initiative, but it remains to be seen what the final program will look like.

My fellow presenters and I had the opportunity to share the results of our workshop as part of a conference panel. The session was not as well attended as the workshop (approximately 10 people) but the conversations were productive. We discussed the list of evaluator competencies and principles that we generated.  The reaction was positive and we have been given the responsibility of taking the next step.  It feels like a big step.  There tends to be more talk than action in the development community. I don’t want this to fizzle out.  Thankfully, there are workshop participants and presenters who are eager to push the work forward with me.

Now that the conference is over, I have been reflecting on the experience.  More than ever I believe that we, as a field, can have an enormous impact on the governments, institutions, communities, and people dedicated to improving the lives of others.  That is why I got into evaluation, and the last few days have reinforced my commitment to the field.

Thank you for following my blog posts.  And thank you to John Gargani for giving me the opportunity to share my experiences at AfrEA.
