
Conference Blog: Catapult Labs 2012

Did you miss the Catapult Labs conference on May 19?  Then you missed something extraordinary.

But don’t worry, you can get the recap here.

The event was sponsored by Catapult Design, a nonprofit firm in San Francisco that uses the process and products of design to alleviate poverty in marginalized communities.  Their work spans the worlds of development, mechanical engineering, ethnography, product design, and evaluation.

That is really, really cool.

I find them remarkable and their approach refreshing.  Even more so because they are not alone.  The conference was very well attended by diverse professionals—from government, the nonprofit sector, the for-profit sector, and design—all doing similar work.

The day was divided into three sets of three concurrent sessions, each presented as a hands-on lab.  So, sadly, I could attend only one third of what was on offer.  My apologies to those who presented and are not included here.

I started the day by attending Democratizing Design: Co-creating With Your Users, presented by Catapult’s Heather Fleming.  It provided an overview of techniques designers use to include stakeholders in the design process.

Evaluators go to great lengths to include stakeholders.  We have broad, well-established approaches such as empowerment evaluation and participatory evaluation.  But the techniques designers use are largely unknown to evaluators.  I believe there is a great deal we can learn from designers in this area.

An example is games.  Heather organized a game in which we used beans as money.  Players chose which crops to plant, each with its own associated cost, risk profile, and potential return.  The expected payoff varied by gender, which was arbitrarily assigned to players.  After a few rounds the problem was clear—higher costs, lower returns, and greater risks for women increased their chances of financial ruin, and this had negative consequences for communities.
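
To make the cause-and-effect structure of such a game concrete, here is a minimal sketch (in Python) of how its payoff logic might be simulated.  The crop names, costs, returns, and gender penalties are illustrative assumptions, not the actual workshop parameters.

```python
import random

# Illustrative parameters only -- not the actual workshop rules.
CROPS = {
    "maize":  {"cost": 2, "gain": 5,  "fail": 0.2},   # cheap, modest return
    "coffee": {"cost": 5, "gain": 12, "fail": 0.4},   # costly, larger but riskier return
}

def play_round(beans, crop, is_woman):
    """One planting season; women pay more, earn less, and fail more often."""
    c = CROPS[crop]
    cost = c["cost"] + (1 if is_woman else 0)
    gain = c["gain"] - (2 if is_woman else 0)
    fail = c["fail"] + (0.1 if is_woman else 0)
    if beans < cost:
        return beans                         # cannot afford to plant this season
    beans -= cost
    if random.random() > fail:               # harvest succeeds
        beans += gain
    return beans

def ruin_rate(is_woman, players=1000, seasons=10, start=10):
    """Share of players who end up unable to afford even the cheapest crop."""
    cheapest = min(c["cost"] for c in CROPS.values())
    ruined = 0
    for _ in range(players):
        beans = start
        for _ in range(seasons):
            beans = play_round(beans, random.choice(list(CROPS)), is_woman)
        ruined += beans < cheapest
    return ruined / players

print("ruin rate, women:", ruin_rate(True))
print("ruin rate, men:  ", ruin_rate(False))
```

Running the simulation many times makes the game’s embedded assumption visible: with higher costs, lower returns, and greater risks, the women players drift toward ruin far more often than the men.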

I believe that evaluators could put games to good use.  Describing a social problem as a game requires stakeholders to express their cause-and-effect assumptions about the problem.  Playing with a group allows others to understand those assumptions intimately, comment upon them, and offer suggestions about how to solve the problem within the rules of the game (or perhaps change the rules to make the problem solvable).

I have never met a group of people who were more sincere in their pursuit of positive change.  And honest in their struggle to evaluate their impact.  I believe that impact evaluation is an area where evaluators have something valuable to share with designers.

That was the purpose of my workshop Measuring Social Impact: How to Integrate Evaluation & Design.  I presented a number of techniques and tools we use at Gargani + Company to design and evaluate programs.  They are part of a more comprehensive program design approach that Stewart Donaldson and I will be sharing this summer and fall in workshops and publications (details to follow).

The hands-on format of the lab made for a great experience.  I was able to watch participants work through the real-world design problems that I posed.  And I was encouraged by how quickly they were able to use the tools and techniques I presented to find creative solutions.

That made my task of providing feedback on their designs a joy.  We shared a common conceptual framework and were able to speak a common language.  Given the abstract nature of social impact, I was very impressed with that—and their designs—after less than 90 minutes of interaction.

I wrapped up the conference by attending Three Cups, Rosa Parks, and the Polar Bear: Telling Stories that Work, presented by Melanie Moore Kubo and Michaela Leslie-Rule from See Change.  They use stories as a vehicle for conducting (primarily) qualitative evaluations.  They call it story science.  A nifty idea.

I liked this session for two reasons.  First, Melanie and Michaela are expressive storytellers, so it was great fun listening to them speak.  Second, they posed a simple question—Is this story true?—that turns out to be amazingly complex.

We summarize, simplify, and translate meaning all the time.  Those of us who undertake (primarily) quantitative evaluations agonize over this because our standards for interpreting evidence are relatively clear but our standards for judging the quality of evidence are not.

For example, imagine that we perform a t-test to estimate a program’s impact.  The t-test indicates that the impact is positive, meaningfully large, and statistically significant.  We know how to interpret this result and what story we should tell—there is strong evidence that the program is effective.
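
For illustration, here is a minimal sketch of that kind of analysis using made-up data.  The group sizes, means, and the use of Cohen's d as the effect-size convention are assumptions for demonstration only.

```python
import numpy as np
from scipy import stats

# Made-up outcome scores for a program group and a comparison group.
rng = np.random.default_rng(0)
program    = rng.normal(loc=75, scale=10, size=60)
comparison = rng.normal(loc=68, scale=10, size=60)

# Independent-samples t-test of the difference in means.
t, p = stats.ttest_ind(program, comparison)

# Standardized mean difference (Cohen's d, assuming equal group sizes).
pooled_sd = np.sqrt((program.var(ddof=1) + comparison.var(ddof=1)) / 2)
d = (program.mean() - comparison.mean()) / pooled_sd

print(f"t = {t:.2f}, p = {p:.4f}, d = {d:.2f}")
# A positive, large, and statistically significant result is the basis for
# the story "there is strong evidence that the program is effective."
```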

But what if the outcome measure was not well aligned with the program’s activities? Or there were many cases with missing data?  Would our story still be true?  There is little consensus on where to draw the line between truth and fiction when quantitative evidence is flawed.

As Melanie and Michaela pointed out, it is critical that we strive to tell stories that are true, but equally important to understand and communicate our standards for truth.  Amen to that.

The icing on the cake was the conference evaluation.  Perhaps the best conference evaluation I have come across.

Everyone received four post-it notes, each a different color.  As a group, we were given a question to answer on a post-it of a particular color, and only a minute to answer the question.  Immediately afterward, the post-its were collected and displayed for all to view, as one would view art in a gallery.

Evaluation as art—I like that.  Immediate.  Intimate.  Transparent.

Gosh, I like designers.


Filed under Conference Blog, Design, Evaluation, Program Design, Program Evaluation

Toward a Taxonomy of Wicked Problems

Program designers and evaluators have become keenly interested in wicked problems.  More precisely, we are witnessing a second wave of interest—one that holds new promise for the design of social, educational, environmental, and cultural programs.

The concept of wicked problems was first introduced in the late 1960s by Horst Rittel, then at UC Berkeley.  It became a popular subject for authors in many disciplines, and writing on it grew through the 1970s and into the early 1980s (the first wave).  Interest then slowed until the late 1990s, when it grew again (the second wave).

A Google Ngram analysis illustrates these two waves of interest.

Rittel contrasted wicked problems with tame problems.  Various authors, including Rittel, have described the tame-wicked dichotomy in different ways.  Most are based on the 10 characteristics of wicked problems that Rittel introduced in the early 1970s.  Briefly…

Tame problems can be solved in isolation by an expert—the problems are relatively easy to define, the range of possible solutions can be fully enumerated in advance, stakeholders hold shared values related to the problems and possible solutions, and techniques exist to solve the problems as well as measure the success of implemented solutions.

Wicked problems are better addressed collectively by diverse groups—the problems are difficult to define, few if any possible solutions are known in advance, stakeholders disagree about underlying values, and we can neither solve the problems (in the sense that they can be eliminated) nor measure the success of implemented solutions.

In much of the writing that emerged during the first wave of interest, the tame-wicked dichotomy was the central theme.  It was argued that most problems of interest to policymakers are wicked, which limited the utility of the rational, quantitative, stepwise thinking that dominated policy planning, operations research, and management science at the time.  A new sort of thinking was needed.

In the writing that has emerged in the second wave, that new sort of thinking has been given many names—systems thinking, design thinking, complexity thinking, and developmental thinking, to name a few.  Each, supposedly, can tame what would otherwise be wicked.

Perhaps.

The arguments for “better ways of thinking” are weakened by the assumption that wicked and tame represent a dichotomy.  If most social problems met all 10 of Rittel’s criteria, we would be doomed.  We aren’t.

Social problems are more or less wicked, each in its own way.  Understanding how a problem is wicked, I believe, is what will enable us to think more effectively about social problems and to tame them more completely.

Consider two superficially similar examples that are wicked in different ways.

Contagious disease: We understand the biological mechanisms that would allow us to put an end to many contagious diseases.  In this sense, these diseases are tame problems.  However, we have not been able to eradicate all contagious diseases that we understand well.  The reason, in part, is that many people hold values that conflict with solutions that are, on a biological level, known to be effective.  For example, popular fear of vaccines may undermine the effectiveness of mass vaccination, or the behavioral changes needed to reduce infection rates may clash with local cultures.  In cases such as this, contagious diseases pose wicked problems because of conflicting values.  The design of programs to eradicate these diseases would need to take this source of wickedness into account, perhaps by including strong stakeholder engagement efforts or public education campaigns.

Cancer: We do not fully understand the biological mechanisms that would allow us to prevent and cure many forms of cancer.  At the same time, the behaviors that might reduce the risk of these cancers (such as healthy diet, regular exercise, not smoking, and avoiding exposure to certain chemicals) conflict with values that many people hold (such as the importance of personal freedom, desire for comfort and convenience, and the need to earn a living in certain industrial settings). In these cases, cancer poses wicked problems for two reasons—our lack of understanding and conflicting values.  This may or may not make it “more” wicked than eradicating well-understood contagious diseases; that is difficult to assess.  But it certainly makes it wicked in a different way, and the design of programs to end cancer would need to take that difference into account and address both sources of wickedness.

The two examples above are wicked problems, but they are wicked for different reasons.  Those reasons have important implications for program designers.  My interest over the next few months is to flesh out a more comprehensive taxonomy of wickedness and to unpack its design implications.  Stay tuned.


Filed under Design, Program Design

Data-Free Evaluation


George Bernard Shaw quipped, “If all economists were laid end to end, they would not reach a conclusion.”  However, economists should not be singled out on this account — there is an equal share of controversy awaiting anyone who uses theories to solve social problems.  There is a great deal of theory-based research in the social sciences, but it tends to be more theory than research, and with the universe of ideas dwarfing the available body of empirical evidence, there is little if any agreement on how to achieve practical results.  This was summed up well by another master of the quip, Mark Twain, who observed that the fascinating thing about science is how “one gets such wholesale returns of conjecture out of such a trifling investment of fact.”

Recently, economists have been in the hot seat because of the stimulus package.  However, it is the policymakers who depended on economic advice who are sweating because they were the ones who engaged in what I like to call data-free evaluation.  This is the awkward art of judging the merit of untried or untested programs.  Whether it takes the form of a president staunching an unprecedented financial crisis, funding agencies reviewing proposals for new initiatives, or individuals deciding whether to avail themselves of unfamiliar services, data-free evaluation is more the rule than the exception in the world of policies and programs.


Filed under Commentary, Design, Evaluation, Program Design, Program Evaluation, Research

Conflicts as Conflicting Theories of the World


Theories are like bellybuttons: everybody has one and all are surprisingly different.  Last Sunday Scott Atran and Jeremy Ginges wrote an opinion piece for the New York Times in which they described their research on beliefs about conflict and peace in the Middle East.  In brief, they argued that what many outsiders consider rational and logical solutions to the Israeli-Palestinian conflict, insiders consider irrational and illogical.  The reason has largely to do with sacred beliefs.  In spite of the name, these are not religious beliefs, per se, but rather any deeply held beliefs that sit at the core of our world views and are highly resistant to change.

In an earlier post I described beliefs in general as a pile of pick-up sticks, with the most resistant to change (the sacred beliefs) at the bottom of the pile.  Accordingly, altering sacred beliefs in any significant way will disturb all the rest.  At best this is exhausting, at worst traumatic.

Given the variety of beliefs that abound regarding social problems and solutions, it seems that program designers and policymakers are always treading upon someone’s sacred beliefs.  One of the practical questions we have been wrestling with is how to help groups of people with disparate world views reach consensus about programs and policies.  With the approach that we have been developing, we engage a broad range of stakeholders in a simple, iterative process in which they reveal what they believe and why.


Filed under Commentary, Design, Program Design