Tag Archives: complexity

Curb Your “Malthusiasm”—How Evaluation Can Contribute to A Sustainable and Equitable Future

The theme of the upcoming 2014 annual conference of the American Evaluation Association (AEA) challenges participants to consider how evaluation can contribute to a sustainable and equitable future. It’s a fantastic challenge, one that cuts to the core of why evaluation matters—its potential to promote the public good locally and globally, today and in the future.

As I prepare my presentations, I want to share some of my thoughts and encourage others to take up the challenge.

The End is Nigh(ish)

The natural and social environments in which we live have limits. Exceed them, and society puts itself at risk.

It’s a simple idea, but one that did not enter the public’s thinking until Thomas Malthus wrote about it in the late 18th century. He famously predicted that, unless something changed, the British population would soon grow too large to feed itself. As it turns out, something did change—among other things, merchants imported food—and the crisis never came to pass.

Today, Malthus is strongly—and unjustly—associated with, as Lauren F. Landsburg put it, “a pessimistic prediction of the lock-step demise of a humanity doomed to starvation via overpopulation.” This jolly point of view is sometimes referred to as Malthusianism, and applied to all forms of catastrophic environmental and social decline.

The underlying concept Malthus articulated—there are real environmental and societal limits, and real consequences for exceeding them—is not controversial. There are, however, controversial perspectives related to it, including:

  • “Malthusiasm”: A passionate belief in—bordering on enthusiasm for—the inevitability of environmental and social collapse, especially in the short term.
  • Denialism: An equally passionate belief that predictions of environmental and social disaster, like those made by Malthus, never come to pass.
  • Self-correctionism: A belief that many small, undirected changes in individual and organizational behavior, related primarily to markets and other social structures, will naturally correct for problems in complex ways that may, at first, be difficult to notice.
  • Intentionalism: A belief that intentional action at the individual, organizational, and social levels—when well planned, executed, and evaluated—can not only help avoid disaster, but produce positive benefits that serve the public good.

I reject the first two. I hope for the third. I’ve spent my life working for the fourth—and this is where evaluation can play a significant role.

From Avoiding Disaster to Promoting Sustainability

I am as much for avoiding disaster as the next guy, but—rightly or wrongly—I expect more from organized human action. Like sustainability. It’s a concept that I and others strongly believe should guide the actions of every organization. It is also a slippery concept that we have not fully defined, making it a rough guide, at best.

So, connecting ideas from various sources (and a few of my own), I’ve developed a preliminary working definition based on a set of underlying principles (in parentheses):

Actions are sustainable when they do not affect future generations adversely (futurity), social groups differentially (equity), larger social and natural systems destructively (globality), or their own objectives negatively (complexity).

I’m not fully satisfied with the definition, but so far it has helped clarify my thinking.

Why Evaluation Matters

Unfortunately, action is only weakly linked to upholding these principles, in part because there is often a lack of information about how well the principles have been (or will be) met.

That is where evaluation comes in. If we use our skills to help design the actions of commercial and social enterprises in ways that uphold these principles, we serve the public good. If we evaluate programs in ways that shed light on these principles—which would require most of us to expand our field of view—we also serve the public good.

This is why evaluation matters—because it has the potential to serve the public good—and why we need to work together to make it matter more. That would truly be evaluation for a sustainable and equitable future.


Filed under AEA Conference, Conference Blog, Evaluation, Program Design, Program Evaluation

Conference Blog: Evaluation 2012 (Part 1)—Complexity

I have a great fondness for the American Evaluation Association and its Annual Conference.  At this year’s conference—Evaluation 2012—roughly 3,000 evaluators from around the world came together to share their work, rekindle old friendships, and establish new ones.  I was pleased and honored to be a part of it.

As I moved from session to session, I would ask those I met my favorite question—What have you learned that you will use in your practice?

Their answers—lists, connections, reflections—were filled with insights and surprises.  They helped me understand the wide range of ideas being discussed at the conference and how those ideas are likely to emerge in practice.

In the spirit of that question, I would like to share some thoughts about a few ideas that were thick in the air, starting with this post on complexity.

Complexity: The Undefined Elephant in the Room

The theme of the conference was Evaluation in Complex Ecologies: Relationships, Responsibilities, Relevance.  Not surprisingly, the concept of complexity received a great deal of attention.

Like many bits of evaluation jargon, it has a variety of legitimate formal and informal definitions.  Consequently, evaluators use the term in different ways at different times, which led a number of presenters to make statements that I found difficult to parse.

Here are a few that I jotted down:

“That’s not complex, it’s complicated.”

“A few simple rules can give rise to tremendous complexity.”

“Complexity can lead to startling simplicity.”

“A system can be simple and complicated at the same time.”

“Complexity can lead to highly stable systems or highly unstable systems.”

“Much of the time, people use the term complexity wrong.”

We are, indeed, a profession divided by a common language.

Why can’t we agree on a definition for complexity?

First, no other discipline has.  Perhaps that is too strong a statement—small sub-disciplines have developed common understandings of the term, but across those small groups there is little agreement.

Second, we cannot decide if complexity, simplicity, and complicatedness, however defined, are:

(A) Mutually exclusive

(B) Distinct but associated

(C) Inclusive and dependent

(D) All of the above

From what I can tell, the answer is (D).  That doesn’t help much, does it?

Third, we conflate the entities that we label as complex, complicated, or simple.  Over the past week, I heard the term complexity used to describe:

  • real-world structures such as social, environmental, and physical systems;
  • cognitive structures that we use to reason about real-world structures;
  • representations that we use to describe and communicate our cognitive structures;
  • computer models that we use to reveal the behavior of a system that is governed by a mathematically formal interpretation of our representations;
  • behaviors exhibited by real-world structures, cognitive structures, and computer models;
  • strategies that we develop to change the real world in a positive way;
  • human actions undertaken to implement change strategies; and
  • evaluations of our actions and strategies.

When we neglect to specify which entities we are discussing, or treat these entities as interchangeable, clarity is lost.

Where does this get us?

I hope it encourages us to do the following when we invoke the concept of complexity: define what we mean and identify what we are describing.  If we do that, we don’t need to agree—and we will be better understood.


Filed under AEA Conference, Evaluation, Program Evaluation

Conference Blog: Catapult Labs 2012

Did you miss the Catapult Labs conference on May 19?  Then you missed something extraordinary.

But don’t worry, you can get the recap here.

The event was sponsored by Catapult Design, a nonprofit firm in San Francisco that uses the process and products of design to alleviate poverty in marginalized communities.  Their work spans the worlds of development, mechanical engineering, ethnography, product design, and evaluation.

That is really, really cool.

I find them remarkable and their approach refreshing.  Even more so because they are not alone.  The conference was very well attended by diverse professionals—from government, the nonprofit sector, the for-profit sector, and design—all doing similar work.

The day was divided into three sets of three concurrent sessions, each presented as hands-on labs.  So, sadly, I could attend only one third of what was on offer.  My apologies to those who presented and are not included here.

I started the day by attending Democratizing Design: Co-creating With Your Users presented by Catapult’s Heather Fleming.  It provided an overview of techniques designers use to include stakeholders in the design process.

Evaluators go to great lengths to include stakeholders.  We have broad, well-established approaches such as empowerment evaluation and participatory evaluation.  But the techniques designers use are largely unknown to evaluators.  I believe there is a great deal we can learn from designers in this area.

An example is games.  Heather organized a game in which we used beans as money.  Players chose which crops to plant, each with its own associated cost, risk profile, and potential return.  The expected payoff varied by gender, which was arbitrarily assigned to players.  After a few rounds the problem was clear—higher costs, lower returns, and greater risks for women increased their chances of financial ruin, and this had negative consequences for communities.
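As a rough illustration of why those rules matter, here is a small Monte Carlo sketch of a game of this kind. The starting stakes, costs, payouts, and win probabilities are invented for illustration—they are not the values Heather used in her session.

```python
# A small Monte Carlo sketch of a crop-choice game of this kind.
# All parameters are invented for illustration.
import random

def ruin_rate(start=10, cost=4, win_prob=0.5, payout=9, rounds=8, trials=10_000):
    """Share of simulated players who go broke before the rounds end."""
    ruined = 0
    for _ in range(trials):
        beans = start
        for _ in range(rounds):
            if beans < cost:          # cannot afford to plant
                ruined += 1
                break
            beans -= cost             # pay to plant the crop
            if random.random() < win_prob:
                beans += payout       # harvest succeeds
    return ruined / trials

# Same game, but one group faces higher costs, lower payouts, and more risk.
print(f"Ruin rate, favorable terms:   {ruin_rate(cost=4, payout=9, win_prob=0.55):.0%}")
print(f"Ruin rate, unfavorable terms: {ruin_rate(cost=5, payout=8, win_prob=0.45):.0%}")
```

Even with these toy numbers, the unfavorable terms produce ruin noticeably more often—the same dynamic the players discovered with beans on the table.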

I believe that evaluators could put games to good use.  Describing a social problem as a game requires stakeholders to express their cause-and-effect assumptions about the problem.  Playing with a group allows others to understand those assumptions intimately, comment upon them, and offer suggestions about how to solve the problem within the rules of the game (or perhaps change the rules to make the problem solvable).

I have never met a group of people who were more sincere in their pursuit of positive change.  And honest in their struggle to evaluate their impact.  I believe that impact evaluation is an area where evaluators have something valuable to share with designers.

That was the purpose of my workshop Measuring Social Impact: How to Integrate Evaluation & Design.  I presented a number of techniques and tools we use at Gargani + Company to design and evaluate programs.  They are part of a more comprehensive program design approach that Stewart Donaldson and I will be sharing this summer and fall in workshops and publications (details to follow).

The hands-on format of the lab made for a great experience.  I was able to watch participants work through the real-world design problems that I posed.  And I was encouraged by how quickly they were able to use the tools and techniques I presented to find creative solutions.

That made my task of providing feedback on their designs a joy.  We shared a common conceptual framework and were able to speak a common language.  Given the abstract nature of social impact, I was very impressed with that—and their designs—after less than 90 minutes of interaction.

I wrapped up the conference by attending Three Cups, Rosa Parks, and the Polar Bear: Telling Stories that Work presented by Melanie Moore Kubo and Michaela Leslie-Rule from See Change.  They use stories as a vehicle for conducting (primarily) qualitative evaluations.  They call it story science.  A nifty idea.

I liked this session for two reasons.  First, Melanie and Michaela are expressive storytellers, so it was great fun listening to them speak.  Second, they posed a simple question—Is this story true?—that turns out to be amazingly complex.

We summarize, simplify, and translate meaning all the time.  Those of us who undertake (primarily) quantitative evaluations agonize over this because our standards for interpreting evidence are relatively clear but our standards for judging the quality of evidence are not.

For example, imagine that we perform a t-test to estimate a program’s impact.  The t-test indicates that the impact is positive, meaningfully large, and statistically significant.  We know how to interpret this result and what story we should tell—there is strong evidence that the program is effective.
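To make the example concrete, here is a minimal sketch of that comparison in Python, using entirely made-up data—the group means, sample sizes, and the two-sample t-test are illustrative assumptions, not details from any actual evaluation.

```python
# Minimal sketch of the hypothetical comparison described above (made-up data).
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Invented outcome scores for a program group and a comparison group.
program = rng.normal(loc=75, scale=10, size=120)
comparison = rng.normal(loc=70, scale=10, size=120)

impact = program.mean() - comparison.mean()
result = stats.ttest_ind(program, comparison)

print(f"Estimated impact: {impact:.1f} points "
      f"(t = {result.statistic:.2f}, p = {result.pvalue:.4f})")
# A positive, meaningfully large, statistically significant estimate supports the
# story "there is strong evidence that the program is effective"--provided the
# measure is well aligned with the program and missing data are not distorting it.
```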

But what if the outcome measure was not well aligned with the program’s activities? Or there were many cases with missing data?  Would our story still be true?  There is little consensus on where to draw the line between truth and fiction when quantitative evidence is flawed.

As Melanie and Michaela pointed out, it is critical that we strive to tell stories that are true, but equally important to understand and communicate our standards for truth.  Amen to that.

The icing on the cake was the conference evaluation.  Perhaps the best conference evaluation I have come across.

Everyone received four post-it notes, each a different color.  As a group, we were given a question to answer on a post-it of a particular color, and only a minute to answer the question.  Immediately afterward, the post-its were collected and displayed for all to view, as one would view art in a gallery.

Evaluation as art—I like that.  Immediate.  Intimate.  Transparent.

Gosh, I like designers.


Filed under Conference Blog, Design, Evaluation, Program Design, Program Evaluation

Conference Blog: The Wharton “Creating Lasting Change” Conference

How can corporations promote the greater good?  Can they do good and be profitable?  How well can we measure the good they are doing?

These were some of the questions explored at a recent Wharton School Conference entitled Creating Lasting Change: From Social Entrepreneurship to Sustainability in Retail.  I provide a brief recap of the event.  Then I discuss why I believe program evaluators, program designers, and corporations have a great deal to learn from each other.

The Location

The conference took place at Wharton’s stunning new San Francisco campus.  By stunning I mean drop-dead gorgeous.  Here is one of its many views.

An Unusual and Effective Conference

The conference was jointly organized by three entities within the Wharton School—the Jay H. Baker Retailing Center, the Initiative for Global Environmental Leadership, and the Wharton Program for Social Impact.

When I first read this I scratched my head.  A conference that combined the interests of any two made sense to me.  Combining the interests of all three seemed like a stretch.  I found—much to my delight—that the conference worked very well because of its two-panel structure.

Panel 1 addressed the social and environmental impact of new ventures; Panel 2 addressed the impact of large, established corporations.  This offered an opportunity to compare and contrast new with old, small with large, and risk takers with the risk averse.

Fascinating and enlightening.  I explain why after I describe the panels.

Panel 1: Social Entrepreneurship/Innovation

The first panel considered how entrepreneurs and venture capitalists can promote positive environmental and social change.

  • Andrew D’Souza, Chief Revenue Officer at Top Hat Monocle, discussed how his company developed web-based clickers for classrooms and online homework tools that are designed to promote learning—a social benefit that can be directly monetized.
  • Mike Young, Director of Technology Development at Innova Dynamics, described how his company’s social mission drives their development and commercialization of “disruptive advanced materials technologies for a sustainable future.”
  • Amy Errett, Partner at the venture capital firm Maveron, emphasized the firm’s belief that businesses focusing on a social mission tend to achieve financial success.
  • Susie Lee, Principal at TBL Capital, outlined her firm’s patient capital approach, which favors companies that balance their pursuit of social, environmental, and financial objectives.
  • Raghavan Anand, Chief Financial Officer at One Million Lights, moderated the panel.

Panel 2: Sustainability/CSR in the Retail Industry

The second panel discussed how large, established companies impact society and the natural world, and what it means for a corporation to act responsibly.

Christy Consler, Vice President of Sustainability at Safeway Inc., made the case that the large grocer (roughly 1,700 stores and 180,000 employees) needs to focus on sustainable, socially responsible operations to ensure that it has dependable sources for its product—food—as the world population swells by 2 billion over the next 35 years.

Lori Duvall, Director of Operational Sustainability at eBay Inc., summarized eBay’s sustainability efforts, which include solar power installations, reusable packaging, and community engagement.

Paul Dillinger, Senior Director-Global Design at Levi Strauss & Co., made an excellent presentation on the social and environmental consequences—positive and negative—of the fashion industry, and how the company is working to make a positive impact.

Shauna Sadowski, Director of Sustainability at Annie’s (you know, the company that makes the cute organic, bunny-shaped mac and cheese), discussed how bringing natural foods to the marketplace motivates sustainable, community-centered operations.

Barbara Kahn moderated.  She wins the prize for having the longest title—the Patty & Jay H. Baker Professor, Professor of Marketing; Director, Jay H. Baker Retailing Center—and from what I could tell, she deserves every bit of the title.

Measuring Social Impact

I was thrilled to find corporations, new and old, concerned with making the world a better place.  Business in general, and Wharton in particular, have certainly changed in the 20 years since I earned my MBA.

The unifying theme of the panels was impact.  Inevitably, that discussion turned from how corporations were working to make social and environmental impacts to how they were measuring impacts.  When it did, the word evaluation was largely absent, being replaced by metrics, measures, assessments, and indicators.  Evaluation, as a field and a discipline, appears to be largely unknown to the corporate world.

Echoing what I heard at the Harvard Social Enterprise Conference (day 1 and day 2), impact measurement was characterized as nascent, difficult, and elusive.  Everyone wants to do it; no one knows how.

I find this perplexing.  Is the innovation, operational efficiency, and entrepreneurial spirit of American corporations insufficient to crack the nut of impact measurement?

Without a doubt, measuring impact is difficult—but not for the reasons one might expect.  Perhaps the greatest challenge is defining what one means by impact.  This venerable concept has become a buzzword, signifying both more and less than it should for different people in different settings.  Clarifying what we mean simplifies the task of measurement considerably.  In this setting, two meanings dominated the discussion.

One was the intended benefit of a product or service.  Top Hat Monocle’s products are intended to increase learning.  Annie’s foods are intended to promote health.  Evaluators are familiar with this type of impact and how to measure it.  Difficult?  Yes.  It poses practical and technical challenges, to be sure.  Nascent and elusive?  No.  Evaluators have a wide range of tools and techniques that we use regularly to estimate impacts of this type.

The other dominant meaning was the consequences of operations.  Evaluators are probably less familiar with this type of impact.

Consider Levi’s.  In the past, 42 liters of fresh water were required to produce one pair of Levi’s jeans.  According to Paul Dillinger, the company has since produced about 13 million pairs using a more water-efficient process, reducing the total water required for these jeans from roughly 546 million liters to 374 million liters—an estimated savings of 172 million liters.

Is that a lot?  The Institute of Medicine estimates that one person requires about 1,000 liters of drinking water per year (2.2 to 3 liters per day, under a variety of assumptions)—so Levi’s saved enough drinking water to supply about 172,000 people for one year.  Not bad.
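For readers who want to retrace the arithmetic, here is a quick back-of-the-envelope check in Python. The figures are the ones quoted above; the calculation itself is my own rough reconstruction, not anything presented at the conference.

```python
# Back-of-the-envelope check of the water figures quoted above.
LITERS_PER_PAIR_OLD = 42          # liters of fresh water per pair, old process
PAIRS_PRODUCED = 13_000_000       # pairs made with the more efficient process
TOTAL_NEW_LITERS = 374_000_000    # liters actually used, as reported

total_old = LITERS_PER_PAIR_OLD * PAIRS_PRODUCED   # ~546 million liters
savings = total_old - TOTAL_NEW_LITERS             # ~172 million liters

DRINKING_WATER_PER_PERSON_YEAR = 1_000  # liters, per the Institute of Medicine figure
person_years = savings / DRINKING_WATER_PER_PERSON_YEAR

print(f"Old-process total: {total_old/1e6:.0f} million liters")
print(f"Savings: {savings/1e6:.0f} million liters "
      f"(~{person_years:,.0f} person-years of drinking water)")
```

The output reproduces the rough figures above: 546 million liters under the old process, 172 million liters saved, and about 172,000 person-years of drinking water.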

But operational impact is more complex than that.  Levi’s still used the equivalent of a year’s drinking water for 374,000 people, in places where potable water may be in short supply.  The water that was saved cannot easily be moved to where it may be needed more for drinking, irrigation, or sanitation.  If the water used to produce the jeans is not handled properly, it may contaminate larger supplies of fresh water, resulting in a net loss of potable water.  And the availability of more fresh water in a region can change behavior in ways that negate the savings, such as attracting new industries that depend on water or inducing wasteful water consumption.

Is it difficult to measure operational impact?  Yes.  Even estimating something as tangible as water use is challenging.  Elusive?  No.  We can produce impact estimates, although they may be rough.  Nascent?  Yes and no.  Measuring operational impact depends on modeling systems, testing assumptions, and gauging human behavior.  Evaluators have a long history of doing these things, although not in combination for the purpose of measuring operational impact.

It seems to me that evaluators and corporations could learn a great deal from each other.  It is a shame these two worlds are so widely separated.

Designing Corporate Social Responsibility Programs

With all the attention given to estimating the value of corporate social responsibility programs, the values underlying them were not fully explored.  Yet the varied and often conflicting values of shareholders and stakeholders pose the most significant challenge facing those designing these programs.

Why do I say that?  Because it has been that way for over 100 years.

The concept of corporate social responsibility has deep roots.  In 1909, William Tolman wrote about a trend he observed in manufacturing.  Many industrialists, by his estimation, were taking steps to improve the working conditions, pay, health, and communities of their employees.  He noted that these unprompted actions had various motives—a feeling that workers were owed the improvements, unqualified altruism, or the belief that the efforts would lead to greater profits.

Tolman placed a great deal of faith in the last motive.  Too much faith.  Twentieth-century industrial development was not characterized by rational, profit-maximizing companies competing to improve the lot of stakeholders in order to increase the wealth of shareholders.  On the contrary, making the world a better place typically entailed tradeoffs that shareholders found unacceptable.

So these early efforts failed.  The primary reason was that their designs did not align the values of shareholders and stakeholders.

Can the values of shareholders and stakeholders be more closely aligned today?  I believe they can be.  The founders of many new ventures, like Top Hat Monocle and Innova Dynamics, bring different values to their enterprises.  For them, Tolman’s nobler motives—believing that people deserve a better life and a desire to do something decent in the world—are the cornerstones of their company cultures.  Even in more established organizations—Safeway and Levi’s—there appears to be a cultural shift taking place.  And many venture capital firms are willing to take a patient capital approach, waiting longer and accepting lower returns, if it means they can promote a greater social good.

This is change for the better.  But I wonder if we, like Tolman, are putting too much faith in win-win scenarios in which we imagine shareholders profit and stakeholders benefit.

It is tempting to conclude that corporate social responsibility programs are win-win.  The most visible examples, like those presented at this conference, are.  What lies outside of our field of view, however, are the majority of rational, profit-seeking corporations that are not adopting similar programs.  Are we to conclude that these enterprises are not as rational as they should be? Or have we yet to design corporate responsibility programs that resolve the shareholder-stakeholder tradeoffs that most companies face?

Again, there seems to be a great deal that program designers, who are experienced at balancing competing values, and corporations can learn from each other…if only the two worlds met.


Filed under Commentary, Conference Blog, Design, Evaluation, Program Design, Program Evaluation

Running Hot and Cold for Mixed Methods: Jargon, Jongar, and Code

Jargon is the name we give to big labels placed on little ideas. What should we call little labels placed on big ideas? Jongar, of course.

A good example of jongar in evaluation is the term mixed methods. I run hot and cold for mixed methods. I praise them in one breath and question them in the next, confusing those around me.

Why? Because mixed methods is jongar.

Recently, I received a number of comments through LinkedIn about my last post. A bewildered reader asked how I could write that almost every evaluation can claim to use a mixed-methods approach. It’s true, I believe that almost every evaluation can claim to be a mixed-methods evaluation, but I don’t believe that many—perhaps most—should.

Why? Because mixed methods is also jargon.

Confused? So were Abbas Tashakkori and John Creswell. In 2007, they put together a very nice editorial for the first issue of the Journal of Mixed Methods Research. In it, they discussed the difficulty they faced as editors who needed to define the term mixed methods. They wrote:

…we found it necessary to distinguish between mixed methods as a collection and analysis of two types of data (qualitative and quantitative) and mixed methods as the integration of two approaches to research (quantitative and qualitative).

By the first definition, mixed methods is jargon—almost every evaluation uses more than one type of data, so the definition attaches a special label to a trivial idea. This is the view that I expressed in my previous post.

By the second definition, which is closer to my own perspective, mixed methods is jongar—two simple words struggling to convey a complex concept.

My interpretation of the second definition is as follows:

A mixed-methods evaluation is one that establishes in advance a design that explicitly lays out a thoughtful, strategic integration of qualitative and quantitative methods to accomplish a critical purpose that either qualitative or quantitative methods alone could not.

Although I like this interpretation, it places a burden on the adjective mixed that it cannot support. In doing so, my interpretation trades one old problem—the difficulty of distinguishing mixed-methods evaluations from other types of evaluation—for a number of new problems. Here are three of them:

  • Evaluators often amend their evaluation designs in response to unanticipated or dynamic circumstances—so what does it mean to establish a design in advance?
  • Integration is more than having quantitative and qualitative components in a study design—how much more and in what ways?
  • A mixed-methods design should be introduced when it provides a benefit that would not be realized otherwise—how do we establish the counterfactual?

These complex ideas are lurking behind simple words. That’s why the words are jongar and why the ideas they represent may be ignored.

Technical terms—especially jargon and jongar—can also be code. Code is the use of technical terms in real-world settings to convey a subtle, non-technical message, especially a controversial message.

For example, I have found that in practice funders and clients often propose mixed-methods evaluations to signal—in code—that they seek an ideological compromise between qualitative and quantitative perspectives. This is common when program insiders put greater faith in qualitative methods and outsiders put greater faith in quantitative methods.

When this is the case, I believe that mixed methods provide an illusory compromise between imagined perspectives.

The compromise is illusory because mixed methods are not a middle ground between qualitative and quantitative methods, but a new method that emerges from the integration of the two. At least by the second definition of mixed methods that I prefer.

The perspectives are imagined because they concern how results based on particular methods may be incorrectly perceived or improperly used by others in the future. Rather than leap to a mixed-methods design, evaluators should discuss these imagined concerns with stakeholders in advance to determine how to best accommodate them—with or without mixed methods. In many funder-grantee-evaluator relationships, however, this sort of open dialogue may not be possible.

This is why I run hot and cold for mixed methods. I value them. I use them. Yet, I remain wary of labeling my work as such because the label can be…

  • jargon, in which case it communicates nothing;
  • jongar, in which case it cannot communicate enough; or
  • code, in which case it attempts to communicate through subtlety what should be communicated through open dialogue.

Too bad—the ideas underlying mixed methods are incredibly useful.


Filed under Commentary, Evaluation, Evaluation Quality, Program Evaluation, Research

Toward a Taxonomy of Wicked Problems

Program designers and evaluators have become keenly interested in wicked problems.  More precisely, we are witnessing a second wave of interest—one that holds new promise for the design of social, educational, environmental, and cultural programs.

The concept of wicked problems was first introduced in the late 1960s by Horst Rittel, then at UC Berkeley.  It became a popular subject for authors in many disciplines, and writing on it grew through the 1970s and into the early 1980s (the first wave).  Writing then slowed until the late 1990s, when interest grew again (the second wave).

Here are the results of a Google Ngram analysis illustrating the two waves of interest.

Rittel contrasted wicked problems with tame problems.  Various authors, including Rittel, have described the tame-wicked dichotomy in different ways.  Most are based on the 10 characteristics of wicked problems that Rittel introduced in the early 1970s.  Briefly…

Tame problems can be solved in isolation by an expert—the problems are relatively easy to define, the range of possible solutions can be fully enumerated in advance, stakeholders hold shared values related to the problems and possible solutions, and techniques exist to solve the problems as well as measure the success of implemented solutions.

Wicked problems are better addressed collectively by diverse groups—the problems are difficult to define, few if any possible solutions are known in advance, stakeholders disagree about underlying values, and we can neither solve the problems (in the sense that they can be eliminated) nor measure the success of implemented solutions.

In much of the writing that emerged during the first wave of interest, the tame-wicked dichotomy was the central theme.  It was argued that most problems of interest to policymakers are wicked, which limited the utility of the rational, quantitative, stepwise thinking that dominated policy planning, operations research, and management science at the time.  A new sort of thinking was needed.

In the writing that has emerged in the second wave, that new sort of thinking has been given many names—systems thinking, design thinking, complexity thinking, and developmental thinking, to name a few.  Each, supposedly, can tame what would otherwise be wicked.

Perhaps.

The arguments for “better ways of thinking” are weakened by the assumption that wicked and tame represent a dichotomy.  If most social problems met all 10 of Rittel’s criteria, we would be doomed.  We aren’t.

Social problems are more or less wicked, each in its own way.  Understanding how a problem is wicked, I believe, is what will enable us to think more effectively about social problems and to tame them more completely.

Consider two superficially similar examples that are wicked in different ways.

Contagious disease: We understand the biological mechanisms that would allow us to put an end to many contagious diseases.  In this sense, these diseases are tame problems.  However, we have not been able to eradicate all contagious diseases that we understand well.  The reason, in part, is that many people hold values that conflict with solutions that are, on a biological level, known to be effective.  For example, popular fear of vaccines may undermine the effectiveness of mass vaccination, or the behavioral changes needed to reduce infection rates may clash with local cultures.  In cases such as this, contagious diseases pose wicked problems because of conflicting values.  The design of programs to eradicate these diseases would need to take this source of wickedness into account, perhaps by including strong stakeholder engagement efforts or public education campaigns.

Cancer: We do not fully understand the biological mechanisms that would allow us to prevent and cure many forms of cancer.  At the same time, the behaviors that might reduce the risk of these cancers (such as healthy diet, regular exercise, not smoking, and avoiding exposure to certain chemicals) conflict with values that many people hold (such as the importance of personal freedom, desire for comfort and convenience, and the need to earn a living in certain industrial settings). In these cases, cancer poses wicked problems for two reasons—our lack of understanding and conflicting values.  This may or may not make it “more” wicked than eradicating well-understood contagious diseases; that is difficult to assess.  But it certainly makes it wicked in a different way, and the design of programs to end cancer would need to take that difference into account and address both sources of wickedness.

The two examples above are wicked problems, but they are wicked for different reasons.  Those reasons have important implications for program designers.  My interest over the next few months is to flesh out a more comprehensive taxonomy of wickedness and to unpack its design implications.  Stay tuned.


Filed under Design, Program Design

It’s a Gift to Be Simple


Theory-based evaluation acknowledges that, intentionally or not, all programs depend on the beliefs influential stakeholders have about the causes and consequences of effective social action. These beliefs are what we call theories, and they guide us when we design, implement, and evaluate programs.

Theories live (imperfectly) in our minds. When we want to clarify them for ourselves or communicate them to others, we represent them as some combination of words and pictures. A popular representation is the ubiquitous logic model, which typically takes the form of box-and-arrow diagrams or relational matrices.

The common wisdom is that developing a logic model helps program staff and evaluators develop a better understanding of a program, which in turn leads to more effective action.

Not to put too fine a point on it, this last statement is itself a representation of a theory of logic models. I represented the theory with words, which have their limits; a different form of representation might reveal, hide, or distort other aspects of the theory. In this case, my theory is simple and my representation is simple, so you quickly get the gist of my meaning. Simplicity has its virtues.

It also has its perils. A chief criticism of logic models is that they fail to promote effective action because they are far too simple to represent the complexity inherent in a program, its participants, or its social value. This criticism has become more vigorous over time and deserves attention. In considering it, however, I find myself drawn to the other side of the argument—not because I am especially wedded to logic models, but because I want to defend the virtues of simplicity.


Filed under Commentary, Evaluation, Program Evaluation