Conference Blog: Catapult Labs 2012

Did you miss the Catapult Labs conference on May 19?  Then you missed something extraordinary.

But don’t worry, you can get the recap here.

The event was sponsored by Catapult Design, a nonprofit firm in San Francisco that uses the process and products of design to alleviate poverty in marginalized communities.  Their work spans the worlds of development, mechanical engineering, ethnography, product design, and evaluation.

That is really, really cool.

I find them remarkable and their approach refreshing.  Even more so because they are not alone.  The conference was very well attended by diverse professionals—from government, the nonprofit sector, the for-profit sector, and design—all doing similar work.

The day was divided into three sets of three concurrent sessions, each presented as a hands-on lab.  So, sadly, I could attend only one third of what was on offer.  My apologies to those who presented and are not included here.

I started the day by attending Democratizing Design: Co-creating With Your Users, presented by Catapult’s Heather Fleming.  It provided an overview of techniques designers use to include stakeholders in the design process.

Evaluators go to great lengths to include stakeholders.  We have broad, well-established approaches such as empowerment evaluation and participatory evaluation.  But the techniques designers use are largely unknown to evaluators.  I believe there is a great deal we can learn from designers in this area.

An example is games.  Heather organized a game in which we used beans as money.  Players chose which crops to plant, each with its own associated cost, risk profile, and potential return.  The expected payoff varied by gender, which was arbitrarily assigned to players.  After a few rounds the problem was clear—higher costs, lower returns, and greater risks for women increased their chances of financial ruin, and this had negative consequences for communities.

I believe that evaluators could put games to good use.  Describing a social problem as a game requires stakeholders to express their cause-and-effect assumptions about the problem.  Playing with a group allows others to understand those assumptions intimately, comment upon them, and offer suggestions about how to solve the problem within the rules of the game (or perhaps change the rules to make the problem solvable).
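
To make that idea concrete, here is a rough sketch of how the cause-and-effect assumptions of a game like Heather’s might be written down in code.  The crops, numbers, and penalties are invented for illustration; they are not the actual rules from the workshop.

```python
# A toy version of a crop-planting game like the one described above.
# All values are invented for illustration; they are not the rules or
# numbers used in the workshop game.
import random

# Each crop has a planting cost (in beans), a probability of failure,
# and a return multiplier on the cost if the harvest succeeds.
CROPS = {
    "maize":  {"cost": 4, "p_fail": 0.2, "multiplier": 2.0},
    "cotton": {"cost": 6, "p_fail": 0.4, "multiplier": 3.0},
}

# The game's core assumption: higher costs, lower returns, and greater
# risks for women than for men.
ADJUSTMENTS = {
    "man":   {"cost": 1.0, "p_fail": 0.0, "multiplier": 1.0},
    "woman": {"cost": 1.5, "p_fail": 0.1, "multiplier": 0.8},
}

def play(gender, starting_beans=20, rounds=10):
    """Play one game; return True if the player ends in financial ruin."""
    beans = starting_beans
    adj = ADJUSTMENTS[gender]
    for _ in range(rounds):
        crop = random.choice(list(CROPS.values()))
        cost = crop["cost"] * adj["cost"]
        if beans < cost:
            return True  # cannot afford to plant anything: ruin
        beans -= cost
        if random.random() > crop["p_fail"] + adj["p_fail"]:  # harvest succeeds
            beans += cost * crop["multiplier"] * adj["multiplier"]
    return beans <= 0

def ruin_rate(gender, games=10_000):
    """Estimate the share of games that end in ruin for a given gender."""
    return sum(play(gender) for _ in range(games)) / games

if __name__ == "__main__":
    print("ruin rate, men:  ", ruin_rate("man"))
    print("ruin rate, women:", ruin_rate("woman"))
```

Even a sketch this crude forces the assumptions into the open, where others can question the costs, the risks, or the rules themselves.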

I have never met a group of people who were more sincere in their pursuit of positive change.  And more honest in their struggle to evaluate their impact.  I believe that impact evaluation is an area where evaluators have something valuable to share with designers.

That was the purpose of my workshop Measuring Social Impact: How to Integrate Evaluation & Design.  I presented a number of techniques and tools we use at Gargani + Company to design and evaluate programs.  They are part of a more comprehensive program design approach that Stewart Donaldson and I will be sharing this summer and fall in workshops and publications (details to follow).

The hands-on format of the lab made for a great experience.  I was able to watch participants work through the real-world design problems that I posed.  And I was encouraged by how quickly they were able to use the tools and techniques I presented to find creative solutions.

That made my task of providing feedback on their designs a joy.  We shared a common conceptual framework and were able to speak a common language.  Given the abstract nature of social impact, I was very impressed with that—and their designs—after less than 90 minutes of interaction.

I wrapped up the conference by attending Three Cups, Rosa Parks, and the Polar Bear: Telling Stories that Work, presented by Melanie Moore Kubo and Michaela Leslie-Rule from See Change.  They use stories as a vehicle for conducting (primarily) qualitative evaluations.  They call it story science.  A nifty idea.

I liked this session for two reasons.  First, Melanie and Michaela are expressive storytellers, so it was great fun listening to them speak.  Second, they posed a simple question—Is this story true?—that turns out to be amazingly complex.

We summarize, simplify, and translate meaning all the time.  Those of us who undertake (primarily) quantitative evaluations agonize over this because our standards for interpreting evidence are relatively clear but our standards for judging the quality of evidence are not.

For example, imagine that we perform a t-test to estimate a program’s impact.  The t-test indicates that the impact is positive, meaningfully large, and statistically significant.  We know how to interpret this result and what story we should tell—there is strong evidence that the program is effective.
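
For readers who like to see the mechanics, here is a minimal sketch of that scenario with simulated data.  The group sizes, scores, and effect are made up; this is not data from any real program.

```python
# A minimal sketch of the t-test scenario described above, using
# simulated outcome scores rather than data from any real program.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Hypothetical outcome scores for program and comparison groups.
program    = rng.normal(loc=55, scale=10, size=100)
comparison = rng.normal(loc=50, scale=10, size=100)

impact = program.mean() - comparison.mean()
t_stat, p_value = stats.ttest_ind(program, comparison)

print(f"estimated impact: {impact:.1f} points")
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")

# By convention, a positive, sizable impact with p below 0.05 is read as
# strong evidence that the program is effective.
```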

But what if the outcome measure was not well aligned with the program’s activities? Or there were many cases with missing data?  Would our story still be true?  There is little consensus on where to draw the line between truth and fiction when quantitative evidence is flawed.

As Melanie and Michaela pointed out, it is critical that we strive to tell stories that are true, but equally important to understand and communicate our standards for truth.  Amen to that.

The icing on the cake was the conference evaluation.  Perhaps the best conference evaluation I have come across.

Everyone received four post-it notes, each a different color.  As a group, we were given a question to answer on a post-it of a particular color, and only a minute to answer it.  Immediately afterward, the post-its were collected and displayed for all to view, as one would view art in a gallery.

Evaluation as art—I like that.  Immediate.  Intimate.  Transparent.

Gosh, I like designers.


Evaluation Capacity Building at the African Evaluation Association Conference (#3)

From Tarek Azzam in Accra, Ghana: Yesterday was the first day of the AfrEA Conference and it was busy.  I, along with a group of colleagues, presented a workshop on developing evaluation capacity.  It was well attended—almost 60 people—and the discussion was truly inspiring.

Much of our conversation related to how development programs are typically evaluated by experts who are not only external to the organization, but external to the country.  Out-of-country evaluators typically know a great deal about evaluation, and often they do a fantastic job, but their cultural competencies vary tremendously, severely limiting the utility of their work.  When out-of-country evaluators complete their evaluations, they return home and their evaluation expertise leaves with them.  Our workshop participants said they wanted to build evaluation capacity in Africa for Africans because it was the best way to strengthen evaluations and programs.  So we facilitated a discussion of how to make that happen.

At first, the discussion was limited to what participants believed were the deficits of local African evaluators.  This continued until one attendee stood up and passionately described what local evaluators bring to an evaluation that is unique and advantageous.   Suddenly, the entire conversation turned around and participants began discussing how a deep understanding of local contexts, governmental systems, and history improves every step of the evaluation process, from the feasibility of designs to the use of results.  This placed the deficiencies of local evaluators listed previously—most of which were technical—in crisp perspective.  You can greatly advance your understanding of quantitative methods in a few months; you cannot expect to build a deep understanding of a place and its people in the same time.

The next step is to bring the conversation we had in the workshop to the wider AfrEA Conference.  I will begin that process in a panel discussion that takes place later today. My objective is to use the panel to develop a list of strategic principles that can guide future evaluation capacity building efforts. If the principles reflect the values, strengths, and knowledge of those who want to develop their capacity, then the principles can be used to design meaningful capacity building efforts.  It should be interesting—I will keep you posted.


What the Hell is Quality?

In Zen and the Art of Motorcycle Maintenance, an exasperated Robert Pirsig famously asked, “What the hell is quality?” and expended a great deal of energy trying to work out an answer.  As I find myself considering the meaning of quality evaluation, the theme of the upcoming 2010 Conference of the American Evaluation Association, it feels like déjà vu all over again.  There are countless definitions of quality floating about (for a short list, see Garvin, 1984), but arguably few, if any, examples of the concept being applied to modern evaluation practice.  So what the hell is quality evaluation?  And will I need to work out an answer for myself?

Luckily, there is some agreement out there.  Quality is usually thought of as an amalgam of multiple criteria, and it is judged by comparing the characteristics of an actual product or service to those criteria.

Isn’t this exactly what evaluators are trained to do?

Yes.  And judging quality in this way poses some practical problems that will be familiar to evaluators:

Who devises the criteria?
Evaluations serve many, often competing, interests.  Funders, clients, direct stakeholders, and professional peers make the short list.  All have something to say about what makes an evaluation high quality, but they do not have equal clout.  Some are influential because they have market power (they pay for evaluation services).  Others are influential because they have standing in the profession (they are considered experts or thought leaders).  Some are influential because they have both (funders), and others lack influence because they have neither (direct stakeholders).  More on this in a future blog.

Who makes the comparison?
Quality criteria may be devised by one group and then used by another to judge quality.  For example, funders may establish criteria and then hire independent evaluators (professional peers) who use the criteria to judge the quality of evaluations.  This is what happens when evaluation proposals are reviewed and ongoing evaluations are monitored.  More on this in a future blog.

How is the comparison made?
Comparisons can be made in any number of ways, but we can (imperfectly) lump them into two approaches—the explicit, cerebral, and systematic approach, and the implicit, intuitive, and inconsistent approach.  Individuals tend to judge quality in the latter fashion.  It is not a bad way to go about things, especially when considering everyday purchases (a pair of sneakers or a tuna fish sandwich).  When considering evaluation, however, it would seem best to judge quality in the former fashion.  But is it?  More on this in a future blog.

So what the hell is quality?  This is where I propose an answer that I hope is simple yet covers most of the relevant issues facing our profession.  Quality evaluation comprises three distinct things—each important on its own, but reflecting quality only in combination.  They are:

Standards
When the criteria used to judge quality come from those with professional standing, the criteria describe an evaluation that meets professional standards.  Standards focus on technical and nontechnical attributes of an evaluation that are under the direct control of the evaluator.  Perhaps the two best examples of this are the Program Evaluation Standards and the Program Evaluations Metaevaluation Checklist.

Satisfaction
When the criteria used to judge quality come from those with market power, the criteria describe an evaluation that would satisfy paying customers.  Satisfaction focuses on whether expectations—reasonable or unreasonable, documented in a contract or not—are met by the evaluator.  Collectively, these expectations define the demand for evaluation in the marketplace.

Empowerment
When the criteria used to judge quality come from direct stakeholders with neither professional standing nor market power, the criteria change the power dynamic of the evaluation.  Empowerment evaluation and participatory evaluation are perhaps the two best examples of evaluation approaches that look to those served by programs to help define a quality evaluation.

Standards, satisfaction, and empowerment are related, but they are not interchangeable.  One can be dissatisfied with an evaluation that exceeds professional standards, or empowered by an evaluation with which funders were not satisfied.  I will argue that the quality of an evaluation should be measured against all three sets of criteria.  Is that feasible?  Desirable?  That is what I will hash out over the next few weeks. 
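
To illustrate what judging against all three sets of criteria might look like, here is a toy sketch.  The criteria below are invented placeholders, not items drawn from the Program Evaluation Standards or any other instrument; the point is only that the three scores sit side by side rather than being averaged into one.

```python
# A toy rubric that scores an evaluation separately against standards,
# satisfaction, and empowerment criteria. The criteria are invented
# placeholders for illustration only.
from dataclasses import dataclass

@dataclass
class CriteriaSet:
    source: str          # who devises the criteria
    criteria: list[str]  # what they ask of the evaluation

RUBRIC = [
    CriteriaSet("professional peers (standards)",
                ["methods are defensible", "reporting is accurate"]),
    CriteriaSet("funders and clients (satisfaction)",
                ["questions answered", "delivered on time and on budget"]),
    CriteriaSet("direct stakeholders (empowerment)",
                ["stakeholders shaped the questions", "results are usable by them"]),
]

def judge(evaluation_meets: dict[str, bool]) -> dict[str, float]:
    """Score an evaluation against each criteria set, reported separately.

    The scores are kept side by side rather than averaged, because the
    three sets are related but not interchangeable.
    """
    return {
        cs.source: sum(evaluation_meets.get(c, False) for c in cs.criteria) / len(cs.criteria)
        for cs in RUBRIC
    }

if __name__ == "__main__":
    example = {
        "methods are defensible": True,
        "reporting is accurate": True,
        "questions answered": True,
        "delivered on time and on budget": False,
        "stakeholders shaped the questions": False,
        "results are usable by them": False,
    }
    for source, score in judge(example).items():
        print(f"{source}: {score:.0%}")
```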
