
Conference Blog: Catapult Labs 2012

Did you miss the Catapult Labs conference on May 19?  Then you missed something extraordinary.

But don’t worry, you can get the recap here.

The event was sponsored by Catapult Design, a nonprofit firm in San Francisco that uses the process and products of design to alleviate poverty in marginalized communities.  Their work spans the worlds of development, mechanical engineering, ethnography, product design, and evaluation.

That is really, really cool.

I find them remarkable and their approach refreshing.  Even more so because they are not alone.  The conference was very well attended by diverse professionals—from government, the nonprofit sector, the for-profit sector, and design—all doing similar work.

The day was divided into three sets of three concurrent sessions, each presented as hands-on labs.  So, sadly, I could attend only one third of what was on offer.  My apologies to those who presented and are not included here.

I started the day by attending Democratizing Design: Co-creating With Your Users presented by Catapult’s Heather Fleming.  It provided an overview of techniques designers use to include stakeholders in the design process.

Evaluators go to great lengths to include stakeholders.  We have broad, well-established approaches such as empowerment evaluation and participatory evaluation.  But the techniques designers use are largely unknown to evaluators.  I believe there is a great deal we can learn from designers in this area.

An example is games.  Heather organized a game in which we used beans as money.  Players chose which crops to plant, each with its own associated cost, risk profile, and potential return.  The expected payoff varied by gender, which was arbitrarily assigned to players.  After a few rounds the problem was clear—higher costs, lower returns, and greater risks for women increased their chances of financial ruin, and this had negative consequences for communities.
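
To make the game's mechanics concrete, here is a minimal Python sketch of a game along these lines.  The costs, payoffs, failure probabilities, and number of rounds are hypothetical stand-ins, not the values Heather used; the point is only that encoding assumptions as rules makes their consequences visible.

```python
import random

# A toy version of the bean game described above.
# All numbers are hypothetical, not the values used in the workshop.
ROUNDS = 10
STARTING_BEANS = 20
PROFILES = {
    # planting cost, probability the crop fails, payoff if it succeeds
    "men":   {"cost": 4, "p_fail": 0.2, "payoff": 7},
    "women": {"cost": 5, "p_fail": 0.3, "payoff": 6},
}

def play(n_per_group=50, seed=1):
    random.seed(seed)
    players = [{"gender": g, "beans": STARTING_BEANS}
               for g in PROFILES for _ in range(n_per_group)]
    for _ in range(ROUNDS):
        for p in players:
            if p["beans"] <= 0:      # ruined players sit out
                continue
            profile = PROFILES[p["gender"]]
            p["beans"] -= profile["cost"]
            if random.random() > profile["p_fail"]:
                p["beans"] += profile["payoff"]
    for g in PROFILES:
        group = [p for p in players if p["gender"] == g]
        ruined = sum(p["beans"] <= 0 for p in group)
        print(f"{g}: {ruined} of {len(group)} players ruined")

play()
```

Run it a few times with different seeds and the pattern the players noticed emerges quickly: the group facing higher costs, lower returns, and greater risk is ruined far more often.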

I believe that evaluators could put games to good use.  Describing a social problem as a game requires stakeholders to express their cause-and-effect assumptions about the problem.  Playing with a group allows others to understand those assumptions intimately, comment upon them, and offer suggestions about how to solve the problem within the rules of the game (or perhaps change the rules to make the problem solvable).

I have never met a group of people who were more sincere in their pursuit of positive change.  And honest in their struggle to evaluate their impact.  I believe that impact evaluation is an area where evaluators have something valuable to share with designers.

That was the purpose of my workshop Measuring Social Impact: How to Integrate Evaluation & Design.  I presented a number of techniques and tools we use at Gargani + Company to design and evaluate programs.  They are part of a more comprehensive program design approach that Stewart Donaldson and I will be sharing this summer and fall in workshops and publications (details to follow).

The hands-on format of the lab made for a great experience.  I was able to watch participants work through the real-world design problems that I posed.  And I was encouraged by how quickly they were able to use the tools and techniques I presented to find creative solutions.

That made my task of providing feedback on their designs a joy.  We shared a common conceptual framework and were able to speak a common language.  Given the abstract nature of social impact, I was very impressed with that—and their designs—after less than 90 minutes of interaction.

I wrapped up the conference by attending Three Cups, Rosa Parks, and the Polar Bear: Telling Stories that Work presented by Melanie Moore Kubo and Michaela Leslie-Rule from See Change.  They use stories as a vehicle for conducting (primarily) qualitative evaluations.  They call it story science.  A nifty idea.

I liked this session for two reasons.  First, Melanie and Michaela are expressive storytellers, so it was great fun listening to them speak.  Second, they posed a simple question—Is this story true?—that turns out to be amazingly complex.

We summarize, simplify, and translate meaning all the time.  Those of us who undertake (primarily) quantitative evaluations agonize over this because our standards for interpreting evidence are relatively clear but our standards for judging the quality of evidence are not.

For example, imagine that we perform a t-test to estimate a program’s impact.  The t-test indicates that the impact is positive, meaningfully large, and statistically significant.  We know how to interpret this result and what story we should tell—there is strong evidence that the program is effective.
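
For readers who want the concrete version, here is a minimal sketch of that scenario in Python using scipy.  The outcome scores are simulated for illustration, not data from any real program.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Hypothetical outcome scores for treatment and comparison groups.
treatment = rng.normal(loc=55, scale=10, size=80)
comparison = rng.normal(loc=50, scale=10, size=80)

t_stat, p_value = stats.ttest_ind(treatment, comparison)

# Cohen's d with a pooled standard deviation (equal group sizes).
pooled_sd = np.sqrt((treatment.var(ddof=1) + comparison.var(ddof=1)) / 2)
effect_size = (treatment.mean() - comparison.mean()) / pooled_sd

print(f"t = {t_stat:.2f}, p = {p_value:.4f}, d = {effect_size:.2f}")
# A positive, sizable d with a small p is the result we would read as
# "strong evidence that the program is effective."
```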

But what if the outcome measure was not well aligned with the program’s activities? Or there were many cases with missing data?  Would our story still be true?  There is little consensus on where to draw the line between truth and fiction when quantitative evidence is flawed.

As Melanie and Michaela pointed out, it is critical that we strive to tell stories that are true, but equally important to understand and communicate our standards for truth.  Amen to that.

The icing on the cake was the conference evaluation.  Perhaps the best conference evaluation I have come across.

Everyone received four post-it notes, each a different color.  As a group, we were given a question to answer on a post-it of a particular color, and only a minute to answer the question.  Immediately afterward, the post-its were collected and displayed for all to view, as one would view art in a gallery.

Evaluation as art—I like that.  Immediate.  Intimate.  Transparent.

Gosh, I like designers.



Measuring Impact: Integrating Evaluation & Design (Workshop May 19 in SF)

Interested in design for social change?  Curious about how to measure the social impact of your designs?  Check out my upcoming San Francisco workshop, Measuring Impact: Integrating Evaluation & Design, taking place on May 19 as part of CatapultLabs: Design Tools to Spark Social Change.

Come join in a day of hands-on labs with leading designers and organizations promoting social change.

Learn more about it at http://catapultlabs-2012.eventbrite.com/ (space is limited).



Conference Blog: The Harvard Social Enterprise Conference (Day 2)

What follows is a second series of short posts written while I attended the Social Enterprise Conference (#SECON12).  The conference (February 25-26) was presented by the Harvard Business School and the Harvard Kennedy School.

I spent much of the day attending a session entitled ActionStorm: A Workshop on Designing Actionable Innovations.  Suzi Sosa, Executive Director of the Dell Social Innovation Challenge, did a great job of introducing a design thinking process for those developing new social enterprises.  I plan to blog more about design thinking in a future post.

The approach presented in the workshop combined basic design thinking activities (mind mapping, logic modeling, and empathy mapping) that I believe can be of value to program evaluators as well as program designers.

I wonder, however, how well these methods fit the world of grant-funded programs.  Increasingly, the guidelines for grant proposals put forth by funding agencies specify the core elements of a program’s design.  It is common for funders to specify the minimum number of contact hours, desired length of service, and required service delivery methods.  When this is the case, designers may have little latitude to innovate, closing off opportunities to improve quality and efficiency.

Evaluation moment #4: Suzi constantly challenged us to specify how, where, and why we would measure the impact of the social enterprises we were discussing.  It was nice to see someone advocating for evaluation “baked into” program designs.  The participants were receptive, but they seemed somewhat daunted by the challenge of measuring impact.

Next, I attended Taking Education Digital: The Impact of Sharing Knowledge.  Chris Dede (Harvard Graduate School of Education) moderated.  I have always found his writing insightful and thought-provoking, and he did not disappoint today.  He provided a clear, compelling call for using technology to transform education.

His line of reasoning, as I understand it, is this: the educational system, as currently structured, lacks the capacity to meet federal and state mandates to increase (1) the quality of education delivered to students and (2) desired high school and college graduation rates.  Technology can play a transformational role by increasing the quality and capacity of the educational system.

Steve Carson followed by describing his work with MIT OpenCourseWare, which illustrated very nicely the distinction between innovation and transformation.  MIT OpenCourseWare was, at first, a humble idea: use the web to make it easier for MIT students and faculty to share learning-related materials.  Useful, but not innovative (as the word is typically used).

It turned out that the OpenCourseWare materials were being used by a much larger, more diverse group of formal and informal learners for wonderful, unanticipated educational purposes.  So without intending to, MIT had created a technology with none of the trappings of innovation yet tremendous potential to be transformational.

The moral of the story: social impact can be achieved in unexpected ways, and in cultures that value innovation, the most unexpected way is to do something unexceptional exceptionally well.

Next, Chris Sprague (OpenStudy) discussed his social learning start-up.  OpenStudy connects students to each other (so far 150,000 from 170 countries) in ways that promote learning.  Think of it as a worldwide study hall.

Social anything is hot in the tech world, but this is more than Facebook dressed in a scholar’s robe.  The intent is to create meaningful interactions around learning, tap expertise, and spark discussions that build understanding.  Think about how much you can learn about a subject simply by having a cup of coffee with an expert.  Imagine how much more you could learn if you were connected to more experts and did not need to sit next to them in a cafe to communicate.

The Pitch for Change took place in the afternoon.  It was the culmination of a process in which young social entrepreneurs give “elevator pitches” describing new ventures.  Those with the best pitches are selected to move on to the next round, where they make another pitch.

To my eyes, the final round combined the most harrowing elements of job interviews and Roman gladiatorial games: one person enters the arena, fights for survival for three minutes, and then looks to the crowd for thumbs up or down (see the picture at the top of this entry).  Of course, they don’t use thumbs; that would be too BC (before connectivity).  Instead, they use smartphones to vote via the web.

At the end, the winners were given big checks (literally, the checks were big; the dollar amounts, not so much).

But winners receive more than a little seed capital.  The top two winners are fast-tracked to the semifinal round of the 2013 Echoing Green Fellowship, the top four winners are fast-tracked to the semifinal round of the 2012 Dell Social Challenge, and the project that best makes use of technology to solve a social or environmental problem wins the Dell Technology Award.  Not bad for a few minutes in the arena.

Afterward, Dr. Judith Rodin, President of the Rockefeller Foundation, made the afternoon keynote speech, which focused on innovation.  She is a very good speaker and the audience was eager to hear about the virtues of new ideas.  It went over well.

Evaluation moment #5: Dr. Rodin made the case for measuring social impact.  She described it as essential to traditional philanthropy and to more recent efforts around social impact investing.  She noted that Rockefeller is developing its capacity in this area; however, evaluation remains a tough nut to crack.

The last session of the day was fantastic–and not just because an evaluator was on the panel.  It was entitled If at First You Don’t Succeed: The Importance of Prototyping and Iteration in Poverty Alleviation.  Prototyping is not just a subject of interest for me, it is a way of life.

Mike North (ReAllocate) discussed how he leverages volunteers, both individuals and corporations, to prototype useful, innovative products.  In particular, he described his ongoing efforts to prototype an affordable corrective brace for children in developing countries who are born with clubfoot.  You can learn more about it in this video.

Timothy Prestero (Design that Matters) walked us through the process he used to prototype the Firefly.  About 60% of newborns in developing countries suffer from jaundice, and about 10% of these go on to suffer brain damage or another disability.  The treatment is simple: exposure to blue light.  Firefly is the light source.

What is so hard about designing a lamp that shines a blue light?  Human behavior.

For example, hospital workers often put more than one baby in the same phototherapy device, which promotes the spread of infection.  Consequently, Firefly needed to be designed in such a way that only one baby could be treated at a time.  It also needed to be inexpensive in order to address the root cause of the problem behavior: too few devices in hospitals.  Understanding these behaviors, and designing with them in mind, requires lengthy prototyping.

Molly Kinder described her work at Development Innovation Ventures (DIV), a part of USAID.  DIV provides financial and other support to innovative projects selected through a competitive process.  In many ways, it looks more like a new-style venture fund than part of a government agency.  And DIV rigorously evaluates the impact of the projects it supports.

Evaluation moment #6: Wow, here is a new-style funder routinely doing high-quality evaluations (including but not limited to randomized controlled trials) in order to scale projects strategically.

Shawn Powers, from the Jameel Poverty Action Lab at MIT (J-PAL), talked about J-PAL’s efforts to conduct randomized trials in developing countries.  Not surprisingly, I am a big fan of J-PAL, which is dedicated to finding effective ways of improving the lives of the poor and bringing them to scale.

Looking back on the day:  The tight connection between design and evaluation was a prominent theme.  While exploring that theme, the discussion often turned to how evaluation can help social enterprises scale up.  It seems to me that we first need to scale up rigorous evaluation of social enterprises.  The J-PAL model is a good one, but academic institutions cannot scale quickly enough, or broadly enough, to meet the need.  So what do we do?

