Category Archives: Conference Blog

Measuring Impact: Integrating Evaluation & Design (Workshop May 19 in SF)

Interested in design for social change?  Curious about how to measure the social impact of your designs?  Check out my upcoming San Francisco workshop, Measuring Impact: Integrating Evaluation & Design, taking place on May 19 as part of CatapultLabs: Design Tools to Spark Social Change.

Come join in a day of hands-on labs with leading designers and organizations promoting social change.

Learn more about it at http://catapultlabs-2012.eventbrite.com/ (space is limited).


Conference Blog: The Wharton “Creating Lasting Change” Conference

How can corporations promote the greater good?  Can they do good and be profitable?  How well can we measure the good they are doing?

These were some of the questions explored at a recent Wharton School Conference entitled Creating Lasting Change: From Social Entrepreneurship to Sustainability in Retail.  I provide a brief recap of the event.  Then I discuss why I believe program evaluators, program designers, and corporations have a great deal to learn from each other.

The Location

The conference took place at Wharton’s stunning new San Francisco campus.  By stunning I mean drop-dead gorgeous.  Here is one of its many views.

An Unusual and Effective Conference

The conference was jointly organized by three entities within the Wharton School—the Jay H. Baker Retailing Center, the Initiative for Global Environmental Leadership, and the Wharton Program for Social Impact.

When I first read this I scratched my head.  A conference that combined the interests of any two made sense to me.  Combining the interests of all three seemed like a stretch.  I found—much to my delight—that the conference worked very well because of its two-panel structure.

Panel 1 addressed the social and environmental impact of new ventures; Panel 2 addressed the impact of large, established corporations.  This offered an opportunity to compare and contrast new with old, small with large, and risk takers with the risk averse.

Fascinating and enlightening.  I explain why after I describe the panels.

Panel 1: Social Entrepreneurship/Innovation

The first panel considered how entrepreneurs and venture capitalists can promote positive environmental and social change.

  • Andrew D’Souza, Chief Revenue Officer at Top Hat Monocle, discussed how his company developed web-based clickers for classrooms and online homework tools that are designed to promote learning—a social benefit that can be directly monetized.
  • Mike Young, Director of Technology Development at Innova Dynamics, described how his company’s social mission drives their development and commercialization of “disruptive advanced materials technologies for a sustainable future.”
  • Amy Errett, Partner at the venture capital firm Maveron, emphasized the firm’s belief that businesses focusing on a social mission tend to achieve financial success.
  • Susie Lee, Principal at TBL Capital, outlined her firm’s patient capital approach, which favors companies that balance their pursuit of social, environmental, and financial objectives.
  • Raghavan Anand, Chief Financial Officer at One Million Lights, moderated the panel.

Panel 2: Sustainability/CSR in the Retail Industry

The second panel discussed how large, established companies impact society and the natural world, and what it means for a corporation to act responsibly.

Christy Consler, Vice President of Sustainability at Safeway Inc., made the case that the large grocer (roughly 1,700 stores and 180,000 employees) needs to focus on sustainable, socially responsible operations to ensure that it has dependable sources for its product—food—as the world population swells by 2 billion over the next 35 years.

Lori Duvall, Director of Operational Sustainability at eBay Inc., summarized eBay’s sustainability efforts, which include solar power installations, reusable packaging, and community engagement.

Paul Dillinger, Senior Director-Global Design at Levi Strauss & Co., made an excellent presentation on the social and environmental consequences—positive and negative—of the fashion industry, and how the company is working to make a positive impact.

Shauna Sadowski, Director of Sustainability at Annie’s (you know, the company that makes the cute organic, bunny-shaped mac and cheese), discussed how bringing natural foods to the marketplace motivates sustainable, community-centered operations.

Barbara Kahn moderated.  She wins the prize for having the longest title—the Patty & Jay H. Baker Professor, Professor of Marketing; Director, Jay H. Baker Retailing Center—and from what I could tell, she deserves every bit of the title.

Measuring Social Impact

I was thrilled to find corporations, new and old, concerned with making the world a better place.  Business in general, and Wharton in particular, have certainly changed in the 20 years since I earned my MBA.

The unifying theme of the panels was impact.  Inevitably, that discussion turned from how corporations were working to make social and environmental impacts to how they were measuring impacts.  When it did, the word evaluation was largely absent, being replaced by metrics, measures, assessments, and indicators.  Evaluation, as a field and a discipline, appears to be largely unknown to the corporate world.

Echoing what I heard at the Harvard Social Enterprise Conference (day 1 and day 2), speakers characterized impact measurement as nascent, difficult, and elusive.  Everyone wants to do it; no one knows how.

I find this perplexing.  Are the innovation, operational efficiency, and entrepreneurial spirit of American corporations insufficient to crack the nut of impact measurement?

Without a doubt, measuring impact is difficult—but not for the reasons one might expect.  Perhaps the greatest challenge is defining what one means by impact.  This venerable concept has become a buzzword, signifying both more and less than it should for different people in different settings.  Clarifying what we mean simplifies the task of measurement considerably.  In this setting, two meanings dominated the discussion.

One was the intended benefit of a product or service.  Top Hat Monocle’s products are intended to increase learning.  Annie’s foods are intended to promote health.  Evaluators are familiar with this type of impact and how to measure it.  Difficult?  Yes.  It poses practical and technical challenges, to be sure.  Nascent and elusive?  No.  Evaluators have a wide range of tools and techniques that we use regularly to estimate impacts of this type.

The other dominant meaning was the consequences of operations.  Evaluators are probably less familiar with this type of impact.

Consider Levi’s.  In the past, 42 liters of fresh water were required to produce one pair of Levi’s jeans.  According to Paul Dillinger, the company has since produced about 13 million pairs using a more water-efficient process, reducing the total water required for these jeans from roughly 546 million liters to 374 million liters—an estimated savings of 172 million liters.

Is that a lot?  The Institute of Medicine estimates that one person requires about 1,000 liters of drinking water per year (2.2 to 3 liters per day, making a variety of assumptions)—so Levi’s saved enough drinking water for about 172,000 people for one year.  Not bad.
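For readers who want to check the arithmetic, here is a minimal back-of-the-envelope sketch in Python.  It uses only the rounded figures cited above (42 liters per pair, 13 million pairs, 374 million liters used, 1,000 liters of drinking water per person per year); those figures are this post's assumptions, not official Levi’s or Institute of Medicine data.

    # Back-of-the-envelope check of the water figures cited above.
    # All constants are rounded assumptions taken from the text, not official data.
    LITERS_PER_PAIR_OLD = 42                  # liters per pair, old process
    PAIRS_PRODUCED = 13_000_000               # pairs made with the newer process
    TOTAL_LITERS_NEW = 374_000_000            # liters used for those pairs
    DRINKING_LITERS_PER_PERSON_YEAR = 1_000   # roughly 2.2-3 liters per day

    total_liters_old = LITERS_PER_PAIR_OLD * PAIRS_PRODUCED    # ~546 million liters
    liters_saved = total_liters_old - TOTAL_LITERS_NEW         # ~172 million liters
    person_years_saved = liters_saved / DRINKING_LITERS_PER_PERSON_YEAR
    person_years_used = TOTAL_LITERS_NEW / DRINKING_LITERS_PER_PERSON_YEAR

    print(f"Water saved: {liters_saved:,.0f} liters")
    print(f"Person-years of drinking water saved: {person_years_saved:,.0f}")
    print(f"Person-years of drinking water still used: {person_years_used:,.0f}")

Running this reproduces the 172,000 person-year savings above and the 374,000 person-year figure discussed next.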

But operational impact is more complex than that.  Levi’s still used the equivalent yearly drinking water for 374,000 people in places where potable water may be in short supply.  The water that was saved cannot be easily moved where it may be needed more for drinking, irrigation, or sanitation.  If the water that is used for the production of jeans is not handled properly, it may contaminate larger supplies of fresh water, resulting in a net loss of potable water.  The availability of more fresh water in a region can change behavior in ways that negate the savings, such as attracting new industries that depend on water or inducing wasteful water consumption practices.

Is it difficult to measure operational impact?  Yes.  Even estimating something as tangible as water use is challenging.  Elusive?  No.  We can produce impact estimates, although they may be rough.  Nascent?  Yes and no.  Measuring operational impact depends on modeling systems, testing assumptions, and gauging human behavior.  Evaluators have a long history of doing these things, although not in combination for the purpose of measuring operational impact.

It seems to me that evaluators and corporations could learn a great deal from each other.  It is a shame these two worlds are so widely separated.

Designing Corporate Social Responsibility Programs

With all the attention given to estimating the value of corporate social responsibility programs, the values underlying them were not fully explored.  Yet the varied and often conflicting values of shareholders and stakeholders pose the most significant challenge facing those designing these programs.

Why do I say that?  Because it has been that way for over 100 years.

The concept of corporate social responsibility has deep roots.  In 1909, William Tolman wrote about a trend he observed in manufacturing.  Many industrialists, by his estimation, were taking steps to improve the working conditions, pay, health, and communities of their employees.  He noted that these unprompted actions had various motives—a feeling that workers were owed the improvements, unqualified altruism, or the belief that the efforts would lead to greater profits.

Tolman placed a great deal of faith in the last motive.  Too much faith.  Twentieth-century industrial development was not characterized by rational, profit-maximizing companies competing to improve the lot of stakeholders in order to increase the wealth of shareholders.  On the contrary, making the world a better place typically entailed tradeoffs that shareholders found unacceptable.

So these early efforts failed.  The primary reason was that their designs did not align the values of shareholders and stakeholders.

Can the values of shareholders and stakeholders be more closely aligned today?  I believe they can be.  The founders of many new ventures, like Top Hat Monocle and Innova Dynamics, bring different values to their enterprises.  For them, Tolman’s nobler motives—believing that people deserve a better life and a desire to do something decent in the world—are the cornerstones of their company cultures.  Even in more established organizations—Safeway and Levi’s—there appears to be a cultural shift taking place.  And many venture capital firms are willing to take a patient capital approach, waiting longer and accepting lower returns, if it means they can promote a greater social good.

This is change for the better.  But I wonder if we, like Tolman, are putting too much faith in win-win scenarios in which we imagine shareholders profit and stakeholders benefit.

It is tempting to conclude that corporate social responsibility programs are win-win.  The most visible examples, like those presented at this conference, are.  What lies outside of our field of view, however, are the majority of rational, profit-seeking corporations that are not adopting similar programs.  Are we to conclude that these enterprises are not as rational as they should be? Or have we yet to design corporate responsibility programs that resolve the shareholder-stakeholder tradeoffs that most companies face?

Again, there seems to be a great deal that program designers, who are experienced at balancing competing values, and corporations can learn from each other…if only the two worlds met.


Conference Blog: The Harvard Social Enterprise Conference (Day 2)

What follows is a second series of short posts written while I attended the Social Enterprise Conference (#SECON12).  The conference (February 25-26) was presented by the Harvard Business School and the Harvard Kennedy School.

I spent much of the day attending a session entitled ActionStorm: A Workshop on Designing Actionable Innovations.  Suzi Sosa, Executive Director of the Dell Social Innovation Challenge, did a great job of introducing a design thinking process for those developing new social enterprises.  I plan to blog more about design thinking in a future post.

The approach presented in the workshop combined basic design thinking activities (mind mapping, logic modeling, and empathy mapping) that I believe can be of value to program evaluators as well as program designers.

I wonder, however, how well these methods fit the world of grant-funded programs.  Increasingly, the guidelines for grant proposals put forth by funding agencies specify the core elements of a program’s design.  It is common for funders to specify the minimum number of contact hours, desired length of  service, and required service delivery methods.  When this is the case, designers may have little latitude to innovate, closing off opportunities to improve quality and efficiency.

Evaluation moment #4: Suzi constantly challenged us to specify how, where, and why we would measure the impact of the social enterprises we were discussing.  It was nice to see someone advocating for evaluation “baked into” program designs.  The participants were receptive, but they seemed somewhat daunted by the challenge of measuring impact.

Next, I attended Taking Education Digital: The Impact of Sharing Knowledge.  Chris Dede (Harvard Graduate School of Education) moderated.  I have always found his writing insightful and thought provoking, and he did not disappoint today.  He provided a clear, compelling call for using technology to transform education.

His line of reasoning, as I understand it, is this: the educational system, as currently structured, lacks the capacity to meet federal and state mandates to increase (1) the quality of education delivered to students and (2) desired high school and college graduation rates.  Technology can play a transformational role by increasing the quality and capacity of the educational system.

Steve Carson followed by describing his work with MIT OpenCourseWare, which illustrated very nicely the distinction between innovation and transformation.  MIT OpenCourseWare was, at first, a humble idea: use the web to make it easier for MIT students and faculty to share learning-related materials.  Useful, but not innovative (as the word is typically used).

It turned out that the OpenCourseWare materials were being used by a much larger, more diverse group of formal and informal learners for wonderful, unanticipated educational purposes.  So without intending to, MIT had created a technology with none of the trappings of innovation yet tremendous potential to be transformational.

The moral of the story: social impact can be achieved in unexpected ways, and in cultures that value innovation, the most unexpected way is to do something unexceptional exceptionally well.

Next, Chris Sprague (OpenStudy) discussed his social learning startup.  OpenStudy connects students to each other (so far 150,000 from 170 countries) in ways that promote learning.  Think of it as a worldwide study hall.

Social anything is hot in the tech world, but this is more than Facebook dressed in a scholar’s robe.  The intent is to create meaningful interactions around learning, tap expertise, and spark discussions that build understanding.  Think about how much you can learn about a subject simply by having a cup of coffee with an expert.  Imagine how much more you could learn if you were connected to more experts and did not need to sit next to them in a cafe to communicate.

The Pitch for Change took place in the afternoon.  It was the culmination of a process in which young social entrepreneurs give “elevator pitches” describing new ventures.  Those with the best pitches are selected to move on to the next round, where they make another pitch.

To my eyes, the final round combined the most harrowing elements of job interviews and Roman gladiatorial games: one person enters the arena, fights for survival for three minutes, and then looks to the crowd for thumbs up or down (see the picture at the top of this entry).  Of course, they don’t use thumbs; that would be too BC (before connectivity).  Instead, they use smartphones to vote via the web.

At the end, the winners were given big checks (literally, the checks were big; the dollar amounts, not so much).

But winners receive more than a little seed capital.  The top two winners are fast-tracked to the semifinal round of the 2013 Echoing Green Fellowship, the top four winners are fast-tracked to the semifinal round of the 2012 Dell Social Challenge, and the project that best makes use of technology to solve a social or environmental problem wins the Dell Technology Award.  Not bad for a few minutes in the arena.

Afterward, Dr. Judith Rodin, President of the Rockefeller Foundation, made the afternoon keynote speech, which focused on innovation.  She is a very good speaker and the audience was eager to hear about the virtues of new ideas.  It went over well.

Evaluation moment #5: Dr. Rodin made the case for measuring social impact.  She described it as essential to traditional philanthropy and to more recent efforts around social impact investing.  She noted that Rockefeller is developing its capacity in this area; however, evaluation remains a tough nut to crack.

The last session of the day was fantastic, and not just because an evaluator was on the panel.  It was entitled If at First You Don’t Succeed: The Importance of Prototyping and Iteration in Poverty Alleviation.  Prototyping is not just a subject of interest for me; it is a way of life.

Mike North (ReAllocate) discussed how he leverages volunteers (individuals and corporations) to prototype useful, innovative products.  In particular, he described his ongoing efforts to prototype an affordable corrective brace for children in developing countries who are born with clubfoot.  You can learn more about it in this video.

Timothy Prestero (Design that Matters) walked us through the process he used to prototype the Firefly.  About 60% of newborns in developing countries suffer from jaundice, and about 10% of these go on to suffer brain damage or another disability.  The treatment is simple: exposure to blue light.  Firefly is the light source.

What is so hard about designing a lamp that shines a blue light?  Human behavior.

For example, hospital workers often put more than one baby in the same phototherapy device, which promotes the spread of infectious disease.  Consequently, Firefly needed to be designed in such a way that only one baby could be treated at a time.  It also needed to be inexpensive in order to address the root cause of the problem behavior: too few devices in hospitals.  Understanding these behaviors, and designing with them in mind, requires lengthy prototyping.

Molly Kinder described her work at Development Innovation Ventures (DIV), a part of USAID.  DIV provides financial and other support to innovative projects selected through a competitive process.  In many ways, it looks more like a new-style venture fund than part of a government agency.  And DIV rigorously evaluates the impact of the projects it supports.

Evaluation moment #6: Wow, here is a new-style funder routinely doing high-quality evaluations (including but not limited to randomized controlled trials) in order to scale projects strategically.

Shawn Powers, from the Jameel Poverty Action Lab at MIT (J-PAL), talked about J-PAL’s efforts to conduct randomized trials in developing countries.  Not surprisingly, I am a big fan of J-PAL, which is dedicated to finding effective ways of improving the lives of the poor and bringing them to scale.

Looking back on the day:  The tight connection between design and evaluation was a prominent theme.  While exploring the theme, the discussion often turned to how evaluation can help social enterprises scale up.  It seems to me that we first need to scale up rigorous evaluation of social enterprises.  The J-PAL model is a good one, but it isn’t possible for academic institutions to scale up quickly enough, or grow large enough, to meet the need.  So what do we do?


Conference Blog: The Harvard Social Enterprise Conference (Day 1)

What follows is a series of short posts written while I attended the Social Enterprise Conference (#SECON12).  The conference (February 25-26) was presented by the Harvard Business School and the Harvard Kennedy School.

What is a social enterprise?

The concept of a social enterprise is messy.  By various definitions, it can include:

  • a for-profit company that seeks to benefit society;
  • a nonprofit organization that uses business-like methods;
  • a foundation that employs market investing principles; and
  • a government agency that leverages the work of private-sector partners.

The concept of a social enterprise is disruptive. It blurs the lines separating organizations that do good for stakeholders, do well for shareholders, and do right by constituents.

The concept of a social enterprise is inspiring.  It can foster flexible, creative solutions to our most pressing problems.

The concept of a social enterprise is dangerous.  It can attach the patina of altruism to organizations motivated solely by profits.

The concept of a social enterprise is catching fire.  The evaluation community needs to learn how evaluation fits into this increasingly common type of organization.

The conference started with a young entrepreneurs keynote panel that was moderated by Daniel Epstein (Unreasonable Institute).

Kavita Shukla of Fenugreen discussed the product she invented.  Amazing.  It is a piece of paper permeated with organic, biodegradable herbs.  So what?  It keeps produce fresh 2-4 times longer.  The potential social and financial impact of the product—especially in parts of the world where food is in short supply and refrigeration scarce—is tremendous. Watch a TED talk about it here.

Next, Taylor Conroy (Destroy Normal Consulting) discussed his fundraising platform that allows people to raise $10,000 in three hours for projects like building schools in developing countries.  Sound crazy?  Check it out here and decide for yourself.

Finally, Lauren Bush (FEED Projects) discussed how she has used the sale of FEED bags and other fashion items to provide over 60 million meals for children in need around the world.

Evaluation moment #1: The panelists were asked how they measured the social impact of their enterprises.  Disappointingly, they do not seem to be doing so in a systematic way beyond counting units of service provided or number of products sold—a focus on outputs, not outcomes.

The first session I attended had the provocative title Social Enterprise: Myth or Reality?: Measuring Social Impact and Attracting Capital. Jim Bildner did an outstanding job as moderator.  Panelists included Kimberlee Cornett (Kresge Foundation), Clara Miller (F. B. Heron Foundation), Margaret McKenna (Harvard Kennedy School), and David Wood (Hauser Center for Nonprofit Organizations).

The discussion addressed three questions.

Q: What is social enterprise?

A: It apparently can be anything, but it should be something that is more precisely defined.

Q: How are foundations and financial investors getting involved?

A: By making loans and taking equity stakes in social enterprises.  That promotes social impact through the enterprise and generates more cash to invest in other social enterprises.

Evaluation moment #2: Q: How can the social impact of enterprises be measured?

A: It isn’t.  One panelist suggested that measuring social impact is such a tough nut to crack that, if someone could figure out how, it would make for a fantastic new social enterprise.  I was both shocked and flattered, given I have been doing just that for decades.  Why were there no evaluators on this panel?


Ami Dalal and Jo-Ann Tan of Acumen Fund conducted a “bootcamp” on the approach their firm uses to make social investments.  They focused on methods of due diligence and valuation (that is, how they attach a dollar value to a social enterprise).

I found their approach to measuring the economic impact of their investments very interesting—perhaps evaluators would benefit from learning more about it.  There are details at their website.

Evaluation moment #3

When the topic of measuring the social impact of their investments came up, the presenters provided the most direct answer I have heard so far.  They always measure outputs—those are easy to measure and can indicate if something is going wrong.  In some cases they also measure outcomes (impacts) using randomized controlled trials.  Given the cost, they do this infrequently.

Looking back on the day

A social enterprise that measures social impact but does not measure financial success would be considered ridiculous.  Yet a social enterprise that measures financial success but does not measure social impact is not.  Why?


Evaluation Capacity Building at the African Evaluation Association Conference (#3)

From Tarek Azzam in Accra, Ghana: Yesterday was the first day of the AfrEA Conference and it was busy.  I, along with a group of colleagues, presented a workshop on developing evaluation capacity.  It was well attended—almost 60 people—and the discussion was truly inspiring.  Much of our conversation related to how development programs are typically evaluated by experts who are not only external to the organization, but external to the country.  Out-of-country evaluators typically know a great deal about evaluation, and often they do a fantastic job, but their cultural competencies vary tremendously, severely limiting the utility of their work.  When out-of-country evaluators complete their evaluations, they return home and their evaluation expertise leaves with them.  Our workshop participants said they wanted to build evaluation capacity in Africa for Africans because it was the best way to strengthen evaluations and programs.  So we facilitated a discussion of how to make that happen.

At first, the discussion was limited to what participants believed were the deficits of local African evaluators.  This continued until one attendee stood up and passionately described what local evaluators bring to an evaluation that is unique and advantageous.   Suddenly, the entire conversation turned around and participants began discussing how a deep understanding of local contexts, governmental systems, and history improves every step of the evaluation process, from the feasibility of designs to the use of results.  This placed the deficiencies of local evaluators listed previously—most of which were technical—in crisp perspective.  You can greatly advance your understanding of quantitative methods in a few months; you cannot expect to build a deep understanding of a place and its people in the same time.

The next step is to bring the conversation we had in the workshop to the wider AfrEA Conference.  I will begin that process in a panel discussion that takes place later today. My objective is to use the panel to develop a list of strategic principles that can guide future evaluation capacity building efforts. If the principles reflect the values, strengths, and knowledge of those who want to develop their capacity, then the principles can be used to design meaningful capacity building efforts.  It should be interesting—I will keep you posted.


The African Evaluation Association Conference Begins (#2)

From Tarek Azzam in Accra, Ghana: The last two days have been hectic on many fronts.  Matt and I spent approximately 4 hours on Monday trying to work out technical bugs.  Time well spent, as it looks like we will be able to stream parts of the conference live.  You can find the schedule and links here.

I have had the chance to speak with many conference participants from across Africa at various social events.  In almost every conversation the same issue keeps emerging—the disconnect between what donors expect to see on the ground (and expect to be measured) and what grantees are actually seeing on the ground (and do not believe they can measure). Although this is a common issue in the US where I do much of my work, it appears to be more pronounced in the context of development programs.

This tension is a source of frustration for many of the people with whom I speak—they truly believe in the power of evaluation to improve programs, promote self-reflection, and achieve social change. However, demands from donors have pushed them to focus on evaluation questions and measures that are not necessarily useful to their programs or the people their programs benefit.  I am interested in speaking with some of the donors attending the conference to get their perspective on this issue. I believe that donors may be looking for impact measures that can be aggregated across multiple grantees, and this may lead to the selection of measures that are less relevant to any single grantee, hence the tension.

I plan on keeping you updated on further conversations and discussions as they occur. Tomorrow I will be helping to conduct a workshop on building evaluation capacity within Africa, and really engaging participants as they help us come up with a list of competencies and capacities that are uniquely relevant to the development/African context. Based on the lively conversations I have had so far, I anticipate a rich and productive exchange of ideas tomorrow.  I will share them with you as soon as I can.


From the African Evaluation Association Conference (#1)

Hello, my name is Tarek Azzam, and I am an Assistant Professor at Claremont Graduate University.  Over the next few days I will blog about my experiences at the 6th Biennial AfrEA Conference in Accra, Ghana.  The theme of the conference is “Rights and Responsibility in Development Evaluation.”  As I write this, I await the start of the conference tomorrow, January 9.

The conference is hosted by the African Evaluation Association (AfrEA) and co-organized by the Ghana Monitoring & Evaluation Forum (GMEF).  For those who live or work outside of Africa, these may be unfamiliar organizations.  I encourage you to learn more about them and other evaluation associations around the world through the International Organisation for Cooperation in Evaluation (IOCE).

Ross Conner, Issaka Traore, Sulley Gariba, Marie Gervais, and I will present a half-day workshop on developing evaluation capacity within Africa, along with a panel discussion.

I am also working with Matt Galen to broadcast some of the conference’s keynote sessions over the internet and share them with others.  I will send links as they become available.

I am very excited about the start of the conference.  It is a new venue for me and I look forward to sharing my experiences with you.
