Tragic Graphic: The Wall Street Journal Lies with Statistics?

Believe it or not, the Wall Street Journal provides another example of an inaccurate circular graph.  This time the error so closely parallels an example from Darrell Huff’s classic How to Lie with Statistics that I find myself wondering—intentional deception or innocent blunder?

The image above comes from Huff’s book.  The moneybag on the left represents the average weekly salary of carpenters in the fictional country of Rotundia.  The bag on the right, the average weekly salary of carpenters in the US.

Based on the graph, how much more do carpenters in the US earn?  Twice?  Three times?  Four times?  More?

The correct answer is that they earn twice as much, but the graph gives the impression that the difference is greater than that.  The heights of the bags are proportionally correct but their areas are not.  Because we tend to focus on the areas of shapes, graphics like this can easily mislead readers.
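The geometry behind the trick is worth spelling out.  Scale a drawing's height by some factor and, to keep its shape, its width scales by the same factor, so its area grows by the *square* of that factor.  A minimal sketch of the arithmetic:

```python
def scaled_area_ratio(value_ratio):
    """Area ratio when a drawing's height AND width are both scaled
    by value_ratio (as happens when an image keeps its aspect ratio)."""
    return value_ratio ** 2

# US carpenters earn twice as much as Rotundia's...
print(scaled_area_ratio(2))  # 4 -- the moneybag covers four times the area
```

That is why a salary that merely doubled can look like it quadrupled.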

Misleading the reader, of course, was Huff’s intention.  As he put it:

…I want you to infer something, to come away with an exaggerated impression, but I don’t want to be caught at my tricks.

What were the intentions of the Wall Street Journal this Saturday (1/21/2012) when it previewed Charles Murray’s new book Coming Apart?

In the published preview, Murray made a highly qualified claim—the median family income across 14 of the most elite places to live rose from $84,000 in 1960 to $163,000 in 2000, after adjusting incomes to reflect today’s purchasing power.

Those cumbersome qualifications take the oomph right out of the claim.  Too long to be a provocative sound bite, it was refashioned by the Journal into a provocative sight bite.  Wow, those incomes really grew!

But not as much as the graph suggests.  The text states that the median income just about doubled.  The picture indicates that it quadrupled.  It’s Huff’s moneybag trick—even down to the relative proportion of salaries!

Here is a comparison of the inaccurate graph with an accurate version I constructed.  The accurate graph is far less provocative.

As a rule, the areas of circles are difficult for people to compare by eye.  In fact, using the area of any two-dimensional shape to represent one-dimensional data is probably a bad idea.  Not only do interpretations vary depending on the shape that is used, but they vary depending on the relative placement of the shapes.

To illustrate these points, here are six alternative representations of Murray’s data.  Which, if any, are lies?


Filed under Design, Visualcy

The African Evaluation Association Conference Comes to a Close (#4)

From Tarek Azzam in Accra, Ghana: I have had the opportunity to attend many conferences in the US, Canada, Europe, and Australia.  All were informative and invigorating in their own way, but the AfrEA conference was different.  The issues facing the African continent are immense.  Yet I was continually uplifted by the determination, skill, and caring of the people working to make a difference.  It reminded me that evaluation can be more than an academic exercise or bureaucratic requirement.  Evaluation can be a fundamental tool for development that carries with it our future aspirations for democracy, equity, and human rights.

This is exemplified by the evaluation efforts of Slum Dwellers International (SDI), an organization in which evaluations are carried out by and for the people living in 35 different slums across the world. SDI mobilizes its members through the practice of evaluation—they collect interviews, surveys, and other forms of data and use that information to directly negotiate with governments to improve the conditions of their communities.  SDI has over 4 million members who have created a culture in which evaluation knowledge is power.

I was intrigued by the World Bank’s evaluation capacity building efforts.  Along with other partnering organizations, the Bank is working on a new initiative to establish evaluation training centers across the African continent and other developing regions.  These CLEAR centers (Regional Centers for Learning on Evaluation and Results) may eventually become degree-granting programs offering MAs and perhaps even PhDs in monitoring and evaluation.  There appears to be support for the initiative, but it remains to be seen what the final program will look like.

My fellow presenters and I had the opportunity to share the results of our workshop as part of a conference panel. The session was not as well attended as the workshop (approximately 10 people) but the conversations were productive. We discussed the list of evaluator competencies and principles that we generated.  The reaction was positive and we have been given the responsibility of taking the next step.  It feels like a big step.  There tends to be more talk than action in the development community. I don’t want this to fizzle out.  Thankfully, there are workshop participants and presenters who are eager to push the work forward with me.

Now that the conference is over, I have been reflecting on the experience.  More than ever I believe that we, as a field, can have an enormous impact on the governments, institutions, communities, and people dedicated to improving the lives of others.  That is why I got into evaluation, and the last few days have reinforced my commitment to the field.

Thank you for following my blog posts.  And thank you to John Gargani for giving me the opportunity to share my experiences at AfrEA.


Filed under Evaluation

Evaluation Capacity Building at the African Evaluation Association Conference (#3)

From Tarek Azzam in Accra, Ghana: Yesterday was the first day of the AfrEA Conference and it was busy.  I, along with a group of colleagues, presented a workshop on developing evaluation capacity.  It was well attended—almost 60 people—and the discussion was truly inspiring.  Much of our conversation related to how development programs are typically evaluated by experts who are not only external to the organization, but external to the country.  Out-of-country evaluators typically know a great deal about evaluation, and often they do a fantastic job, but their cultural competencies vary tremendously, severely limiting the utility of their work.  When out-of-country evaluators complete their evaluations, they return home and their evaluation expertise leaves with them.  Our workshop participants said they wanted to build evaluation capacity in Africa for Africans because it was the best way to strengthen evaluations and programs.  So we facilitated a discussion of how to make that happen.

At first, the discussion was limited to what participants believed were the deficits of local African evaluators.  This continued until one attendee stood up and passionately described what local evaluators bring to an evaluation that is unique and advantageous.   Suddenly, the entire conversation turned around and participants began discussing how a deep understanding of local contexts, governmental systems, and history improves every step of the evaluation process, from the feasibility of designs to the use of results.  This placed the deficiencies of local evaluators listed previously—most of which were technical—in crisp perspective.  You can greatly advance your understanding of quantitative methods in a few months; you cannot expect to build a deep understanding of a place and its people in the same time.

The next step is to bring the conversation we had in the workshop to the wider AfrEA Conference.  I will begin that process in a panel discussion that takes place later today. My objective is to use the panel to develop a list of strategic principles that can guide future evaluation capacity building efforts. If the principles reflect the values, strengths, and knowledge of those who want to develop their capacity, then the principles can be used to design meaningful capacity building efforts.  It should be interesting—I will keep you posted.


Filed under Conference Blog, Evaluation, Program Evaluation

The African Evaluation Association Conference Begins (#2)

From Tarek Azzam in Accra, Ghana: The last two days have been hectic on many fronts.  Matt and I spent approximately 4 hours on Monday trying to work out technical bugs.  Time well spent as it looks like we will be able to stream parts of the conference live.  You can find the schedule and links here.

I have had the chance to speak with many conference participants from across Africa at various social events.  In almost every conversation the same issue keeps emerging—the disconnect between what donors expect to see on the ground (and expect to be measured) and what grantees are actually seeing on the ground (and do not believe they can measure). Although this is a common issue in the US where I do much of my work, it appears to be more pronounced in the context of development programs.

This tension is a source of frustration for many of the people with whom I speak—they truly believe in the power of evaluation to improve programs, promote self-reflection, and achieve social change. However, demands from donors have pushed them to focus on evaluation questions and measures that are not necessarily useful to their programs or the people their programs benefit.  I am interested in speaking with some of the donors attending the conference to get their perspective on this issue. I believe that donors may be looking for impact measures that can be aggregated across multiple grantees, and this may lead to the selection of measures that are less relevant to any single grantee, hence the tension.

I plan on keeping you updated on further conversations and discussions as they occur. Tomorrow I will be helping to conduct a workshop on building evaluation capacity within Africa, and really engaging participants as they help us come up with a list of competencies and capacities that are uniquely relevant to the development/African context. Based on the lively conversations I have had so far, I anticipate a rich and productive exchange of ideas tomorrow.  I will share them with you as soon as I can.


Filed under Conference Blog, Evaluation, Program Evaluation

From the African Evaluation Association Conference (#1)

Hello my name is Tarek Azzam, and I am an Assistant Professor at Claremont Graduate University. Over the next few days I will blog about my experiences at the 6th Biennial AfrEA Conference in Accra, Ghana.  The theme of the conference is “Rights and Responsibility in Development Evaluation.”  As I write this, I await the start of the conference tomorrow, January 9.

The conference is hosted by the African Evaluation Association (AfrEA) and co-organized by the Ghana Monitoring & Evaluation Forum (GMEF).  For those who live or work outside of Africa, these may be unfamiliar organizations.  I encourage you to learn more about them and other evaluation associations around the world through the International Organisation for Cooperation in Evaluation (IOCE).

Ross Conner, Issaka Traore, Sulley Gariba, Marie Gervais, and I will present a half day workshop on developing evaluation capacity within Africa, along with a panel discussion.

I am also working with Matt Galen to broadcast via the internet some of the keynote sessions at the conference and share them with others.  I will send links as they become available.

I am very excited about the start of the conference.  It is a new venue for me and I look forward to sharing my experiences with you.


Filed under Conference Blog, Evaluation, Program Evaluation

Tragic Graphic: The New York Times Checks Facts, Not Math

Over my morning coffee, I found myself staring at this bulldog graph in the New York Times Magazine (12/11/11).  Something was wrong.  At first I couldn’t put my finger on it.  Then it hit me—the relative size of the two bulldogs couldn’t possibly be correct.

I did a little forensic data analysis (that is, I used a ruler to measure the bodies of the bulldogs and for each computed the area of an ellipse—it turns out that geometry is useful).  As I suspected, the area of the bulldog on the right is too small.  If the bulldog on the left represents 58% of responses, then the bulldog on the right represents only about 30%.  Oops.
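The arithmetic can be reconstructed.  Suppose the two bulldogs represent 58% and 42% of responses (the 42% is my assumption; it makes the shares sum to 100 and matches my measurement).  If the heights were drawn proportional to the values, the areas scale as the square of the height ratio, and the smaller bulldog's area understates its share:

```python
left, right = 58, 42  # percentages; the 42 is an assumed value for illustration

# Heights drawn proportional to the values => areas scale as the SQUARE of
# the height ratio. The area-implied value of the right bulldog is then:
apparent = left * (right / left) ** 2
print(round(apparent, 1))  # 30.4 -- roughly the "about 30%" measured above
```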

Here is how large the bulldog should be.  Quite a difference.

The heights of the bulldogs, as they originally appeared, were proportionally correct.  That made me wonder.  When you resize an image in most software, the length and width scale together to preserve the aspect ratio, so the area grows as the square of the scale factor.  Is that how the error was made?

The Times wouldn’t make a mistake like that, I reasoned.  Maybe the image was supposed to be a stylized bar graph (but in that case the width of the images should have remained constant).  In any event, the graph addressed a trivial topic.  I went back to my coffee confident in my belief that the New York Times would never make such a blunder on an important issue.

Then I turned the page and found this.

The same mistake.  This time the graph was related to an article on US banking policy, hardly a trivial topic.  The author wanted to impress upon the reader that a few banks control a substantial share of the market.  A pity the image shows the market shares to be far smaller than they really are.

The image below illustrates the nature of the error—confusing proportional change in diameter for a proportional change in the area of a circle.
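A sketch of the fix: to make a circle's area proportional to a value, scale its diameter by the square root of the value ratio, not by the ratio itself.  (The function name here is my own.)

```python
import math

def diameter_for(value, ref_value, ref_diameter):
    """Diameter of a circle whose AREA is proportional to value,
    relative to a reference circle of known diameter."""
    return ref_diameter * math.sqrt(value / ref_value)

# A 40% market share next to a 10% share drawn 100 units wide:
print(diameter_for(40, 10, 100))  # 200.0 -- twice the diameter, 4x the area
# Scaling the diameter by the value ratio instead would draw it
# 400 units wide: 16x the area, grossly overstating the difference.
```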

Below is a comparison of the original inaccurate graph and an accurate revision that I quickly constructed.  Note that the area of the original red circle represents a market share of just under 20%—less than half its nominal value.

And here is an alternative graphic using squares instead of circles.  A side-by-side presentation of Venn diagrams may allow readers to compare the relative size of market shares more easily than overlaid shapes.

I did a bit of digging and found that another person had noticed the error and pointed it out to the Times online (you really have to dig to find the comment).  To date, the inaccurate graph is still on the Times website, and I have not found a printed correction.

Apparently, checking facts does not include checking math.


Filed under Design, Visualcy

Evaluation: An Invisible Giant

When I am asked what I do for a living, I expect that it might take a little explaining.  Most people are unaware of program evaluation, including many who work for organizations that implement programs.

My short answer is that I help clients—nonprofit organizations, foundations, corporations, museums, and schools—determine how effective they are and how they can be more effective.   Often this leads to more questions and longer conversations that I quite enjoy, yet I am left wondering why evaluation is so little known given the size of the field.

How big is the field of evaluation?  Ironically, that is not a statistic that anyone tracks.  To get a handle on it, consider the nonprofit sector, which is closely associated with programs intended to further a social mission.

According to the Urban Institute, there were roughly 1.5 million nonprofit organizations in the United States in 2011, up 25% over the preceding 10 years.  In 2010, nonprofit organizations produced products and services worth roughly $779 billion, or 5.4% of GDP.  As a point of comparison, that is more than the US spends on its military, which accounts for only 4.7% of GDP.

Nonprofits, however, are not the only organizations that implement programs.  Universities, public school systems, government agencies, hospitals, and a growing number of for-profit companies do so as well.  If we take into account all organizations that implement programs—what Paul Light calls social benefit organizations—our prior estimate based on nonprofit organizations alone would easily double or triple.  That means that goods and services produced by the social benefit sector could be on par with those of healthcare—a whopping 16% of GDP.
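The back-of-the-envelope math: doubling or tripling the nonprofit sector's 5.4% of GDP puts the social benefit sector somewhere between roughly 11% and 16% of GDP.

```python
nonprofit_share = 5.4  # % of GDP (the Urban Institute figure cited above)

# Doubling or tripling to account for non-nonprofit program implementers:
low, high = 2 * nonprofit_share, 3 * nonprofit_share
print(f"{low:.1f}% to {high:.1f}% of GDP")  # 10.8% to 16.2% of GDP
```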

Who figures out whether that whopping slice of GDP is benefiting society?  Who helps design the programs represented by that slice?  Who works to build the capacity of social benefit organizations to achieve their missions?  Countless evaluators.  Yet, program evaluation remains hidden from public view, an invisible giant unnoticed by most.  Isn’t it time that changed?


Filed under Evaluation, Program Evaluation

EvalBlog Launched!

Welcome to EvalBlog.  This is where I—and a growing number of guest bloggers—will share our experiences designing and evaluating social, educational, cultural, and environmental programs.  I will draw on my experiences at Gargani + Company.  Guest bloggers will draw on their experiences in various organizations and roles.  Together, we hope to provide a broad view of program design and evaluation.


Filed under Design, Evaluation, Gargani News, Program Design, Program Evaluation

Big Changes in 2012!


A new blog for the new year — EvalBlog.com, launching January 1, 2012.  Join us for a wide-ranging discussion of program design and evaluation, including guest bloggers, conference blogs, and much more!


Filed under Design, Evaluation, Gargani News, Program Design, Program Evaluation

Where am I?

My colleagues and I have been busy developing new products and services.  Learn more about what we’ve been up to when we re-launch on January 1, 2012.  Stay tuned!


Filed under Gargani News