Category Archives: Design

Tragic Graphic: The New York Times Checks Facts, Not Math

Over my morning coffee, I found myself staring at this bulldog graph in the New York Times Magazine (12/11/11).  Something was wrong.  At first I couldn’t put my finger on it.  Then it hit me—the relative size of the two bulldogs couldn’t possibly be correct.

I did a little forensic data analysis (that is, I used a ruler to measure the bodies of the bulldogs and for each computed the area of an ellipse—it turns out that geometry is useful).  As I suspected, the area of the bulldog on the right is too small.  If the bulldog on the left represents 58% of responses, then the bulldog on the right represents only about 30%.  Oops.
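The arithmetic behind that estimate is simple, assuming the right-hand bulldog was meant to represent the remaining 42% of responses (a hypothetical figure; the caption's exact value isn't quoted here): when both dimensions of an image are scaled by the same linear factor, the area scales by that factor squared.

```python
left_share = 58.0   # value represented by the left bulldog (%)
right_share = 42.0  # assumed value for the right bulldog (%)

# If the artist scaled the image linearly (height and width alike),
# the linear scale factor is the ratio of the two values...
scale = right_share / left_share

# ...but the area shrinks by the *square* of that factor, so the
# smaller bulldog's ink represents far less than 42%.
implied_share = left_share * scale**2
print(round(implied_share, 1))  # prints 30.4
```

That squared factor is exactly why the second bulldog reads as roughly 30% rather than 42%.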

Here is how large the bulldog should be.  Quite a difference.

The heights of the bulldogs, as they originally appeared, were proportionally correct.  That made me wonder.  If you change the size of an image in most software, it scales the height and width proportionally, so the area changes with the square of the scale factor rather than in proportion to it.  Is that how the error was made?

The Times wouldn’t make a mistake like that, I reasoned.  Maybe the image was supposed to be a stylized bar graph (but in that case the width of the images should have remained constant).  In any event, the graph addressed a trivial topic.  I went back to my coffee confident in my belief that the New York Times would never make such a blunder on an important issue.

Then I turned the page and found this.

The same mistake.  This time the graph was related to an article on US banking policy, hardly a trivial topic.  The author wanted to impress upon the reader that a few banks control a substantial share of the market.  A pity the image shows the market shares to be far smaller than they really are.

The image below illustrates the nature of the error—confusing proportional change in diameter for a proportional change in the area of a circle.

Below is a comparison of the original inaccurate graph and an accurate revision that I quickly constructed.  Note that the area of the original red circle represents a market share of just under 20%—less than half its nominal value.
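The fix is to scale the radius with the square root of the value, not with the value itself.  A minimal sketch of the two encodings (the 44% share is a hypothetical stand-in, since the exact figure from the graph isn't quoted here):

```python
import math

share = 0.44  # hypothetical market share to be drawn as a circle

# Wrong: make the diameter (or radius) proportional to the value.
# The circle's area then encodes share**2, not share.
area_shown = share**2  # 0.1936 -- just under 20%, less than half the value

# Right: make the radius proportional to sqrt(value), so the
# area is proportional to the value itself.
radius = math.sqrt(share / math.pi)
area_correct = math.pi * radius**2  # recovers 0.44 exactly
```

Plotting libraries trip people up here too: in some APIs a marker's "size" parameter is already an area, in others it is a radius, so it pays to check which one you are scaling.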

And here is an alternative graphic using squares instead of circles.  A side-by-side presentation may allow readers to compare the relative sizes of market shares more easily than overlaid, Venn-style shapes.
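Squares make the same correction easy to verify: give each square a side equal to the square root of the share, and its area equals the share exactly.  (The bank names and share values below are hypothetical.)

```python
import math

shares = {"Bank A": 0.44, "Bank B": 0.20, "Bank C": 0.12}

# Side length proportional to sqrt(share) makes each square's
# area exactly equal to the market share it represents.
sides = {bank: math.sqrt(s) for bank, s in shares.items()}

for bank, side in sides.items():
    print(f"{bank}: side {side:.3f}, area {side**2:.2f}")
```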

I did a bit of digging and found that another person had noticed the error and pointed it out to the Times online (you really have to dig to find the comment).  To date, the inaccurate graph is still on the Times website, and I have not found a printed correction.

Apparently, checking facts does not include checking math.

14 Comments

Filed under Design, Visualcy

EvalBlog Launched!

Welcome to EvalBlog.  This is where I—and a growing number of guest bloggers—will share our experiences designing and evaluating social, educational, cultural, and environmental programs.  I will draw on my experiences at Gargani + Company.  Guest bloggers will draw on their experiences in various organizations and roles.  Together, we hope to provide a broad view of program design and evaluation. Continue reading

1 Comment

Filed under Design, Evaluation, Gargani News, Program Design, Program Evaluation

Big Changes in 2012!


A new blog for the new year — EvalBlog.com, launching January 1, 2012.  Join us for a wide-ranging discussion of program design and evaluation, including guest bloggers, conference blogs, and much more!

Leave a comment

Filed under Design, Evaluation, Gargani News, Program Design, Program Evaluation

Good versus Eval

After another blogging hiatus, the battle between good and eval continues.  Or at least my blog is coming back online as the American Evaluation Association’s Annual Conference in San Antonio (November 10-14) quickly approaches.

I remember that twenty years ago evaluation was widely considered the enemy of good because it took resources away from service delivery.  Now evaluation is widely considered an essential part of service delivery, but the debate over what constitutes a good program and a good evaluation continues.  I will be joining the fray when I make a presentation as part of a session entitled Improving Evaluation Quality by Improving Program Quality: A Theory-Based/Theory-Driven Perspective (Saturday, November 13, 10:00 AM, Session Number 742).  My presentation is entitled The Expanding Profession: Program Evaluators as Program Designers, and I will discuss how program evaluators are increasingly being called upon to help design the programs they evaluate, and why that benefits program staff, stakeholders, and evaluators.  Stewart Donaldson is my co-presenter (The Relationship between Program Design and Evaluation), and our discussants are Michael Scriven, David Fetterman, and Charles Gasper.  If you know these names, you know to expect a “lively” (OK, heated) discussion.

If you are an evaluator in California, Oregon, Washington, New Mexico, Hawaii, any other place west of the Mississippi, or anywhere that is west of anything, be sure to attend the West Coast Evaluators Reception Thursday, November 11, 9:00 pm at the Zuni Grill (223 Losoya Street, San Antonio, TX 78205) co-sponsored by San Francisco Bay Area Evaluators and Claremont Graduate University.  It is a conference tradition and a great way to network with colleagues.

More from San Antonio next week!

2 Comments

Filed under Design, Evaluation Quality, Gargani News, Program Design, Program Evaluation

Data-Free Evaluation


George Bernard Shaw quipped, “If all economists were laid end to end, they would not reach a conclusion.”  However, economists should not be singled out on this account; an equal share of controversy awaits anyone who uses theories to solve social problems.  While there is a great deal of theory-based research in the social sciences, it tends to be more theory than research.  With the universe of ideas dwarfing the available body of empirical evidence, there is little if any agreement on how to achieve practical results.  This was summed up well by another master of the quip, Mark Twain, who observed that the fascinating thing about science is how “one gets such wholesale returns of conjecture out of such a trifling investment of fact.”

Recently, economists have been in the hot seat because of the stimulus package.  However, it is the policymakers who depended on economic advice who are sweating because they were the ones who engaged in what I like to call data-free evaluation.  This is the awkward art of judging the merit of untried or untested programs. Whether it takes the form of a president staunching an unprecedented financial crisis, funding agencies reviewing proposals for new initiatives, or individuals deciding whether to avail themselves of unfamiliar services, data-free evaluation is more the rule than the exception in the world of policies and programs. Continue reading

3 Comments

Filed under Commentary, Design, Evaluation, Program Design, Program Evaluation, Research

Conflicts as Conflicting Theories of the World


Theories are like bellybuttons: everybody has one, and all are surprisingly different.  Last Sunday Scott Atran and Jeremy Ginges wrote an opinion piece for the New York Times in which they described their research on beliefs about conflict and peace in the Middle East.  In brief, they argued that what many outsiders consider rational and logical solutions to the Israeli-Palestinian conflict, insiders consider irrational and illogical.  The reason has largely to do with sacred beliefs.  In spite of the name, these are not religious beliefs, per se, but rather any deeply held beliefs that sit at the core of our world views and are highly resistant to change.

In an earlier post I described beliefs in general as a pile of pick-up sticks, with those most resistant to change, the sacred beliefs, at the bottom of the pile.  Accordingly, altering sacred beliefs in any significant way will disturb all the rest.  At best this is exhausting, at worst traumatic.

Given the variety of beliefs that abound regarding social problems and solutions, it seems that program designers and policymakers are always treading upon someone’s sacred beliefs.  One of the practical questions we have been wrestling with is how to help groups of people with disparate world views reach consensus about programs and policies.  With the approach that we have been developing, we engage a broad range of stakeholders in a simple, iterative process in which they reveal what they believe and why.

Leave a comment

Filed under Commentary, Design, Program Design

Theory Building and Theory-Based Evaluation


When we are convinced of something, we believe it. But when we believe something, we may not have been convinced. That is, we do not come by all our beliefs through conscious acts of deliberation. It’s a good thing, too, for if we examined the beliefs underlying our every action we wouldn’t get anything done.

When we design or evaluate programs, however, the beliefs underlying these actions do merit close examination. They are our rationale, our foothold in the invisible; they are what endow our struggle to change the world with possibility. Continue reading

2 Comments

Filed under Commentary, Design, Evaluation, Program Design, Program Evaluation, Research