Over my morning coffee, I found myself staring at this bulldog graph in the New York Times Magazine (12/11/11). Something was wrong. At first I couldn’t put my finger on it. Then it hit me—the relative size of the two bulldogs couldn’t possibly be correct.
I did a little forensic data analysis (that is, I used a ruler to measure the bodies of the bulldogs and for each computed the area of an ellipse—it turns out that geometry is useful). As I suspected, the area of the bulldog on the right is too small. If the bulldog on the left represents 58% of responses, then the bulldog on the right represents only about 30%. Oops.
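The ruler check takes only a few lines of arithmetic. The measurements below are hypothetical stand-ins (the post doesn't give the actual figures); the point is the method: treat each bulldog's body as an ellipse, compute its area from the two semi-axes, and see what share the smaller area implies relative to the 58% figure.

```python
import math

def ellipse_area(semi_major, semi_minor):
    """Area of an ellipse from its two semi-axes: pi * a * b."""
    return math.pi * semi_major * semi_minor

# Hypothetical ruler measurements (cm), chosen only to illustrate the check.
left = ellipse_area(4.0, 2.5)    # bulldog drawn for the 58% response
right = ellipse_area(2.9, 1.8)   # the smaller bulldog

# If area encoded the data, the right bulldog's implied share would be:
implied_share = 58 * (right / left)
print(round(implied_share, 1))   # lands near 30, not the intended value
```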
Here is how large the bulldog should be. Quite a difference.
The heights of the bulldogs, as they originally appeared, were proportionally correct. That made me wonder. When you resize an image in most software, it scales the length and width proportionally, so the area changes by the square of that factor, not by the factor itself. Is that how the error was made?
The Times wouldn’t make a mistake like that, I reasoned. Maybe the image was supposed to be a stylized bar graph (but in that case the width of the images should have remained constant). In any event, the graph addressed a trivial topic. I went back to my coffee confident in my belief that the New York Times would never make such a blunder on an important issue.
Then I turned the page and found this.
The same mistake. This time the graph was related to an article on US banking policy, hardly a trivial topic. The author wanted to impress upon the reader that a few banks control a substantial share of the market. A pity the image shows the market shares to be far smaller than they really are.
The image below illustrates the nature of the error—mistaking a proportional change in a circle’s diameter for a proportional change in its area.
Below is a comparison of the original inaccurate graph and an accurate revision that I quickly constructed. Note that the area of the original red circle represents a market share of just under 20%—less than half its nominal value.
And here is an alternative graphic using squares instead of circles. A side-by-side presentation of Venn diagrams may allow readers to compare the relative size of market shares more easily than overlaid shapes.
I did a bit of digging and found that another person had noticed the error and pointed it out to the Times online (you really need to dig to find the comment). To date, the inaccurate graph is still on the Times website, and I have not found a printed correction.
Apparently, checking facts does not include checking math.
Data-Free Evaluation
George Bernard Shaw quipped, “If all economists were laid end to end, they would not reach a conclusion.” However, economists should not be singled out on this account — there is an equal share of controversy awaiting anyone who uses theories to solve social problems. While there is a great deal of theory-based research in the social sciences, it tends to be more theory than research, and with the universe of ideas dwarfing the available body of empirical evidence, there tends to be little if any agreement on how to achieve practical results. This was summed up well by another master of the quip, Mark Twain, who observed that the fascinating thing about science is how “one gets such wholesale returns of conjecture out of such a trifling investment of fact.”
Recently, economists have been in the hot seat because of the stimulus package. However, it is the policymakers who depended on economic advice who are sweating because they were the ones who engaged in what I like to call data-free evaluation. This is the awkward art of judging the merit of untried or untested programs. Whether it takes the form of a president staunching an unprecedented financial crisis, funding agencies reviewing proposals for new initiatives, or individuals deciding whether to avail themselves of unfamiliar services, data-free evaluation is more the rule than the exception in the world of policies and programs.