Tag Archives: Program Evaluation

A National Holiday and US Postage Stamp for Evaluation

Evaluation is an invisible giant, a profession that impacts society on a grand scale yet remains unseen.  I want to change that.  But what can one person do to raise awareness about evaluation?

Go big.  And you can help.

A National Evaluation Holiday:  With the power vested in me by the greeting card industry, I am declaring…

February 15 is EVALentine’s Day!

This is a day when people around the world share their love of evaluation with each other.  Send a card, an email, or a copy of Guiding Principles for Evaluators to those near and dear, far and wide, internal and external.  Get the word out.  If the idea catches on, imagine how much exposure evaluation would receive.

A US Postage Stamp:  With the power vested in me by stamps.com, I have issued a US postage stamp for EVALentine’s Day.  Other holidays get stamps, why not ours?  The stamp I designed is based on the famous 1973 Valentine’s Day love stamp by Robert Indiana.  Now you can show your love of evaluation on the outside of an EVALentine card as well as the inside.

Here is the best part.

To kick off EVALentine’s Day, I will send an EVALentine’s card and a ready-to-use EVAL stamp to anyone, anywhere in the world.  For free.  Really.

Here is what you need to do.

(1) Visit the Gargani + Company website in the month of February.

(2) Click the Contact link in the upper right corner of the homepage.

(3) Send an email with EVALENTINE in the subject line and a SNAIL MAIL ADDRESS in the body.

(4) NOTE:  This offer is only valid for emails received during February 2012.

Don’t be left out on EVALentine’s Day.  Drop me an email and get the word out!

12 Comments

Filed under Evaluation, Program Evaluation

Evaluation Capacity Building at the African Evaluation Association Conference (#3)

From Tarek Azzam in Accra, Ghana: Yesterday was the first day of the AfrEA Conference and it was busy.  I, along with a group of colleagues, presented a workshop on developing evaluation capacity.  It was well attended—almost 60 people—and the discussion was truly inspiring.  Much of our conversation related to how development programs are typically evaluated by experts who are not only external to the organization, but external to the country.  Out-of-country evaluators typically know a great deal about evaluation, and often they do a fantastic job, but their cultural competencies vary tremendously, severely limiting the utility of their work.  When out-of-country evaluators complete their evaluations, they return home and their evaluation expertise leaves with them.  Our workshop participants said they wanted to build evaluation capacity in Africa for Africans because it was the best way to strengthen evaluations and programs.  So we facilitated a discussion of how to make that happen.

At first, the discussion was limited to what participants believed were the deficits of local African evaluators.  This continued until one attendee stood up and passionately described what local evaluators bring to an evaluation that is unique and advantageous.   Suddenly, the entire conversation turned around and participants began discussing how a deep understanding of local contexts, governmental systems, and history improves every step of the evaluation process, from the feasibility of designs to the use of results.  This placed the deficiencies of local evaluators listed previously—most of which were technical—in crisp perspective.  You can greatly advance your understanding of quantitative methods in a few months; you cannot expect to build a deep understanding of a place and its people in the same time.

The next step is to bring the conversation we had in the workshop to the wider AfrEA Conference.  I will begin that process in a panel discussion that takes place later today. My objective is to use the panel to develop a list of strategic principles that can guide future evaluation capacity building efforts. If the principles reflect the values, strengths, and knowledge of those who want to develop their capacity, then the principles can be used to design meaningful capacity building efforts.  It should be interesting—I will keep you posted.

Leave a comment

Filed under Conference Blog, Evaluation, Program Evaluation

The African Evaluation Association Conference Begins (#2)

From Tarek Azzam in Accra, Ghana: The last two days have been hectic on many fronts.  Matt and I spent approximately 4 hours on Monday trying to work out technical bugs.  Time well spent as it looks like we will be able to stream parts of the conference live.  You can find the schedule and links here.

I have had the chance to speak with many conference participants from across Africa at various social events.  In almost every conversation the same issue keeps emerging—the disconnect between what donors expect to see on the ground (and expect to be measured) and what grantees are actually seeing on the ground (and do not believe they can measure). Although this is a common issue in the US where I do much of my work, it appears to be more pronounced in the context of development programs.

This tension is a source of frustration for many of the people with whom I speak—they truly believe in the power of evaluation to improve programs, promote self-reflection, and achieve social change. However, demands from donors have pushed them to focus on evaluation questions and measures that are not necessarily useful to their programs or the people their programs benefit.  I am interested in speaking with some of the donors attending the conference to get their perspective on this issue. I believe that donors may be looking for impact measures that can be aggregated across multiple grantees, and this may lead to the selection of measures that are less relevant to any single grantee, hence the tension.

I plan on keeping you updated on further conversations and discussions as they occur. Tomorrow I will be helping to conduct a workshop on building evaluation capacity within Africa, and really engaging participants as they help us come up with a list of competencies and capacities that are uniquely relevant to the development/African context. Based on the lively conversations I have had so far, I anticipate a rich and productive exchange of ideas tomorrow.  I will share them with you as soon as I can.

Leave a comment

Filed under Conference Blog, Evaluation, Program Evaluation

From the African Evaluation Association Conference (#1)

Hello, my name is Tarek Azzam, and I am an Assistant Professor at Claremont Graduate University. Over the next few days I will blog about my experiences at the 6th Biennial AfrEA Conference in Accra, Ghana.  The theme of the conference is “Rights and Responsibility in Development Evaluation.”  As I write this, I await the start of the conference tomorrow, January 9.

The conference is hosted by the African Evaluation Association (AfrEA) and Co-Organized by the Ghana Monitoring & Evaluation Forum (GMEF).  For those who live or work outside of Africa, these may be unfamiliar organizations.  I encourage you to learn more about them and other evaluation associations around the world through the International Organisation for Cooperation in Evaluation (IOCE).

Ross Conner, Issaka Traore, Sulley Gariba, Marie Gervais, and I will present a half day workshop on developing evaluation capacity within Africa, along with a panel discussion.

I am also working with Matt Galen to broadcast via the internet some of the keynote sessions at the conference and share them with others.  I will send links as they become available.

I am very excited about the start of the conference.  It is a new venue for me and I look forward to sharing my experiences with you.

Leave a comment

Filed under Conference Blog, Evaluation, Program Evaluation

Santa Cause

I’ve been reflecting on the past year.  What sticks in my mind is how fortunate I am to spend my days working with people who have a cause.  Some promote their causes narrowly, for example, by ensuring that education better serves a group of children or that healthcare is available to the poorest families in a region.  Others pursue causes more broadly, advocating for human rights and social justice.  In the past, both might have been labeled impractical dreamers, utopian malcontents, or, worse, risks to national security.  Yet today they are respected professionals, envied even by those who have achieved great success in more traditional, profit-motivated endeavors.  That’s truly progress.

I also spend a great deal of time buried in the technical details of evaluation—designing research, developing tests and surveys, collecting data, and performing statistical analysis—so I sometimes lose sight of the spirit that animates the causes I serve.  However, it isn’t long before I’m led back to the professionals who, even after almost 20 years, continue to inspire me.  I can’t wait to spend another year working with them.

The next year promises to be more inspiring than ever, and I look forward to sharing my work, my thoughts, and the occasional laugh with all of you in the new year.

Best wishes to all.

John

1 Comment

Filed under Commentary, Evaluation, Gargani News, Program Evaluation

From Evaluation 2010 to Evaluator 911


The West Coast Reception hosted by San Francisco Bay Area Evaluators (SFBAE), Southern California Evaluation Association (SCEA), and Claremont Graduate University (CGU) is an AEA Conference tradition and I look forward to it all year long.  I never miss it (and as Director of SFBAE, I had better not).

But as I was leaving the hotel to head to the reception, my coworker came up to me and whispered, “I am in severe pain—I need to go to the hospital right now.”  Off we went to the closest emergency room where she was admitted, sedated, and subjected to a mind-numbing variety of tests.  After some hours of medical mayhem she called me in to her room and said, “The doctor wants me to rest here while we wait for the test results to come back.  That could take a couple hours.  I’m comfortable and not at any risk, so why don’t you go to the reception?  It’s only two blocks from here.  I’ll call you when we get the test results.”

What a trooper!

So I jogged over to the reception and found that the party was still going strong hours after it was scheduled to close down (that’s a West Coast Reception tradition).  Kari Greene, an OPEN member who may be one of the funniest people on the planet, had us all in stitches as she regaled us with stories of evaluations run amok (other people’s, of course).  Jane Davidson of Genuine Evaluation fame (pictured below) explained that drinking sangria is simple, making sangria is complicated, but making more sangria after drinking a few glasses was complex.  I am not sure what that means, but I saw a lot of heads nodding.  The graduate students in evaluation from CGU were embracing the “opportunivore” lifestyle as they filled their stomachs (and their pockets) with shrimp, empanadas, and canapés.

Then my phone rang—my coworker’s tests were clear and the situation resolved.  I left the party (still going strong) and took her back to the hotel, at which point she said, “I’m glad you made it to the reception—we can’t break the streak.  If you end up in the hospital next year we’ll bring the party to you!”

And that, in a nutshell, is the spirit of the conference—connection, community, and continuity.  Well, that and shrimp in your pockets.

Leave a comment

Filed under AEA Conference, Evaluation, Program Evaluation

The AEA Conference (So Far)

The AEA conference has been great. I have been very impressed with the presentations that I have attended so far, though I can’t claim to have seen the full breadth of what is on offer as there are roughly 700 presentations in total.  Here are a few that impressed me the most.  Continue reading

1 Comment

Filed under AEA Conference, Evaluation Quality, Program Evaluation

AEA 2010 Conference Kicks Off in San Antonio

In the opening plenary of the Evaluation 2010 conference, AEA President Leslie Cooksy invited three leaders in the field—Eleanor Chelimsky, Laura Leviton, and Michael Patton—to speak on The Tensions Among Evaluation Perspectives in the Age of Obama: Influences on Evaluation Quality, Thinking and Values.  They covered topics ranging from how government should use evaluation information to how Jon Stewart of the Daily Show outed himself as an evaluator during his Rally to Restore Sanity/Fear (“I think you know that the success or failure of a rally is judged by only two criteria: the intellectual coherence of the content, and its correlation to the engagement—I’m just kidding.  It’s color and size.  We all know it’s color and size.”)

One piece that resonated with me was Laura Leviton’s discussion of how the quality of an evaluation is related to our ability to apply its results to future programs—what is referred to as generalization.  She presented a graphic that described a possible process for generalization that seemed right to me; it’s what should happen.  But how it happens was not addressed, at least in the short time in which she spoke.  It is no small task to gather prior research and evaluation results, translate them into a small theory of improvement (a program theory), and then adapt that theory to fit specific contexts, values, and resources.  Who should be doing that work?  What are the features that might make it more effective?

Stewart Donaldson and I recently co-authored a paper on that topic that will appear in New Directions for Evaluation in 2011.  We argue that stakeholders are and should be doing this work, and we explore how the logic underlying traditional notions of external validity—considered by some to be outdated—can be built upon to create a relatively simple, collaborative process for predicting the future results of programs.  The paper is a small step toward raising the discussion of external validity (how we judge whether a program will work in the future) to the same level as the discussion of internal validity (how we judge whether a program worked in the past), while trying to avoid the rancor that has been associated with the latter.

More from the conference later.

1 Comment

Filed under AEA Conference, Evaluation Quality, Gargani News, Program Evaluation

Good versus Eval

After another blogging hiatus, the battle between good and eval continues.  Or at least my blog is coming back online as the American Evaluation Association’s Annual Conference in San Antonio (November 10-14) quickly approaches.

I remember that twenty years ago evaluation was widely considered the enemy of good because it took resources away from service delivery.  Now evaluation is widely considered an essential part of service delivery, but the debate over what constitutes a good program and a good evaluation continues.  I will be joining the fray when I make a presentation as part of a session entitled Improving Evaluation Quality by Improving Program Quality: A Theory-Based/Theory-Driven Perspective (Saturday, November 13, 10:00 AM, Session Number 742).  My presentation is entitled The Expanding Profession: Program Evaluators as Program Designers, and I will discuss how program evaluators are increasingly being called upon to help design the programs they evaluate, and why that benefits program staff, stakeholders, and evaluators.  Stewart Donaldson is my co-presenter (The Relationship between Program Design and Evaluation), and our discussants are Michael Scriven, David Fetterman, and Charles Gasper.  If you know these names, you know to expect a “lively” (OK, heated) discussion.

If you are an evaluator in California, Oregon, Washington, New Mexico, Hawaii, any other place west of the Mississippi, or anywhere that is west of anything, be sure to attend the West Coast Evaluators Reception Thursday, November 11, 9:00 pm at the Zuni Grill (223 Losoya Street, San Antonio, TX 78205) co-sponsored by San Francisco Bay Area Evaluators and Claremont Graduate University.  It is a conference tradition and a great way to network with colleagues.

More from San Antonio next week!

2 Comments

Filed under Design, Evaluation Quality, Gargani News, Program Design, Program Evaluation

Quality is a Joke

If you have been following my blog (Who hasn’t?), you know that I am writing on the topic of evaluation quality, the theme of the 2010 annual conference of the American Evaluation Association taking place November 10-13. It is a serious subject. Really.

But here is a joke, though perhaps only the evaluarati (you know who you are) will find it amusing.

    A quantitative evaluator, a qualitative evaluator, and a normal person are waiting for a bus. The normal person suddenly shouts, “Watch out, the bus is out of control and heading right for us! We will surely be killed!”

    Without looking up from his newspaper, the quantitative evaluator calmly responds, “That is an awfully strong causal claim you are making. There is anecdotal evidence to suggest that buses can kill people, but the research does not bear it out. People ride buses all the time and they are rarely killed by them. The correlation between riding buses and being killed by them is very nearly zero. I defy you to produce any credible evidence that buses pose a significant danger. It would really be an extraordinary thing if we were killed by a bus. I wouldn’t worry.”

    Dismayed, the normal person starts gesticulating and shouting, “But there is a bus! A particular bus! That bus! And it is heading directly toward some particular people! Us! And I am quite certain that it will hit us, and if it hits us it will undoubtedly kill us!”

    At this point the qualitative evaluator, who was observing this exchange from a safe distance, interjects, “What exactly do you mean by bus? After all, we all construct our own understanding of that very fluid concept. For some, the bus is a mere machine, for others it is what connects them to their work, their school, the ones they love. I mean, have you ever sat down and really considered the bus-ness of it all? It is quite immense, I assure you. I hope I am not being too forward, but may I be a critical friend for just a moment? I don’t think you’ve really thought this whole bus thing out. It would be a pity to go about pushing the sort of simple linear logic that connects something as conceptually complex as a bus to an outcome as one dimensional as death.”

    Very dismayed, the normal person runs away screaming, the bus collides with the quantitative and qualitative evaluators, and it kills both instantly.

    Very, very dismayed, the normal person begins pleading with a bystander, “I told them the bus would kill them. The bus did kill them. I feel awful.”

    To which the bystander replies, “Tut tut, my good man. I am a statistician and I can tell you for a fact that with a sample size of 2 and no proper control group, how could we possibly conclude that it was the bus that did them in?”

To the extent that this is funny (I find it hilarious, but I am afraid that I may share Sir Isaac Newton’s sense of humor) it is because it plays on our stereotypes about the field. Quantitative evaluators are branded as aloof, overly logical, obsessed with causality, and too concerned with general rather than local knowledge. Qualitative evaluators, on the other hand, are suspect because they are supposedly motivated by social interaction, overly intuitive, obsessed with description, and too concerned with local knowledge. And statisticians are often looked upon as the referees in this cat-and-dog world, charged with setting up and arbitrating the rules by which evaluators in both camps must (or must not) play.

The problem with these stereotypes, like all stereotypes, is that they are inaccurate. Yet we cling to them and make judgments about evaluation quality based upon them. But what if we shift our perspective to that of the (tongue-in-cheek) normal person? This is not an easy thing to do if, like me, you spend most of your time inside the details of the work and the debates of the profession. Normal people want to do the right thing, feel the need to act quickly to make things right, and hope to be informed by evaluators and others who support their efforts. Sometimes normal people are responsible for programs that operate in particular local contexts, and at other times they are responsible for policies that affect virtually everyone. How do we help normal people get what they want and need?

I have been arguing that we should, and that when we do we have met one of my three criteria for quality—satisfaction. The key is first to acknowledge that we serve others, and then to do our best to understand their perspective. If we are weighed down by the baggage of professional stereotypes, it can prevent us from choosing well from among all the ways we can meet the needs of others. I suppose that stereotypes can be useful when they help us laugh at ourselves, but if we come to believe them, our practice can become unaccommodatingly narrow and the people we serve—normal people—will soon begin to run away (screaming) from us and the field. That is nothing to laugh at.

8 Comments

Filed under Evaluation, Evaluation Quality, Program Evaluation