As I was reading a number of evaluation reports recently, the oddity of evaluation jargon struck me. It isn’t that we have unusual technical terms—all fields do—but that we use everyday words in unusual ways. It is as if we speak in a code that only another evaluator can decipher.
I jotted down five words and phrases that we all use when we speak and write about evaluation. On the surface, their meanings seem perfectly clear. However, each can be used well or badly. How are you using them?
(1) Suggest
As in: The data suggest that the program was effective.
Pros: Suggest is often used to avoid words such as prove and demonstrate—a softening from “this is so” to “this seems likely.” Appropriate qualification of evaluation results is desirable.
Cons: Suggest is sometimes used to inflate weak evidence. Any evaluation—strong or weak—can be said to suggest something about the effectiveness of a program. Claiming that weak evidence suggests a conclusion overstates the case.
Of special note: Data, evaluations, findings, and the like cannot suggest anything. Authors suggest, and they are responsible for their claims.
(2) Mixed Methods
As in: Our client requested a mixed-methods evaluation.
Pros: Those who focus on mixed methods have developed thoughtful ways of integrating qualitative and quantitative methods. Thoughtful is desirable.
Cons: All evaluations use some combination of qualitative and quantitative methods, so any evaluation can claim to use—thoughtfully or not—a mixed-methods approach. A request for a mixed-methods evaluation can mean that clients are seeking an elusive middle ground—a place where qualitative methods tell the program’s story in a way that insiders find convincing and quantitative methods tell the program’s story in a way that outsiders find convincing. The middle ground frequently does not exist.
(3) Know
As in: We know from the literature that teachers are the most important school-time factor influencing student achievement.
Pros: None.
Cons: The word know implies that claims to the contrary are unfounded. This shuts down discussion on topics for which there is almost always some debate. One could argue that the weight of evidence is overwhelming, the consensus in the field is X, or we hold this belief as a given. Claiming that we know, with rare exception, overstates the case.
(4) Nonetheless [we can believe the results]
As in: The evaluation has flaws; nonetheless, it reaches important conclusions.
Pros: If the phrase is followed by a rationale (…for the following reasons…), it might signal something quite important.
Cons: All evaluations have flaws, and it is the duty of evaluators to bring them to the attention of readers. If the reader is then asked to ignore the flaws, without being given a reason, it is at best confusing and at worst misleading.
(5) Validated Measure
As in: We used the XYZ assessment, a previously validated measure.
Pros: None.
Cons: Validity is not a characteristic of a measure. A measure is valid for a particular group of people for a particular purpose in a particular context at a specific point in time. This means that evaluators must make the case that all of the measures that they used were appropriate in the context of the evaluation.
The Bottom Line
I am guilty of sometimes using bad language. We all are. But language matters, even in casual conversations among knowledgeable peers. Bad language leads to bad thinking, as my mother always said. So I will endeavor to watch my language and make her proud. I hope you will too.
All great points. I too am guilty of this.
A particular bugbear of mine is overlapping evaluation terms and labels. Check out my first and ongoing attempt at a typology of evaluation terms:
https://bubbl.us/?h=b9bc4/1a7bc4/769hviFngrfnI
Reforming the jargon of evaluation is no small task. More power to you!
I’d love to see one of your bubble charts with “data-driven decision-making” in the middle…. Another family of terms that is so similar yet so confusing.
Hmmm…good idea for a future post.
So glad to see someone point out a major pet peeve of mine regarding the use of the term “validated measure”. Even people who know better fall into that trap. All great points. Kudos for your always thought-provoking evaluation blog.
Emily, Thank you for your kind words.
I feel your pain. Popular notions of what validated means can cause big problems. Frequently, a particular “validated” measure is required in an evaluation, yet the measure is not well aligned with the program. Not every attempt to increase engagement, satisfaction, knowledge, health, etc. focuses on the same particulars of similarly defined concepts. In addition, because validation is a mysterious process, it can be difficult to convince users that a newly developed, well-aligned measure can be trusted for evaluative purposes, even if there is evidence that it should be.
Evaluators shouldn’t say “logic model” with newbies. I use “timeline” instead and say we’re building a “graphic organizer” or a “picture” to show how what we’re doing (activities) might affect things in the future (short-term, medium-term, and long-term outcomes).
Ann,
Thanks for your comment. I have also found that the term *logic model* can cause confusion, even when those I am working with have a high level of evaluation training. There are many approaches to logic models (I call them brands), each with its own linguistic challenges; it is easy for people to be (unknowingly) divided by a common vocabulary. Rather than force people to express their ideas in cookie-cutter terms (inputs, outputs, outcomes, objectives, etc.), I don’t use any technical language. I ask questions, listen, and doodle. At some point I show my doodle and ask, “This is what I hear you saying—do I understand you correctly?” After a few revisions, I can build out the doodle and explain the more elaborate framework in terms of an example people understand: their own program.
Great job! We need more evaluators like you.