As I was reading a number of evaluation reports recently, the oddity of evaluation jargon struck me. It isn’t that we have unusual technical terms—all fields do—but that we use everyday words in unusual ways. It is as if we speak in a code that only another evaluator can decipher.
I jotted down five words and phrases that we all use when we speak and write about evaluation. On the surface, their meanings seem perfectly clear. However, each can be used for good or ill. How are you using them?
(1) Suggest
As in: The data suggest that the program was effective.
Pros: Suggest is often used to avoid words such as prove and demonstrate—a softening of this is so to this seems likely. Appropriate qualification of evaluation results is desirable.
Cons: Suggest is sometimes used to inflate weak evidence. Any evaluation—strong or weak—can be said to suggest something about the effectiveness of a program. Claiming that weak evidence suggests a conclusion overstates the case.
Of special note: Data, evaluations, findings, and the like cannot suggest anything. Authors suggest, and they are responsible for their claims.
(2) Mixed Methods
As in: Our client requested a mixed-methods evaluation.
Pros: Those who focus on mixed methods have developed thoughtful ways of integrating qualitative and quantitative methods. Thoughtful is desirable.
Cons: All evaluations use some combination of qualitative and quantitative methods, so any evaluation can claim to use—thoughtfully or not—a mixed-methods approach. A request for a mixed-methods evaluation can mean that clients are seeking an elusive middle ground—a place where qualitative methods tell the program’s story in a way that insiders find convincing and quantitative methods tell the program’s story in a way that outsiders find convincing. The middle ground frequently does not exist.
(3) We Know
As in: We know from the literature that teachers are the most important school-time factor influencing student achievement.
Cons: The word know implies that claims to the contrary are unfounded. This shuts down discussion on topics for which there is almost always some debate. One could argue that the weight of evidence is overwhelming, the consensus in the field is X, or we hold this belief as a given. Claiming that we know, with rare exception, overstates the case.
(4) Nonetheless [we can believe the results]
As in: The evaluation has flaws; nonetheless, it reaches important conclusions.
Pros: When followed by a rationale (…for the following reasons…), this turn of phrase can signal something quite important.
Cons: All evaluations have flaws, and it is the duty of evaluators to bring them to the attention of readers. If the reader is then asked to ignore the flaws, without being given a reason, it is at best confusing and at worst misleading.
(5) Validated Measure
As in: We used the XYZ assessment, a previously validated measure.
Cons: Validity is not a characteristic of a measure. A measure is valid for a particular group of people for a particular purpose in a particular context at a specific point in time. This means that evaluators must make the case that all of the measures that they used were appropriate in the context of the evaluation.
The Bottom Line
I am guilty of sometimes using bad language. We all are. But language matters, even in casual conversations among knowledgeable peers. Bad language leads to bad thinking, as my mother always said. So I will endeavor to watch my language and make her proud. I hope you will too.
Running Hot and Cold for Mixed Methods: Jargon, Jongar, and Code
Jargon is the name we give to big labels placed on little ideas. What should we call little labels placed on big ideas? Jongar, of course.
A good example of jongar in evaluation is the term mixed methods. I run hot and cold for mixed methods. I praise them in one breath and question them in the next, confusing those around me.
Why? Because mixed methods is jongar.
Recently, I received a number of comments on LinkedIn about my last post. A bewildered reader asked how I could write that almost every evaluation can claim to use a mixed-methods approach. It is true: I believe that almost every evaluation can claim to be a mixed-methods evaluation, but I do not believe that many—perhaps most—should.
Why? Because mixed methods is also jargon.
Confused? So were Abbas Tashakkori and John Creswell. In 2007, they put together a very nice editorial for the first issue of the Journal of Mixed Methods Research. In it, they discussed the difficulty they faced as editors who needed to define the term mixed methods, and they offered two definitions.
By the first definition, mixed methods is jargon—almost every evaluation uses more than one type of data, so the definition attaches a special label to a trivial idea. This is the view that I expressed in my previous post.
By the second definition, which is closer to my own perspective, mixed methods is jongar—two simple words struggling to convey a complex concept.
My interpretation of the second definition is as follows:
A mixed-methods evaluation is one that establishes in advance a design that explicitly lays out a thoughtful, strategic integration of qualitative and quantitative methods to accomplish a critical purpose that either qualitative or quantitative methods alone could not.
Although I like this interpretation, it places a burden on the adjective mixed that it cannot support. In doing so, my interpretation trades one old problem—distinguishing mixed-methods evaluations from other types of evaluation—for a number of new ones.
These complex ideas are lurking behind simple words. That’s why the words are jongar and why the ideas they represent may be ignored.
Technical terms—especially jargon and jongar—can also be code. Code is the use of technical terms in real-world settings to convey a subtle, non-technical message, especially a controversial message.
For example, I have found that, in practice, funders and clients often propose mixed-methods evaluations to signal—in code—that they seek an ideological compromise between qualitative and quantitative perspectives. This is common when program insiders put greater faith in qualitative methods and outsiders put greater faith in quantitative methods.
When this is the case, I believe that mixed methods provide an illusory compromise between imagined perspectives.
The compromise is illusory because mixed methods are not a middle ground between qualitative and quantitative methods, but a new method that emerges from the integration of the two. At least by the second definition of mixed methods that I prefer.
The perspectives are imagined because they concern how results based on particular methods may be incorrectly perceived or improperly used by others in the future. Rather than leap to a mixed-methods design, evaluators should discuss these imagined concerns with stakeholders in advance to determine how to best accommodate them—with or without mixed methods. In many funder-grantee-evaluator relationships, however, this sort of open dialogue may not be possible.
This is why I run hot and cold for mixed methods. I value them. I use them. Yet, I remain wary of labeling my work as such because the label can be…
Too bad—the ideas underlying mixed methods are incredibly useful.