Running Hot and Cold for Mixed Methods: Jargon, Jongar, and Code

Jargon is the name we give to big labels placed on little ideas. What should we call little labels placed on big ideas? Jongar, of course.

A good example of jongar in evaluation is the term mixed methods. I run hot and cold for mixed methods. I praise them in one breath and question them in the next, confusing those around me.

Why? Because mixed methods is jongar.

Recently, I received a number of comments through LinkedIn about my last post. A bewildered reader asked how I could write that almost every evaluation can claim to use a mixed-methods approach. It’s true: I believe that almost every evaluation can claim to be a mixed-methods evaluation, but I don’t believe that many—perhaps most—should.

Why? Because mixed methods is also jargon.

Confused? So were Abbas Tashakkori and John Creswell. In 2007, they put together a very nice editorial for the first issue of the Journal of Mixed Methods Research. In it, they discussed the difficulty they faced as editors who needed to define the term mixed methods. They wrote:

…we found it necessary to distinguish between mixed methods as a collection and analysis of two types of data (qualitative and quantitative) and mixed methods as the integration of two approaches to research (quantitative and qualitative).

By the first definition, mixed methods is jargon—almost every evaluation uses more than one type of data, so the definition attaches a special label to a trivial idea. This is the view that I expressed in my previous post.

By the second definition, which is closer to my own perspective, mixed methods is jongar—two simple words struggling to convey a complex concept.

My interpretation of the second definition is as follows:

A mixed-methods evaluation is one that establishes in advance a design that explicitly lays out a thoughtful, strategic integration of qualitative and quantitative methods to accomplish a critical purpose that either qualitative or quantitative methods alone could not.

Although I like this interpretation, it places a burden on the adjective mixed that the word cannot support. In doing so, my interpretation trades one old problem—how to distinguish mixed-methods evaluations from other types of evaluation—for a number of new problems. Here are three of them:

  • Evaluators often amend their evaluation designs in response to unanticipated or dynamic circumstances—so what does it mean to establish a design in advance?
  • Integration is more than having quantitative and qualitative components in a study design—how much more and in what ways?
  • A mixed-methods design should be introduced when it provides a benefit that would not be realized otherwise—how do we establish the counterfactual?

These complex ideas are lurking behind simple words. That’s why the words are jongar and why the ideas they represent may be ignored.

Technical terms—especially jargon and jongar—can also be code. Code is the use of technical terms in real-world settings to convey a subtle, non-technical message, especially a controversial one.

For example, I have found that in practice, funders and clients often propose mixed-methods evaluations to signal—in code—that they seek an ideological compromise between qualitative and quantitative perspectives. This is common when program insiders put greater faith in qualitative methods and outsiders put greater faith in quantitative methods.

When this is the case, I believe that mixed methods provide an illusory compromise between imagined perspectives.

The compromise is illusory because mixed methods are not a middle ground between qualitative and quantitative methods, but a new method that emerges from the integration of the two, at least by the second definition of mixed methods, which I prefer.

The perspectives are imagined because they concern how results based on particular methods may be incorrectly perceived or improperly used by others in the future. Rather than leap to a mixed-methods design, evaluators should discuss these imagined concerns with stakeholders in advance to determine how best to accommodate them—with or without mixed methods. In many funder-grantee-evaluator relationships, however, this sort of open dialogue may not be possible.

This is why I run hot and cold for mixed methods. I value them. I use them. Yet, I remain wary of labeling my work as such because the label can be…

  • jargon, in which case it communicates nothing;
  • jongar, in which case it cannot communicate enough; or
  • code, in which case it attempts to communicate through subtlety what should be communicated through open dialogue.

Too bad—the ideas underlying mixed methods are incredibly useful.


6 responses to “Running Hot and Cold for Mixed Methods: Jargon, Jongar, and Code”

  1. getrealevaluation

    John, for something to be called a genuine *mixed methods* evaluation or piece of research, I think there needs to be *synthesis* of the qualitative and quantitative data, drawing on both of them *together* to make sense of the same evaluation (or research) question.

    I think a lot of what people call “mixed methods” is what I call “both methods” – you’ve got some qual, you’ve got some quant, but they are analysed separately, considered separately, and not synthesized or made sense of together to truly answer the questions.

    I had a rant about this a few years ago (2007) – ordering the results section separately by type of data – see the JMDE editorial on “Unlearning some of our social scientist habits” http://tiny.cc/unlearning

    Interested in your (and others’) thoughts on this though!

    • Jane,

      Thanks for your comment and sharing the link to your rant. Rants are always welcome here.

      I think what you call synthesis I call integration—a design in which data, methods, and questions fit together well. For that to happen, there needs to be advance planning. Lucky accidents of integration are rare.

      I suppose there can be benefits to “both methods”, “mixed-up methods”, “parallel methods”, or whatever we choose to call less coordinated applications of qual and quant methods in a single evaluation. It depends on the particulars. However, I agree that these do not rise to what *should* be called mixed methods.

      But they are called mixed methods by some. That causes confusion, which is why I hesitate to use the term, even when I believe my work meets my stricter definition.

  2. getrealevaluation

    Yes, John, integration or synthesis, I think we are talking the same thing.

    However, I am not 100% convinced about the pre-planned requirement in order to call it the genuine article – though I agree it helps immensely!

    Say, for example, someone did a “both methods” evaluation and sent an early draft of their analysis and thinking to you or me for critique. It’s quite common to see evaluations where all the right ingredients are there (or close enough to all), but that systematic integration, considering the different pieces as a set, just hasn’t been done.

    So, what if we sent this feedback and the evaluation team said OMGoodness, you are absolutely right! They do a serious revamp of the report, which now integrates/synthesizes the different kinds of evidence … if it’s done well, there’s no reason why we can’t call that a genuine mixed methods study. It’s not accidental integration; it’s just done a little late …

    To me what matters is the quality of the final product (i.e. how well synthesized the evidence is), rather than whether the intent or preplanning was there originally. [No argument that it’s a lot harder to have the right evidence in hand if you don’t plan for it though!]

    Cheers,
    Jane Davidson
    (sorry, name doesn’t show on the previous comment!)

    • In concept, I agree. In practice, my sense is that the situation you describe is quite rare. More common, I believe, are cases where evaluators intended to integrate qual and quant methods but fell short of the mark. There might be a chance of strengthening these evaluations as you describe.

  3. I love the article – very thought provoking – and the discussion. I tend to agree with Dr. Davidson that there doesn’t necessarily need to be a plan in advance in order to allow for integration, but I also agree that true integration is rare in evaluation. (Being an external consultant these days, I now understand that this partly has to do with the client’s willingness to budget time to allow the evaluator to actually think, rather than just quickly dumping data onto paper as a final report.)

     In my mind, the key to mixed methods is the *intentional* use of quant and qual data together, to deepen our understanding around an evaluation question. It’s the lack of intentionality that I often find to be the problem – whether this comes on the front end as an advance plan, or on the back end as a synthesis of existing information, the final product is bound to be better where such intentionality is present. I also think that this allows room for shifting and changing the plan as the evaluation progresses, but without losing the overall view/plan of what we’re trying to learn and how each method should be contributing to that.

    • Kelci,

      Thanks for your comment. I agree that intentionality is critical, and perhaps a more apt way to characterize my point. But what one considers a research design may depend on the approach to research one imagines.

      At the risk of oversimplifying, consider two approaches to research. For the first, an evaluator gets in the middle of a messy situation, collects as much data as possible, and over time is able to capture insights as they emerge. For the second, the evaluator focuses on part of a messy situation, collects as little data as possible, and by the end provides credible answers to pre-specified questions. I think that both approaches are very unlikely to produce a proper mixed-methods evaluation without baking the integration of qualitative and quantitative methods into the design. In the first case, however, the design may look more like intention—creating opportunities for a mixed-methods approach to emerge—than a step-by-step protocol.

      What I have not seen—perhaps others have—is an evaluation that did *not* explicitly plan for a mixed-methods approach in advance yet somehow ended up with a thoughtful, strategic integration of qual and quant data/methods. Designs may change to accommodate circumstances, and I consider these changes, by and large, to be “in advance.”
