Before January comes to a close, I thought I would make a few predictions. Ten to be exact. That’s what blogs do in the new year, after all.
Rather than make predictions about what will happen this year—in which case I would surely be caught out—I make predictions about what will happen over the next ten years. It’s safer that way, and more fun as I can set my imagination free.
My predictions are not based on my ideal future. I believe that some of my predictions, if they came to pass, would present serious challenges to the field (and to me). Rather, I take trends that I have noticed and push them out to their logical—perhaps extreme—conclusions.
In the next ten years…
(1) Most evaluations will be internal.
The growth of internal evaluation, especially in corporations adopting environmental and social missions, will continue. Eventually, internal evaluation will overshadow external evaluation. The job responsibilities of internal evaluators will expand and routinely include organizational development, strategic planning, and program design. Advances in online data collection and real-time reporting will increase the transparency of internal evaluation, reducing the utility of external consultants.
(2) Evaluation reports will become obsolete.
After-the-fact reports will disappear entirely. Results will be generated and shared automatically—in real time—with links to the raw data and documentation explaining methods, samples, and other technical matters. A new class of predictive reports, preports, will emerge. Preports will suggest specific adjustments to program operations that anticipate demographic shifts, economic shocks, and social trends.
(3) Evaluations will abandon data collection in favor of data mining.
Tremendous amounts of data are being collected in our day-to-day lives and stored digitally. It will become routine for evaluators to access and integrate these data. Standards will be established specifying the type, format, security, and quality of “core data” that are routinely collected from existing sources. As in medicine, core data will represent most of the outcome and process measures that are used in evaluations.
(4) A national registry of evaluations will be created.
Evaluators will begin to record their studies in a central, open-access registry as a requirement of funding. The registry will document research questions, methods, contextual factors, and intended purposes prior to the start of an evaluation. Results will be entered or linked at the end of the evaluation. The stated purpose of the database will be to improve evaluation synthesis, meta-analysis, meta-evaluation, policy planning, and local program design. It will be the subject of prolonged debate.
(5) Evaluations will be conducted in more open ways.
Evaluations will no longer be conducted in silos. Evaluations will be public activities that are discussed and debated before, during, and after they are conducted. Social media, wikis, and websites will be re-imagined as virtual evaluation research centers in which like-minded stakeholders collaborate informally across organizations, geographies, and socioeconomic strata.
(6) The RFP will RIP.
The purpose of an RFP is to help someone choose the best service at the lowest price. RFPs will no longer serve this purpose well because most evaluations will be internal (see 1 above), information about how evaluators conduct their work will be widely available (see 5 above), and relevant data will be immediately accessible (see 3 above). Internal evaluators will simply drop their data—quantitative and qualitative—into competing analysis and reporting apps, and then choose the ones that best meet their needs.
(7) Evaluation theories (plural) will disappear.
Over the past 20 years, there has been a proliferation of theories intended to guide evaluation practice. Over the next ten years, there will be a convergence of theories until one comprehensive, contingent, context-sensitive theory emerges. All evaluators—quantitative and qualitative; process-oriented and outcome-oriented; empowerment and traditional—will be able to use the theory in ways that guide and improve their practice.
(8) The demand for evaluators will continue to grow.
The demand for evaluators has been growing steadily over the past 20 to 30 years. Over the next ten years, demand will not level off; it will be sustained by the growth of internal evaluation (see 1 above) and the availability of data (see 3 above).
(9) The number of training programs in evaluation will increase.
There is a shortage of evaluation training programs in colleges and universities. The shortage is driven largely by how colleges and universities are organized around disciplines. Evaluation is typically found as a specialty within many disciplines in the same institution. That disciplinary structure will soften and the number of evaluation-specific centers and training programs in academia will grow.
(10) The term evaluation will go out of favor.
The term evaluation sets the process of understanding a program apart from the process of managing a program. Good evaluators have always worked to improve understanding and management. When they do, they have sometimes been criticized for doing more than determining the merit of a program. To more accurately describe what good evaluators do, evaluation will become known by a new name, such as social impact management.
…all we have to do now is wait ten years and see if I am right.
41 responses to “The Future of Evaluation: 10 Predictions”
I cannot censor my comments or others’ comments! I would like to say that these predictions reflect very nice and reasonable thinking.
Moein — I am always happy to hear from you. Thanks for the comment.
A question for you:
What predictions would you make for evaluation in your country?
1, 9 & 10.
Moein–If the name *evaluation* goes out of favor, what will replace it?
Currently in Iran, evaluation is not in favor! I think that in the future evaluation may become part of the management process. In sum, I think the next generations of evaluation will evolve within the capacity-building discourse.
Generally, I’d bet on your predictions in descending order. As an evaluator who moved from external to internal evaluation about 7 years ago, I think #1 is a pretty sure bet. I’ve seen my own responsibilities shift dramatically in the past years from evaluation to performance management systems and quality improvement. Likewise, the importance of continuous improvement based on ongoing evaluation findings has long been the hallmark of the “best” evaluation partnerships. Regarding #3, I work in public health and we have long relied on ongoing data collection systems: BRFSS, the Healthy Youth Survey, disease surveillance systems, immunization records, vital statistics, etc.
I would bet the same way. Without intending to, it seems that I more or less put the predictions in descending order of what I believe is likely.
Another question is whether society would be better off if any of the predictions came true.
For example, I agree that public health and medicine have been at the front of common data definition/collection efforts for some time. That has helped policymakers coordinate public health efforts, researchers interpret findings, and healthcare professionals design programs. It may also be limiting our imagination of what is possible or desirable, and it may privilege those sectors of society that provide more and better data.
I believe the predictions capture where the field is going. I wonder if we will be ready when we get there.
John, that definitely is the crucial question. I remember reviewing AEA’s guidance to the feds regarding internalizing evaluation at the national level. I was a bit alarmed at the thought of evaluation being enlisted to work within a system that is driven by political tides as much as by rational processes. Also, as funding for the Behavioral Risk Factor Surveillance System has decreased, the cost per completed survey interview has increased dramatically. This results in a smaller sample, and at the local level we were already struggling to have enough data to say anything about our American Indian and Latino populations. We will need loud voices and commitment to ensure that there are enough data to mine, especially when we want to look at equity issues. Is it time for the canary to sound the alarm?
Your metaphor may be a bit too apt as canaries in mines don’t sound an alarm so much as drop dead, which sets the miners into immediate action. As you point out, we don’t want to wait for some group to be negatively impacted by data policies before we take action. So the big question is this — How do we focus attention on a policy that at the moment is not hurting anyone but at some point in the future will? I wish I knew.
I see advocates and their organizations, such as Angela Glover Blackwell and her staff at PolicyLink at the national level, and Rosalinda Guillen and her staff at Comunidad y Comunidad at our county level, raising these concerns and artfully moving the equity movement forward. And where data mining is not possible, they are advocating for data collection systems that include those on whose behalf they advocate.
I tried sharing some of your predictions with a couple of university professor types. Oops. All I got in response was a rather superior sounding comment about, “Oh, I don’t know… external evaluators will still be necessary because of a… oh what is that… a little thing called being ‘objective.'” Sigh. I chose to leave the vicinity rather than try to get into a debate about it.
Long story short, I find your predictions thought provoking. And I have the patience to see how well your crystal ball blog entry holds up over the next decade! Thanks as always for your fine thinking.
Not surprising. But keep in mind I wasn’t predicting the end of external evaluation. Just that it will be less important.
Most other fields depend on internally generated information. For example, independent financial audits of corporations only check a small fraction of accounts. Why should social betterment programs require greater scrutiny?
Objectivity is important. Honesty more so. Transparency promotes honesty, imperfectly, but possibly enough that honest insiders may eventually be valued over objective outsiders.
I appreciate the notion of accountability by and to the team as much as accountability to funders. That’s why your predictions resonated so much. It also reminds me a bit of the old phrase, the Wilford Brimley Law: “’cuz it’s the right thing to do.” Which connects with the importance of doing the right things and not just doing things right.
I look forward to sharing this list with others.
Pingback: Susan Kistler on The Future of Evaluation: 5 Predictions (building on 10 others!) · AEA365
Pingback: The Future of Evaluation: Part 3 (Two more predictions) « EvaluationBaron
Imagine that these 10 had already come true. What would be your predictions for the next 10 years?
You asked for it, you got it. Look for my 20 year predictions in a new EvalBlog entry in about one week.
Your predictions are very interesting indeed, and I think that many of them are already a reality in the health and social development sector, especially in resource-poor settings. No. 1 is becoming the norm in South Africa, where I work. Internal evaluators (M&E professionals) are leading efforts to strengthen program design and organizational strategic planning processes using internal evaluation findings as well as data mining (No. 3). External evaluations commissioned by donors also draw considerably on existing program data, thereby increasing the importance of M&E managers’ role in ensuring the quality and use of routine program data gathered by the organization. Your 8th and 9th predictions are a reality in our context as well. There are very few training opportunities in line with the growing demand. We hope this will change gradually as universities adapt to address these needs.
John, I think these are reasonable predictions, with one exception. Although data mining could certainly grow in importance in evaluation, I don’t see data collection disappearing. The problem I have always experienced with data that are not collected with a specific research/evaluation question in mind is that, most often, they don’t answer the questions very well! In addition, I wonder about the design implications. Where in data mining are the potential counterfactuals?
This is a response I gave to roughly the same question on an AEA LinkedIn Group discussion. I think you can link to it here (http://tinyurl.com/7rplt3s).
I have similar concerns about data mining. However, electronic data are becoming more widely available and more comprehensive in scope. Evaluators are rightly making greater efforts to take advantage of this growing pool of data. For better or for worse, I believe their efforts will grow until data mining overshadows the customized, research-like data collection efforts that we currently favor in evaluation.
Data mining can be rigorous in the way that experimentalists use the word. Data mining techniques can be used to conduct sophisticated interrupted time series analyses, which are widely accepted quasi-experimental alternatives to classic randomized control trials.
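To make this concrete, here is a minimal sketch of what a segmented-regression interrupted time series analysis can look like in code. Everything here is illustrative rather than drawn from any real evaluation: the monthly outcome data are simulated, and the month of interruption and effect sizes are invented assumptions.

```python
# A minimal sketch of a segmented-regression interrupted time series (ITS)
# analysis, the quasi-experimental design mentioned above. All data are
# simulated; a real evaluation would use mined program or surveillance data.
import numpy as np

rng = np.random.default_rng(0)

# Simulate 48 monthly observations of an outcome, with a program change at
# month 24 that adds a hypothetical immediate level shift of +5.
t = np.arange(48)
t0 = 24
post = (t >= t0).astype(float)
y = 10 + 0.2 * t + 5.0 * post + rng.normal(0, 1.0, size=t.size)

# Segmented regression: intercept, pre-existing trend, level change at the
# interruption, and change in trend after the interruption.
X = np.column_stack([np.ones_like(t, dtype=float), t, post, post * (t - t0)])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
intercept, trend, level_change, trend_change = coef

print(f"estimated level change at interruption: {level_change:.2f}")
```

With enough pre- and post-interruption observations, the estimated level change recovers the simulated program effect; the pre-existing trend term is what distinguishes this design from a naive before/after comparison.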
Data mining techniques can also be used to provide rich descriptions of humans and their behavior. In contrast to datasets from most randomized control trials, evaluators can find available electronic datasets that are larger by many orders of magnitude, allowing for more nuanced understandings of subgroups, contingencies, and contexts.
As you point out, one danger is actively believing, or just tacitly assuming, that the natural circumstances that give rise to available data generally provide a sound basis for causal inferences. This is something to worry about.
But the danger may (and I emphasize *may*) seem larger than it is.
Traditionally researchers develop a causal hypothesis from theory and/or data about a program, create a special set of (experimental) circumstances under which the hypothesis is tested, and if the results are favorable suggest that others in similar (non-experimental) circumstances use the program.
We now have the capacity to develop a causal hypothesis exclusively from data collected in the course of some online activity, modify the online activity quickly in accordance with the hypothesis, see what happens, then revert to the prior online activity, and again see what happens. This scenario looks a lot like N-of-1 studies used to good effect in medicine.
Studies such as this depend on the ease and speed of manipulating the design of a program. As programs incorporate more online activities, ease and speed will likely increase.
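Here is a hypothetical sketch of the modify-observe-revert cycle described above, modeled on an N-of-1 (ABAB) design: the online activity alternates between its original form (A) and a modified form (B), and outcomes are compared across the alternating blocks. The variants, outcomes, and effect size are all simulated assumptions for illustration.

```python
# A sketch of an N-of-1 style (ABAB) study of an online program change:
# deploy the modification, revert, deploy again, revert again, then compare
# outcomes across the alternating blocks. All data are simulated.
import random
import statistics

random.seed(1)

def run_block(variant, n=200):
    """Simulated outcomes for one block of the online activity.
    Variant B carries a hypothetical +0.5 effect on the outcome."""
    effect = 0.5 if variant == "B" else 0.0
    return [random.gauss(2.0 + effect, 1.0) for _ in range(n)]

# ABAB schedule: alternate between the original and modified activity.
schedule = ["A", "B", "A", "B"]
blocks = {"A": [], "B": []}
for variant in schedule:
    blocks[variant].extend(run_block(variant))

estimate = statistics.mean(blocks["B"]) - statistics.mean(blocks["A"])
print(f"estimated effect of the modification: {estimate:.2f}")
```

Because the program itself is toggled, alternation guards against mistaking a background trend for a program effect, which is the core appeal of the design when a program's online components can be changed quickly and cheaply.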
Who knows what will happen in the future. As with all of my predictions, I remain a hopeful skeptic. But I absolutely have hope.
John, thanks for the predictions. Can you talk a little more about the trends you’re seeing that suggest greater shifts toward internal evaluation? It’s happening in my organization, and the reasons include greater opportunities for internal learning, more frequent feedback, and sustainability. Would love to hear your thoughts and some of the background/details. Any links you can share would also be helpful. Cheers.
I discuss this a bit and some other changes I am seeing in evaluation practice in a paper that will appear in Evaluation and Program Planning sometime soon.
In short, the variety of players in the “social benefit sector” is growing. There are many more corporations, microfoundations, megafoundations, and social entrepreneurs focusing on (or at least talking about) social and environmental impacts than there were 10 or even 5 years ago. My sense is that these new players tend to include internal evaluators early in their development.
Interestingly, internal evaluators in these new organizations frequently do not have explicit evaluation training (coming instead from law, design, tech, communication, and business) and may not even call themselves evaluators (using instead titles like Chief Impact Officer or Knowledge and Learning Associate).
They often come to internal evaluation early in their careers, something I find a cause for celebration (What’s not to like about new ideas, current training, and optimism?) and a cause for worry (Will they stay in a field in which measurable progress has historically been slow? How much of a dent should we expect newcomers to make in problems that are as ancient as humankind?).
From what I see, traditional players–nonprofit organizations in particular–are hiring more internal evaluators for two reasons. First, there is a strategic advantage to evaluation (something that I strive to provide to clients). Get it right, and your programs become more effective. With evidence of that, it becomes easier to find funding and do more good for more people.
Second, there is a tactical advantage to communicating publicly that you take evaluation seriously (even if you don’t). In cynical moments, I feel as though the second reason dominates the first. But then I talk to some internal evaluators and my cynicism fades. I have found internal evaluators to be a good bunch, and I think having more of them will benefit everyone.
Thanks for the preview! I will look for your paper when it is published. Agree that it seems like a good thing to have folks coming to internal evaluation with different backgrounds, ideas, and experiences. Kuhn has a good line about new insights and changes often coming from people who are new to an area or field, partly because they’re not wedded to prior practices.
Am curious whether your take is that foundations are also increasingly on board with a shift to internal evaluation. I know some are becoming more accepting of different kinds of evidence, but is it a trend? I still get questions like: Can internal evaluators be objective, and can their data and/or conclusions be trusted?
I very much like your predictions, and I would suggest that you come present your ideas at the next conference of the European Evaluation Society, which will take place in Helsinki from 3–5 October 2012: “Evaluation in the Networked Society: new concepts, new challenges, new solutions.” This will trigger a lot of interesting discussions among participants, many of whom are reflecting on the future of evaluation. Please visit our website: http://www.europeanevaluation.org.
Thank you for your kind words. Helsinki…interesting. I have wanted to attend the EES Conference for some time. I will give it serious thought.
Nice job John. Thoughtful and considered as usual. I agree with most of the discussion above and your predictions are very much on target. (I like the future work picture as well.)
As you know, I am a major advocate of internal evaluation through empowerment evaluation (and I served as an internal auditor as well). However, as the profession shifts in that direction, additional quality controls should be contemplated to avoid organizational conflicts of interest. In other words, the evaluation team should report high enough in management that it avoids reporting to the group it is evaluating (if it is an independent unit). If the team adopts more of an empowerment evaluation mode, then the group is evaluating itself, which reduces many of the traditional organizational conflict-of-interest problems.
I think reports will remain but take on new forms, ranging from brief videos (like the QuickTime ones I make for my clients) to taped videoconference exchanges (like I do on ooVoo or Skype). There may also be a need for audit-trail summary documents (more conventional reports) for some time (decades), since socialization runs deep for everyone and expectations, however archaic, remain long after they are useful.
My guess is the term evaluation will remain (if it continues to evolve and accept more responsibilities and meet rising expectations).
You take care and thanks as always for providing thought provoking (and I think accurate) predictions about our profession.
Dr. David Fetterman
Fetterman & Associates
Empowerment Evaluation offers an interesting lens for considering internal evaluation. Skeptics of internal evaluation, I believe, fear that organizations use it to conduct Empower-Me-To-Control-My-Message Evaluation. I like your suggestion that Empowerment Evaluation might reduce skepticism by resolving some conflicts of interest. I need to think about it some more. Interesting.
Reports, however, are doomed. The real value of a report is determined by the amount of usable information it contains. The same information presented in reports is now being presented faster and more comprehensively in other ways online. In ten years, the amount of information we will be able to access — without the filter and delay of a formal report — will astound us. However, I am skeptical that more and faster information will improve our collective efforts to benefit others. That is another story.
I hope the term evaluation will remain, but it is already falling out of favor. I see two important reasons for this: (1) amazingly, most people have never heard of evaluation, and (2) the new players in the social benefit space want to establish that they are approaching their work differently so they are choosing new words to describe their efforts.
John, just wanted to make sure that you didn’t miss this response from colleagues in Slovenia: “Six predictions about the future of evaluation”
Thanks Susan. Saw it — glad that others are joining in the fun.
Thank you, John, for your 10 predictions. My attention was drawn to them by our Slovenia Evaluation Association colleague, Bojan who sent me copy of SEA’s 6 Point predictions.
If much of the 10 or 6 Point Predictions is to become reality in the next 10 years, and they should be, if World Leaders are serious at finding; implementing; monitoring, evaluating and assessing the implementation of Sound Professional Solutions to real and complex World Food, Fuel, Finance, Trade, Terrorism and Climate Change problems on the ground from Village to Global levels on International Institutions, Developed Countries and Developing Countries sides; then Dr. Hellmut Eggers, who created Project Cycle Management, (PCM) in 1987, observed constraint / drawback – “there is no accumulation of Development Evaluation (Cooperation) Learning in the past 25 years of operating PCM”, which is the most widely used (in the breech) Evaluation Approach in our World today, need to be Professionally TACKLED by all concerned Evaluation and non Evaluation Professionals as well as Policy / Decision Makers from Village to Global levels.
Thus, if over the next 25 years PCM is operated correctly through 3PCM (Policy, Program, Project Cycle Management), ensuring that “Development Evaluation (Cooperation) Lesson Learning” is no longer followed, at equal speed, by “Development Evaluation (Cooperation) Lesson Forgetting” within international institutions and developed- and developing-country governments, then the probability is high that the 6- and 10-point predictions can become reality in five years or less, making the dream of a world without poverty achievable by 2030; that is, our world will be a much better place by 2037, for citizens of rich and poor countries alike.
The point we are making is that there is currently no bridge between “learning” and “doing.” We would like to propose a way out of this dilemma, one that allows the accumulation of “Development Evaluation (Cooperation) Lesson Learning” and the operational application of that accumulated learning in work toward implementing the ideas set out in the World Bank Public Sector Management (WBPSM) and World Bank Governance and Anti-Corruption (WBGAC) documents, in ways that help achieve increasing convergence between the vision, intention, and reality of international institutions and developed- and developing-country governments.
Prior to elaborating our proposal, we would like to know that any genuinely interested international institution or government entity active in national or international development cooperation will give it serious consideration. Please let us know that your institution or entity will do so, and we will send our ideas, set out in a Bridge Building paper and a Standard Assessment Framework paper, which the institution or entity will be free to accept or reject as it sees fit. We just want to be sure, before setting to work, that the institution or entity will look at them. Should your institution or entity be interested in receiving the papers, please send an email to firstname.lastname@example.org
Global Center for Learning in Evaluation and Results
International Society for Poverty Elimination /
Economic Alliance Group
Secretariat to 3PCM Community of Practice
Abuja Nigeria; Kent UK
Pingback: IDO’nun Gelecegi ile İlgili 10 Tahmin « degerturk
Pingback: About The Future of Evaluation | The Future of Evaluation
It looks like I am coming to the party late, but I am interested that no one has commented on your prediction #4 (a national registry of evaluations). Having created a poster on this topic for the 2004 AEA conference as a grad student at the University of Minnesota, I am interested in finding out what, if anything, is happening on this front and who might be interested in pursuing the topic and perhaps presenting at AEA in 2013.
Partners in Evaluation,
Never too late to join in. I am not aware of any efforts currently underway to establish a registry, but it is possible that someone is working on it. I think it’s an important topic and one I would love to discuss further. An AEA session might be a good way to start a larger conversation.
How encouraging. I will plan on submitting a proposal for a think tank on the subject. If any of your readers have ideas on the subject I would love to hear them.
Keep me in the loop. Would be happy to participate if my conf schedule allows.
Pingback: Dying or Thriving: The Future Of Evaluation | On Top Of The Box Evaluation
Very interesting. Let me add a bit more.

Joint country-led evaluation

In the future, evaluation will very much be joint, country-led evaluation, in which ownership of the findings is shared with the country where the program operates.

Most will request reporting with evidence in the form of pictures and success stories.

Some will also move to online reporting.
M&E Expert, Project Management Consultant,
Trainer & Facilitator
Isha, These are trends that I agree are likely to continue. An interesting question is what may be driving them. The first, I would suggest, is a reaction to feeling that evaluations–and the values they promote–are being imposed by those far from the local contexts in which programs are implemented. It sets out to address the imbalance between values and power. The second, it seems to me, is a reaction to methods that focus narrowly on discrete indicators rather than holistic assessments. It sets out to address the imbalance between what stakeholders see and what evaluators measure. Greater local autonomy and fuller understanding are where evaluation is going. Getting there, however, may be a bit of a bumpy ride because we are still learning how to accomplish these ends while also promoting others that we believe are important–program improvement, credible evidence, program development, justice, management, fiduciary responsibility–and may not fit neatly into any single approach.
Those are good points, but I see some others that should also be significant. When it comes to evaluating efforts to address social issues like chronic disease reduction, improving graduation rates, or addressing environmental issues, the concept of Collective Impact will lead to a big shift away from evaluating the “isolated impact” of a program in favor of how a coalition is working together to address complex issues. This should lead to a shift from logic models to collaborative strategy maps, which are much better at creating alignment and teamwork. The concepts of Developmental Evaluation should also gain traction as people (especially those doing internal evaluation) realize that learning and improving along the journey is the primary reason for doing evaluation. Strategy Management, which is forward looking and taps into the collective thinking of people, should become as important as data mining and predictive work.
I think you are getting at a question that policy, programs, and evaluation have faced since they began: does an intentionally coordinated “bundle” of interventions create greater impact than many individually pursued interventions? Some argue for the former because of the complexity underlying social problems. A good example would be a TB program in which public health organizations, homeless shelters, law enforcement agencies, and hospitals work together as partners to control the spread of an increasingly drug-resistant disease. Our ability to understand the complexity underlying social problems, however, has its limits. And our ability to coordinate activities across multiple organizations also has limits. So it is possible that in practice collective impact, which at a conceptual level makes a great deal of sense, may not live up to its promise. On the other hand, neither may individually pursued interventions. There is a growing belief that markets for funding and/or customers of double-bottom-line organizations (those with both financial and social missions) might impose a structure or discipline that increases the collective impact of organizations. It is an interesting alternative to the coordinated approach that, alas, has just as little evidence. I agree that in the next few years there will be more coordinated/collective/complexity-driven approaches to evaluation. Whether that turns out to be an improvement over what is currently being done is much harder to predict.