The AEA conference has been great. I have been very impressed with the presentations that I have attended so far, though I can’t claim to have seen the full breadth of what is on offer as there are roughly 700 presentations in total. Here are a few that impressed me the most.
A Clever Experiment
Among the many quantitative presentations I attended, a highlight was Ali Protik's description of an evaluation of the Talent Transfer Initiative (TTI) that he is currently working on with his colleagues at Mathematica Policy Research. The evaluation has a very imaginative design: when teacher positions become vacant in low-performing schools in a number of participating school districts, the vacant positions are randomly assigned to one of two hiring procedures. The first is business as usual (that is, teachers are hired as they normally are in that district), and the second is a specific procedure for hiring teachers who are in the top 3% of effectiveness within the district (based on three years of value-added scores). In the second case, principals decide whether to offer the position to a high-performing teacher, and teachers decide whether to accept it (a hefty financial bonus is used as an incentive). This design makes it possible to untangle how much of a teacher's effectiveness is driven by the local context versus qualities intrinsic to the teacher. Very clever. I look forward to learning about the results, which may be available at next year's AEA conference.
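To make the logic of the design concrete, here is a minimal sketch in Python. To be clear, this is not the study's actual code: the sample size, effect sizes, and outcome measure are all invented for illustration. The point is simply that because the vacancies themselves are randomized, a difference in average outcomes between the two arms estimates the effect of the hiring procedure.

```python
import random
import statistics

random.seed(42)  # for reproducibility of this illustration

# Hypothetical vacancies in low-performing schools across districts.
N_VACANCIES = 200

def simulate_outcome(arm):
    """Return a hypothetical student-achievement gain for one vacancy.

    The effect sizes below are invented for illustration only; the
    real TTI evaluation estimates them from student test-score data.
    """
    if arm == "tti":          # top-3% value-added hire, with bonus
        return random.gauss(0.10, 0.25)
    else:                     # business-as-usual hire
        return random.gauss(0.00, 0.25)

# Each vacancy is randomly assigned to one of the two hiring procedures.
assignments = [random.choice(["tti", "business_as_usual"])
               for _ in range(N_VACANCIES)]
outcomes = [simulate_outcome(arm) for arm in assignments]

tti = [y for arm, y in zip(assignments, outcomes) if arm == "tti"]
bau = [y for arm, y in zip(assignments, outcomes) if arm != "tti"]

# Random assignment means the difference in means is an unbiased
# estimate of the effect of the TTI hiring procedure.
effect = statistics.mean(tti) - statistics.mean(bau)
print(f"Estimated effect of TTI hiring: {effect:.3f}")
```

Note what gets randomized: the vacancy, not the teacher. That is what lets the evaluation observe proven high performers in new contexts and so separate intrinsic effectiveness from the setting a teacher happens to work in.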
Complexity
A recurrent theme at the conference was the application of systems thinking to evaluation. Bob Williams, Patricia Rogers (of Genuine Evaluation fame), and Richard Hummelbrunner did a great job of demystifying the topic and relating it to evaluation as it is currently practiced. Patricia described how programs are composed of simple (well understood, few elements, highly predictable), complicated (well understood, many elements, potentially less predictable), and complex (not well understood, unknown number of elements, unpredictable) pieces. If we treat the entire program as if it were simple, the program and our evaluations are threatened by what is unpredictable: the complexity. Some of that complexity is driven by the dynamic nature of the context in which the program operates, and some by an incomplete, emerging understanding of the program and its environment. Richard followed by describing how Logical Frameworks (or LogFrames), a tool that many believe lures us into treating programs as simpler than they really are, can be modified to take complexity into account. Bob concluded with examples of how systems models can be used to describe the complexity of different facets of a program. I believe there is something in all of this that can help evaluators think about their work (or at least how I think about my own), and I look forward to reading Bob and Richard's new book on the subject.
Developmental Evaluation
Michael Quinn Patton has a new book called Developmental Evaluation, and he made a number of presentations that drew on its core concepts. In short, if we accept that we are working in a complex, dynamic environment, what do we do about it? How do we lead organizations through the complexity in ways that build learning, effectiveness, and the capacity to adapt? These are big questions, and what I like about the book (or at least what I have read so far) is that Patton starts from a perspective that reflects the messiness of practice in the real world. I think this is another important read, and one that I hope to comment on in future posts.
The Logic of Evaluation Theory
Maybe you need to be an incurable evaluation wonk to appreciate this, but a group of mostly UCLA-based researchers and students (Marv Alkin, Tanner LeBaron Wallace, Anne Vo, Lisa Dillman, Rebecca Luskin, and Timothy Ho) have been analyzing the writings of evaluation theorists and using the results of that analysis to build logic models that describe the theories themselves. Logic models of evaluation theories!?! I didn't know I could love something this much that wasn't fattening. Making side-by-side comparisons of these visual representations is extremely illuminating; they crisply reflect the theorists' differing emphases and values. Very cool.
Off to another presentation… More in future posts…