I recently participated in a panel discussion at the annual meeting of the California Postsecondary Education Commission (CPEC) for recipients of Improving Teacher Quality Grants. We were discussing the practical challenges of conducting what has been dubbed scientifically-based research (SBR). While there is some debate over what types of research should fall under this heading, SBR almost always includes randomized trials (experiments) and quasi-experiments (close approximations to experiments) that are used to establish whether a program made a difference.
SBR is a hot topic because it has found favor with a number of influential funding organizations. Perhaps the most famous example is the US Department of Education, which vigorously advocates SBR and at times has made it a requirement for funding. The push for SBR is part of a larger, longer-term trend in which funders have been seeking greater certainty about the social utility of programs they fund.
However, SBR is not the only way to evaluate whether a program made a difference, and not all evaluations set out to answer that question (as is the case with needs assessment and formative evaluation). At the same time, not all evaluators can or want to conduct randomized trials. Consequently, the push for SBR has sparked considerable debate in the evaluation community.