_PLoS Medicine_ has a fascinating article by John P. A. Ioannidis arguing that in an era where all research must establish its “significance” almost *a priori*, we have in fact ended up with research that is insignificant. The problem, as I understand it from my reading, is that too many scientists — and the same holds for the scholarly world more broadly, I think — are required to be productive in ways that bureaucracies can “measure.” Thus, the race is on *toward* smaller studies that are easily commoditized into publications and *away* from larger studies, which either require years to produce results or have too many collaborators for credit to be parceled out in ways that institutions like.
> There is increasing concern that most current published research findings are false. The probability that a research claim is true may depend on study power and bias, the number of other studies on the same question, and, importantly, the ratio of true to no relationships among the relationships probed in each scientific field. In this framework, a research finding is less likely to be true when the studies conducted in a field are smaller; when effect sizes are smaller; when there is a greater number and lesser preselection of tested relationships; where there is greater flexibility in designs, definitions, outcomes, and analytical modes; when there is greater financial and other interest and prejudice; and when more teams are involved in a scientific field in chase of statistical significance. Simulations show that for most study designs and settings, it is more likely for a research claim to be false than true. Moreover, for many current scientific fields, claimed research findings may often be simply accurate measures of the prevailing bias. In this essay, I discuss the implications of these problems for the conduct and interpretation of research.
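The abstract's claim can be made concrete with the paper's basic framework: the probability that a claimed finding is true (the positive predictive value, PPV) depends on the significance threshold α, the Type II error rate β (power = 1 − β), and R, the prior ratio of true to false relationships in the field. A minimal sketch, using what I believe is the paper's core formula PPV = (1 − β)R / (R + α − βR):

```python
def ppv(alpha: float, beta: float, r: float) -> float:
    """Positive predictive value of a claimed research finding.

    alpha: significance threshold (Type I error rate)
    beta:  Type II error rate (power = 1 - beta)
    r:     prior ratio of true to false relationships probed in the field
    """
    return (1 - beta) * r / (r + alpha - beta * r)

# A well-powered field (power 0.8) where 1 in 11 probed relationships
# is actually true (R = 0.1):
print(round(ppv(alpha=0.05, beta=0.2, r=0.1), 3))   # roughly 0.615

# The same field with small, underpowered studies (power 0.5):
print(round(ppv(alpha=0.05, beta=0.5, r=0.1), 3))   # drops to 0.5
```

Even before accounting for bias, lower power or a smaller prior ratio R pushes the PPV toward or below one half, which is the sense in which "most published research findings are false."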
Here’s the official citation:
Ioannidis JPA (2005) Why Most Published Research Findings Are False. PLoS Med 2(8): e124. doi:10.1371/journal.pmed.0020124