Interesting discussion at "Crooked Timber" about a recent study that confirms what every professional working in the field knows, but few are willing to talk about -- political "science" journals are only willing to publish studies that yield statistically significant results. This creates a bias among researchers, who report only publishable studies -- in other words, those that yield statistically significant results. As any honest researcher [and there are still a few of them out there] knows, negative findings are important, if only to provide context for the significant results, and should be reported; but unless a major issue is at stake, they usually aren't. The article also notes a similar bias in bio-medical research.
Here's why this bias is important to note:
(An important fact about statistics is that everything has a distribution, including expected results from multiple clinical trials: if you try often enough you will get the result you want just by chance.) If there are enough published trials, techniques like meta-analysis can help reveal the number of “missing” trials—the ones that were done but not published, or just not done at all. [emphasis mine]

Read the whole thing here.
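The "often enough" point is easy to make concrete with a small simulation (Python here is my own sketch, nothing from the study): if each trial of a treatment with no real effect has a 5% chance of coming out "significant," the chance that at least one trial in a batch does grows quickly with the number of trials.

```python
import random

random.seed(0)

def significant_by_chance(num_trials, alpha=0.05, reps=10000):
    """Estimate the probability that at least one of `num_trials`
    null-effect trials comes out 'significant' at level alpha.
    Under the null, each trial's p-value is uniform on [0, 1]."""
    hits = 0
    for _ in range(reps):
        if any(random.random() < alpha for _ in range(num_trials)):
            hits += 1
    return hits / reps

for k in (1, 5, 20):
    # Analytically this is 1 - 0.95**k: about 0.05, 0.23, and 0.64.
    print(k, round(significant_by_chance(k), 3))
```

With twenty unpublished null trials in the file drawer, a "significant" published result is closer to a coin flip than to a discovery -- which is exactly why meta-analysis looks for the missing trials.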
As I have noted before, in "science" publication is the key to career advancement, securing funding, etc. So a selective bias on the part of journals influences the kind of studies that are undertaken and thus biases the entire "scientific" enterprise.
UPDATE: One of the readers' comments points to another problem -- the study shows a disproportionate number of reported results just at the 5% level.
This strongly suggests that there has been model-dredging; that a lot of borderline models have been effed around with until their standard errors pass a 5% t-test. This undermines the validity of the t-test, because it makes it clear that the t-ratio isn’t actually draw[n] from an underlying t-distribution – it’s a draw from a distribution of numbers that has a lower bound of 1.96, because the model is messed around with until a specification that gives “significant” results is found.

Ah yes, "massaging" the data until it yields a statistically significant result -- I remember it well from my days in social science research. It was a pervasive problem then and apparently still is. I like that term, "model-dredging".
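The commenter's point about the lower bound of 1.96 can be seen in a toy simulation (again my own sketch, not the commenter's): if a researcher keeps trying specifications on pure noise until one clears the 5% cutoff, every reported t-ratio lands above 1.96, and most pile up just above it.

```python
import random
import statistics

random.seed(1)

def dredge(max_specs=50, n=30):
    """Keep re-drawing a 'specification' (modeled crudely here as a
    fresh null sample of size n) until the t-statistic clears 1.96,
    or give up. Returns the t-ratio that would be reported, or None."""
    for _ in range(max_specs):
        sample = [random.gauss(0, 1) for _ in range(n)]
        mean = statistics.mean(sample)
        se = statistics.stdev(sample) / n ** 0.5
        t = mean / se
        if abs(t) > 1.96:
            return abs(t)
    return None

reported = [t for t in (dredge() for _ in range(1000)) if t is not None]
# The smallest reported t-ratio is just over the cutoff -- the
# distribution is truncated at 1.96, exactly as the commenter says.
print(round(min(reported), 3), len(reported))
```

Note that most of the 1000 dredging attempts "succeed" -- fifty tries at a roughly 5% chance each is nearly a sure thing -- and none of the reported results reflects a real effect.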
Here's a long, but useful, discussion of the problem of reconfiguring tests until they yield statistically significant results. Warning: some familiarity with basic statistical methods is required, even for the "discussion for laymen" section.
Here's a really lovely discussion from PLoS of the bias problem in medical (and other scientific) research. The title is "Why Most Published Research Findings Are False." Here's the key finding from the summary:
Simulations show that for most study designs and settings, it is more likely for a research claim to be false than true. Moreover, for many current scientific fields, claimed research findings may often be simply accurate measures of the prevailing bias.

Read the whole thing here.
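The "more likely false than true" claim follows from simple arithmetic on the positive predictive value -- the probability that a "significant" finding reflects a real relationship -- as the PLoS paper lays out. A worked version (the 1-in-20 prior is my illustrative number, not the paper's):

```python
def ppv(prior_odds, power=0.8, alpha=0.05):
    """Positive predictive value of a 'significant' finding:
    the share of claimed relationships that are actually true.
    prior_odds is the ratio of true to null relationships tested."""
    true_positives = power * prior_odds   # real effects, detected
    false_positives = alpha * 1.0         # null effects, 'significant'
    return true_positives / (true_positives + false_positives)

# If only 1 in 20 hypotheses a field tests is actually true
# (prior odds 1:19), even with decent power the PPV is below 50%:
print(round(ppv(1 / 19), 2))
```

So in a field where researchers mostly chase long shots, the typical published "significant" result is more likely false than true -- before even counting the model-dredging and publication bias discussed above, which push the real number lower still.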