Leaving aside the rampant fraud, incompetence, and bias that have infested the scientific enterprise, what about the good science -- you know, the stuff performed by competent, reasonably objective professionals and verified by repeated confirmatory observations? Well, it turns out that can't be trusted either.
Writing in the New Yorker, Jonah Lehrer argues:
The test of replicability, as it’s known, is the foundation of modern research. It’s a safeguard for the creep of subjectivity. But now all sorts of well-established, multiply confirmed findings have started to look increasingly uncertain. It’s as if our facts are losing their truth. This phenomenon doesn’t yet have an official name, but it’s occurring across a wide range of fields, from psychology to ecology.
....
The most likely explanation for the decline is an obvious one: regression to the mean. Yet the effect’s ubiquity seems to violate the laws of statistics.... Biologist Michael Jennions argues that the decline effect is largely a product of publication bias. Biologist Richard Palmer suspects that an equally significant issue is the selective reporting of results—that is, the subtle omissions and unconscious misperceptions, as researchers struggle to make sense of their results.... In the late nineteen-nineties, neuroscientist John Crabbe investigated the impact of unknown chance events on the test of replicability. The disturbing implication of his study is that a lot of extraordinary scientific data is nothing but noise. This suggests that the decline effect is actually a decline of illusion. Many scientific theories continue to be considered true even after failing numerous experimental tests. The decline effect is troubling because it reminds us how difficult it is to prove anything.
Read the whole thing here.
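To make those explanations concrete, here is a minimal simulation sketch (my own illustration, not anything from Lehrer's piece): a crowd of labs chases a modest real effect with noisy measurements, only the results that clear an impressive-looking threshold make it into print, and the follow-up replications of those published findings come in lower on average -- a "decline" manufactured entirely by publication bias and regression to the mean. The effect size, noise level, and threshold are all assumed numbers chosen for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

true_effect = 0.2      # modest real effect (assumed for illustration)
noise_sd = 0.5         # measurement noise in each study (assumed)
n_labs = 10_000        # many independent initial studies
threshold = 1.0        # only "impressive" estimates get published first (assumed)

# Each lab's initial estimate is the true effect plus noise.
initial = true_effect + rng.normal(0.0, noise_sd, n_labs)

# Publication bias: suppose only estimates above the threshold appear in print.
published = initial[initial > threshold]

# Replications of the published findings draw fresh noise around the same true effect.
replication = true_effect + rng.normal(0.0, noise_sd, published.size)

print(f"mean published initial estimate: {published.mean():.2f}")
print(f"mean replication estimate:       {replication.mean():.2f}")
print(f"true effect:                     {true_effect:.2f}")
# The published estimates are inflated; the replications regress toward the true
# effect, which looks like a "decline" even though nothing real has changed.
```

Nothing in the sketch requires fraud or incompetence; the filter on what gets published is enough to produce the pattern.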
And Lehrer expands on this argument in Wired [here]:
For many scientists, the effect is especially troubling because of what it exposes about the scientific process. If replication is what separates the rigor of science from the squishiness of pseudoscience, where do we put all these rigorously validated findings that can no longer be proved? Which results should we believe? Francis Bacon, the early-modern philosopher and pioneer of the scientific method, once declared that experiments were essential, because they allowed us to “put nature to the question.” But it appears that nature often gives us different answers.
This is important stuff because it calls into question the bedrock of modern scientism -- the "scientific method" itself. It is upon this rock that claims to scientific authority are grounded, but more and more the rock is beginning to look like a pile of shifting sand.
UPDATE:
Games With Words offers an extensive deconstruction of Lehrer's article that discusses the sources of error -- publication bias; confirmation bias; a tendency to fudge figures; careerist incentives to publish false data; the fact that most scientists are piss-poor statisticians; etc. In some ways it is an illuminating piece, but in the end it doesn't refute Lehrer's basic point -- that much of what is reported as scientific truth turns out, on close examination, to be questionable at best.
Read the article here.
The Games With Words article notes that, with regard to error in scientific publications, we are operating on faith. We assume that the proportion of unreplicable results is low, because if it is high, scientists might as well "question the meaning of [their] own existence". Well, David Freedman, writing in the Atlantic, notes that "Much of what medical researchers conclude in their studies is misleading, exaggerated, or flat-out wrong," and wonders "why are doctors—to a striking extent—still drawing upon misinformation in their everyday practice?"
Read the whole disturbing article here.
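To get a feel for how a claim that strong could even be possible, here is a back-of-the-envelope sketch (all the numbers are my own assumptions, not figures from Freedman's article): if only a small fraction of the hypotheses being tested are actually true, then even honest studies run at conventional significance levels will churn out nearly as many false positives as true ones.

```python
# Back-of-the-envelope arithmetic: how many "positive" findings are actually true?
# Every number below is an assumption chosen for illustration, not data from the article.

prior_true = 0.10   # fraction of tested hypotheses that are really true (assumed)
power = 0.50        # chance a real effect is detected (assumed)
alpha = 0.05        # chance a null effect still yields a "significant" result

true_positives = prior_true * power          # 0.05 of all studies
false_positives = (1 - prior_true) * alpha   # 0.045 of all studies

share_correct = true_positives / (true_positives + false_positives)
print(f"share of positive findings that are real: {share_correct:.0%}")
# Roughly half under these assumptions -- and selective reporting, fudged figures,
# and weak statistics push that number lower still.
```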
And, check out Daniel Engber's piece in Slate on the inadequacy of the peer review process [here].