Washington Post: “…Reproducibility is a core scientific principle. A result that can’t be reproduced is not necessarily erroneous: Perhaps there were simply variables in the experiment that no one detected or accounted for. Still, science sets high standards for itself, and if experimental results can’t be reproduced, it’s hard to know what to make of them.
“The whole point of science, the way we know something, is not that I trust Isaac Newton because I think he was a great guy. The whole point is that I can do it myself,” said Brian Nosek, the founder of a start-up in Charlottesville, Va., called the Center for Open Science. “Show me the data, show me the process, show me the method, and then if I want to, I can reproduce it.”
The reproducibility issue is closely associated with a Greek researcher, John Ioannidis, who published a paper in 2005 with the startling title “Why Most Published Research Findings Are False.”
Ioannidis, now at Stanford, has started a program to help researchers improve the reliability of their experiments. He said the surge of interest in reproducibility was in part a reflection of the explosive growth of science around the world. The Internet is a factor, too: It’s easier for researchers to see what everyone else is doing….
Errors can potentially emerge from a practice called “data dredging”: When an initial hypothesis doesn’t pan out, the researcher will scan the data for something that looks like a story. The researcher will see a bump in the data and think it’s significant, but the next researcher to come along won’t see it — because the bump was a statistical fluke….
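To make the “statistical fluke” point concrete, here is a minimal simulation sketch (not from the article, with arbitrary illustrative numbers): two groups with no real difference are compared on many unplanned outcome measures, a few comparisons clear p < 0.05 purely by chance, and an independent replication does not turn up the same ones.

```python
# Illustrative sketch of "data dredging": with many unplanned comparisons on
# pure noise, something will look significant, and a fresh sample won't show it.
# The sample size and number of outcome measures below are hypothetical.
import numpy as np
from scipy import stats

N_SUBJECTS, N_OUTCOMES = 50, 40  # hypothetical study design

def significant_outcomes(seed):
    """Return the indices of outcomes that pass p < 0.05 in one simulated study."""
    rng = np.random.default_rng(seed)
    treatment = rng.normal(size=(N_SUBJECTS, N_OUTCOMES))  # no real effect anywhere
    control = rng.normal(size=(N_SUBJECTS, N_OUTCOMES))
    pvals = stats.ttest_ind(treatment, control, axis=0).pvalue
    return set(np.flatnonzero(pvals < 0.05))

original = significant_outcomes(seed=1)     # the "bump" the first researcher finds
replication = significant_outcomes(seed=2)  # an independent attempt to reproduce it

print("'Significant' outcomes in original study:", sorted(original))
print("'Significant' outcomes in replication:   ", sorted(replication))
print("Findings present in both:                ", sorted(original & replication))
```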
So far about 7,000 people are using that service, and the center has received commitments for $14 million in grants, with partners that include the National Science Foundation and the National Institutes of Health, Nosek said.
Another COS initiative will help researchers register their experiments in advance, telling the world exactly what they plan to do and what questions they will ask. This would avoid the data-dredging maneuver in which disappointed researchers go on a deep dive for something publishable.
Nosek and other reformers talk about “publication bias.” Positive results get reported, negative results ignored. Someone reading a journal article may never know about all the similar experiments that came to naught…. (More).”
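The publication-bias point can also be illustrated with a small simulation sketch (again, not from the article, and with hypothetical numbers): if only studies that reach p < 0.05 are published, the published record systematically overstates a small true effect, and readers never see the experiments that came to naught.

```python
# Illustrative sketch of publication bias: if only studies with p < 0.05 are
# published, the published record overstates a small true effect.
# The effect size, sample size, and number of studies are hypothetical.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
TRUE_EFFECT, N_PER_ARM, N_STUDIES = 0.1, 30, 1000  # hypothetical values

all_effects, published_effects = [], []
for _ in range(N_STUDIES):
    treatment = rng.normal(loc=TRUE_EFFECT, size=N_PER_ARM)
    control = rng.normal(loc=0.0, size=N_PER_ARM)
    observed = treatment.mean() - control.mean()
    all_effects.append(observed)
    if stats.ttest_ind(treatment, control).pvalue < 0.05:  # only "positive" results get published
        published_effects.append(observed)

print(f"True effect:                 {TRUE_EFFECT:.2f}")
print(f"Mean effect, all studies:    {np.mean(all_effects):.2f}")
print(f"Mean effect, published only: {np.mean(published_effects):.2f} "
      f"({len(published_effects)} of {N_STUDIES} studies)")
```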