How one psychologist is tackling human biases in science.
By Philip Ball Illustration by Carmen Segovia May 14, 2015
Sometimes it seems surprising that science functions at all. In 2005, medical science was shaken by a paper with the provocative title “Why most published research findings are false.”1 Written by John Ioannidis, a professor of medicine at Stanford University, it didn’t actually show that any particular result was wrong. Instead, it showed that the statistics of reported positive findings were not consistent with how often one should expect to find them. As Ioannidis concluded more recently, “many published research findings are false or exaggerated, and an estimated 85 percent of research resources are wasted.”2
It’s likely that some researchers are consciously cherry-picking data to get their work published. And some of the problems surely lie with journal publication policies. But the problems of false findings often begin with researchers unwittingly fooling themselves: they fall prey to cognitive biases, common modes of thinking that lure us toward wrong but convenient or attractive conclusions. “Seeing the reproducibility rates in psychology and other empirical science, we can safely say that something is not working out the way it should,” says Susann Fiedler, a behavioral economist at the Max Planck Institute for Research on Collective Goods in Bonn, Germany. “Cognitive biases might be one reason for that.”
One of the reasons I posted this was that Ball referenced Retraction Watch, and more importantly this:
Oransky believes that, while all of the incentives in science reinforce confirmation biases, the exigencies of publication are among the most problematic. “To get tenure, grants, and recognition, scientists need to publish frequently in major journals,” he says. “That encourages positive and ‘breakthrough’ findings, since the latter are what earn citations and impact factor. So it’s not terribly surprising that scientists fool themselves into seeing perfect groundbreaking results among their experimental findings.”
Nosek agrees, saying one of the strongest distorting influences is the reward system that confers kudos, tenure, and funding. “To advance my career I need to get published as frequently as possible in the highest-profile publications possible. That means I must produce articles that are more likely to get published.” These, he says, are ones that report positive results (“I have discovered …”, not “I have disproved …”), original results (never “We confirm previous findings that …”), and clean results (“We show that …”, not “It is not clear how to interpret these results”). But “most of what happens in the lab doesn’t look like that,” says Nosek—instead, it’s mush. “How do I get from mush to beautiful results?” he asks. “I could be patient, or get lucky—or I could take the easiest way, making often unconscious decisions about which data I select and how I analyze them, so that a clean story emerges. But in that case, I am sure to be biased in my reasoning.”
And Nosek was inspired to do something about it:
Nosek has since devoted himself to making science work better.10 He is convinced that the process and progress of science would be smoothed by bringing these biases to light—which means making research more transparent in its methods, assumptions, and interpretations. “Fighting these issues isn’t easy, because they are cultural challenges—and no one person can change a culture,” he says. “So I started with the issue that I could control: the power of my research designs.”
Surprisingly, Nosek thinks that one of the most effective solutions to cognitive bias in science could come from the discipline that has weathered some of the heaviest criticism recently for its error-prone and self-deluding ways: pharmacology. It is precisely because these problems are so manifest in the pharmaceutical industry that this community is, in Nosek’s view, way ahead of the rest of science in dealing with them. For example, because of the known tendency of drug companies and their collaborators to report positive results of trials and to soft-pedal negative ones, it is now a legal requirement in the United States for all clinical trials to be entered in a registry before they begin. This obliges the researchers to report the results, whatever they show.
Nosek has instituted a similar pre-registration scheme for research called the Open Science Framework (OSF). He had planned it for many years, but it really took off when former software developer Jeff Spies joined his lab in 2009-2010 and took it on as a dissertation project. “Lots of people got involved and it became a much bigger thing pretty quickly,” says Nosek. “We started a website for the OSF, and a community—and funders—gathered around it.” Nosek and Spies cofounded the Center for Open Science in Charlottesville in 2013, which now administers the OSF and is able to offer its services for free.
The idea, says Nosek, is that researchers “write down in advance what their study is for and what they think will happen.” Then when they do their experiments, they agree to be bound to analyzing the results strictly within the confines of that original plan. It sounds utterly elementary, like the kind of thing we teach children about how to do science. And indeed it is—but it is rarely what happens. Instead, as Fiedler testifies, the analysis gets made on the basis of all kinds of unstated and usually unconscious assumptions about what would or wouldn’t be seen. Nosek says that researchers who have used the OSF have often been amazed at how, by the time they come to look at their results, the project has diverged from the original aims they’d stated.
There are plenty of people here whom I do agree with on many points. But the dominant viewpoint has systematically shut down all discussion among those who agree on things contrary to a certain point of view. This tribe has become very intolerant of any discussion, no matter how amicable between the parties, that doesn't go along with that certain point of view.
You and others are trying to turn this place into FB, where only those who agree with certain views are allowed to participate. You wish there was an ignore button or a way to delete posts that disagree with yours. Just wishing for that is a demonstration of intolerance. But there isn't. This is like meat space, where just because there may be disagreement on matters, it doesn't end up in ad hominem shouting matches. Civilized people know how to peacefully disagree, else there would be anarchy.
What is the difference between an intolerant point of view and ISIS? Convert or die is what you are asking. Fall in line or leave, we don't like your kind of thinking, regardless of its merits. We disagree on things, but I'm not telling you to shut up or choke on pixie dust.
Oh and yes I had a good holiday, thanks for asking. Hope yours was enjoyable as well.
peace, out ...
No, that's really not how it is. It may not jibe with the way you want it to be, but it's not the dissenting opinion that puts people off, it's your insistence on being an asshat. Plenty of people here disagree about plenty of topics. A couple are heated enough that a lot won't participate. Most of the ground has been trod so many times it's probably not worth going over again. But people don't hate you because you're a Republican (sure you're not: tea party, Randian, whatever), people hate you because you are an asshat.
I'm sure you think I'm being an asshat as well, and I am. It's because you won't take a hint. Lots of people will criticize me for jumping on you now; I can feel the PMs headed in. But JFC, you can be insufferable, and when you are like this it takes a big stick to whack you back to tolerable. I'm not asking you to leave, just to leave your lunacy. No, I don't have moderator points, but I do have the right to call you on it when I see you doing it over and over again. You don't have to listen, but you might be taken a little more seriously if you did.
I've said my piece, I'll go on my way now. But expect to hear it again in a couple months if/when you rile people up again.