How did a fake study make it into Science magazine?

A study published in Science magazine suggested that attitudes toward same-sex marriage were more likely to be changed by face-to-face conversations with gay canvassers than with straight ones. But now that study has been retracted, spurring questions about how scientific research is published. Hari Sreenivasan talks to Ivan Oransky, co-founder of Retraction Watch, who broke the story.



    Next, we explore questions about how scientific findings are published and verified, and whether allegations of fraud involving a top science journal are damaging credibility with the wider public.

    Yesterday, "Science" magazine retracted a study published in December that found people's attitudes toward same-sex marriage were more likely to be changed by face-to-face conversations with gay canvassers than with straight ones. It was a study that got quite a bit of pickup in the media. Now that it's been retracted by a leading journal, it poses questions for the scientific establishment.

    Again to Hari, who has more on the story.


  • HARI SREENIVASAN:

    The study's lead author asked for the retraction after the original findings could not be duplicated, and his co-author, a graduate student, was accused of misrepresenting how the work was done.

    This is the latest retraction in a major journal. In recent years, there have been others involving cloning and stem cell research.

    Ivan Oransky is a journalist, as well as a medical doctor, who broke this story. He is co-founder of the blog Retraction Watch and global editorial director of MedPage Today, an online medical news service for physicians and other medical professionals.

    So, first of all, this particular case, how did we get here? What went wrong?

  • IVAN ORANSKY, Co-Founder, Retraction Watch:

    So, what seems to have gone wrong is that only some of this study — or at least we can only see that some of this study actually happened.

    Lots of pressure on researchers. We don't exactly know what happened here in the sort of early days, but part of the study, which was that gay people went to people's houses and tried to convince them that gay marriage was a good thing, that they should agree with it, that part seems to have happened.

    What's a little unclear is whether surveying them afterwards to tell whether you actually changed their minds, which in this case was a pretty important part of the study, whether that actually happened. And so you fast-forward a little bit. The paper gets published in a really big journal, as you said, in "Science," a major medical — excuse me — a major science journal.

    And that happens in December, and then a couple of months later, some grad students at Berkeley decide, oh, we want to do the next set of experiments. We think this is pretty cool.


  • HARI SREENIVASAN:

    And this is how science works.


  • IVAN ORANSKY:

    That's supposed to be how science works.

    So, they start looking at it, and something doesn't look right to them. They start asking a lot of questions, which, again, is supposed to be the way science works: asking the lead author, hey, what's actually happening here? And no one can find the data.

    Some admissions were made about what had happened, what hadn't happened, and how it had been misrepresented. And very quickly, which I think is an important point here, one of the authors said, we should retract this. The journal said, we're going to put a big stamp on this saying expression of concern.

    Within a week, and that just happened this week, the paper's gone from the record. It's retracted.


  • HARI SREENIVASAN:

    Yes, but people are going to look at this and say, but there is supposed to be a system of checks and balances before it gets to the journal, and at the journal, the peer review process. We have esteemed colleagues. There are lots of smart people who could have poked a hole in this before you got to it.


  • IVAN ORANSKY:

    There are lots of smart people who can poke a hole in it if they sort of take the opportunity.

    Scientists are under a lot of pressure. You and I, as journalists, we're under a lot of pressure. We know what this is like. And, quite frankly, peer review is something you do, I wouldn't say exactly in your spare time, but you're not paid for it. And so in order to have found what was wrong here, you really would have had to have looked at the original data.

    And what most people don't realize is that this sort of Good Housekeeping Seal of Approval that journalists would like you to think peer review is, this vaunted peer review system, it's not really a Good Housekeeping Seal of Approval. There are a lot of holes. You have got to look at the original data.

    And that speaks to how science is supposed to work, because you shouldn't take any particular study, in this case a study that showed something really surprising, new, and different from what other studies had shown. You shouldn't take it as the answer, even if it turns out to be true.


  • HARI SREENIVASAN:

    So there are different causes for why certain studies over the years have fallen through the cracks. Right? Sometimes, it's malicious intent, someone actively trying to doctor the data. Other times, it's careless error, et cetera.

    Are these fabrications more common now or are we in this Internet era able to detect them faster?


  • IVAN ORANSKY:

    It's very clear that we're able to detect these, whether they're fraud or just sloppiness or honest error, much more quickly.

    Here we are, we're able to look at all these papers online. We have plagiarism detection software. Plagiarism is a big reason for retraction in science, as it is in journalism. And about two-thirds of the time, retractions are due to fraud, something that would be considered misconduct.

    But it's very clear that in the last 15 years or so, the number of retractions has gone up tenfold. There were 10 times as many retractions in 2010 as in 2001. And, again, it's because we're better at finding it.

    Whether there's also more pressure on scientists and, therefore, more fraud is a bit of an open question. But it's also important to keep in mind that these 400, and maybe now it's 500 or 600, retractions a year are out of something like two million or three million papers. So let's not say, oh, well, everything is fraudulent just because this is on the rise.


  • HARI SREENIVASAN:

    Right. This is something that scientists now have to become more vigilant about.

    But also who gets hurt by this? In this specific study, it's kind of a social science experiment. But there are some ethical lines that have been crossed.


  • IVAN ORANSKY:

    So, in this particular case, I think one sort of person — it's not a person, but a group — that might take a hit is science itself.

    Here we are talking about this study, what went wrong, why did this get into such a major journal. My understanding is that some of the findings here, or at least the idea, were used as part of the canvassing for the referendum in Ireland that just happened.

    So this actually had some real-life ramifications. Maybe it was not a cancer trial or something like that. But, often, some of these studies actually do involve real people who have terrible diseases like cancer. And they all are — I shouldn't say all, but many of them involve federal funding.

    So you and I are paying for these studies, and then they turn out to be fraudulent. Well, that's not a great thing.


  • HARI SREENIVASAN:

    All right, Ivan Oransky, editor of Retraction Watch, thanks so much for joining us.


  • IVAN ORANSKY:

    Thanks for having me.
