Reproducibility in Science

Science is in crisis, they say. Negative results don't get published, while gibberish occasionally does; shaky studies are under-powered and over-reported; peer reviewers miss obvious mistakes and accept results that agree with their biases, regardless of merit; field-defining results cannot be replicated.

The current culture of 'publish or perish' doesn't help matters. A scientist's worth is judged by how many papers they publish, how many times those papers are cited, and how much money they pull in.

Scientists, science journalists and others are, however, beginning to rage against the machine. Post-publication peer review gives many more eyes the chance to dismantle papers after the fact. Pre-registration of studies and pre-data peer review may help to shift the focus away from novel results as a marker of quality.

Science is meant to correct and regulate itself, and perhaps this is just what it's doing, albeit very publicly. The final plenary of UKCSJ 2014 looked at the role science journalism can play in this process and how we fit into the movement to improve the practice of science.

"The system is being gamed," according to Chris Chambers, a cognitive neuroscientist and science writer who is actively involved in the movement to fix science. He spoke with passion and erudition about what's wrong and what we can do.

For starters, we must stop valuing science by its results. A high-quality study is not necessarily an interesting one. That, then, poses a conundrum for science journalists, whose job is to get people excited about interesting things (in an even-handed and sceptical manner, of course).

Sharing data should be mandatory. Journals should publish replications, especially those of a novel study they originally published – what Chambers referred to as the 'Pottery Barn rule' of "you break it, you buy it." Finally, he called for more investigative science journalism, arguing that no system is truly capable of regulating itself.

Next to speak was Ivan Oransky, vice president and global editorial director of MedPage Today, who, by this point, was on his third panel appearance of the day. Oransky's challenge to us was: are we comfortable being wrong? Are we really "shocked, shocked" to find a lack of replication going on in science?

During his talk, the sounds of dragging tables rumbled and screeched in the room below us – possibly a loose metaphor for the painful changes the scientific community is undergoing.

In closing, Oransky advised us to be cautious, suggesting we favour a slower form of journalism. He said the “smartest take” on a topic, rather than instant, uncritical news coverage, tends to do best in terms of viewing metrics over the long run.

In an ideal world, I would fully agree.

Finally, Deborah Cohen, whose job is to scrutinise medical studies, urged us to ask scientists why they are researching a particular topic right now. Why that question? And why, in some cases, are other questions not being asked?

At points, I got the feeling Cohen wanted to say far more than she allowed herself to. Speaking coyly about research on diabetes drugs and pancreatic cancer, she remarked: “You can see a problem or not see a problem, depending on how you dissect.” Perhaps this also pertains to the issues facing science journalism, and science itself.

Ensuring that reforms such as pre-registration gain widespread acceptance depends on other aspects of science aligning at the same time. As Chambers mentioned, moving the focus away from novel results would be a start, but novelty is what currently gets published, and publications lead to grant money and job security.

Altering such an entrenched, lumbering, almost fossilised system will be hard. It's comforting to know that passionate, experienced obsessives like the members of this panel are trying to shift the culture of science for the better.

Matthew Gwynfryn Thomas can be found at matthewgthomas.co.uk