“Stop judging science by results,” said Dr. Chris Chambers, professor in the Department of Psychology at Cardiff University, at the plenary session of the UK Conference of Science Journalists 2014, while stressing the need for reproducibility in science. Referring to the scientific method as being ‘gamed’, Chambers argued that the current cycle of peer-reviewed publication needs to be replaced by a process in which a study is accepted before its results are obtained. That way, the proposed methods and collected data could be reviewed, and the results would not determine the fate of the study's publication. This could be one way of avoiding the ‘cherry-picking’ of data and manipulation of results that lead to a lack of reproducibility in science. Furthermore, he said, open data and critical reporting of science are essential for keeping science honest.

Deborah Cohen, investigations editor at the BMJ, also stressed the need to be critical when reporting science and to keep asking questions. “If in doubt, ask for the data,” she said, urging journalists to ask researchers about their original aims and to investigate why certain questions are not being asked at all. The health sciences, according to her, have many issues and should be investigated more thoroughly. She suggested checking PubMed to see what has already been published on the topic at hand. In this age of digitized publications, it is extremely easy to compare new findings with the older literature, and there is no excuse for missing the information needed to replicate earlier studies.

However, reproducing scientific studies is a growing problem. Ivan Oransky, vice president and global editorial director of MedPage Today, expressed his shock at the lack of reproducibility in science and gave an insight into flawed scientific methods and rising retractions. Pointing to articles published in leading newspapers such as The New York Times, Oransky showed how journalists have misrepresented scientific information in the past, leading to bizarre headlines.

Moreover, some scientific studies have been conducted incorrectly, for instance by overlooking confounding factors or by relying on a very small sample size. Retractions are on the rise as a result, with more than a tenfold increase in papers retracted during 2001-2010 – growth out of proportion to the increase in the number of papers published. Oransky also gave an example: of 5,000 compounds that start out on the path to market, only about five make it to clinical trials, and only one of those is likely to gain FDA approval.

A healthy discussion with the audience followed, covering many important issues. Increasing pressure on scientists to secure publications and funding was seen as a likely cause of fraud in science. In some cases, scientific work may depend on particular funding agencies or individuals, and the resulting financial interests may distort the results. There will always be uncertainty behind the absolute truth of any research, but certain measures could improve the situation. As John P. A. Ioannidis explains in his paper, looking at large-scale evidence and reducing bias could be steps in the right direction. Overall, post-publication peer review was seen as a good option for regulating published studies, and scientists should certainly aim to become self-regulators of research.