Humans are not naturally statistical thinkers. Although we bumble through life in a probabilistic universe, a lot of our decisions are informed by our own biases, our peers and what we’ve done in the past. Time and again, researchers have demonstrated the human inability to behave rationally or weigh up risks accurately.

Science journalists are people too, and we struggle with statistics as much as anyone. But, unfortunately, we cannot escape having to grapple with the subject. Part of our job – perhaps our duty – is to understand, engage with and translate these tricky, abstract concepts.

The Statistics in Science Journalism session at UKCSJ 2014 was a head-on collision between passionate journalists and the confusing monstrosity that is statistics. Deborah Cohen, the BMJ's investigations editor, produced this session to help us understand how not to get things wrong.

Ivan Oransky, vice president of MedPage Today and co-founder of the excellent Embargo Watch and Retraction Watch blogs, led proceedings by taking us on a slide-by-slide journey through a realm of shoddy studies and equally shoddy reporting. With righteous vigour, like the George Costanza of statistics, Oransky demolished the failures in reporting. He showed us how the journalists in question could have written better, more balanced coverage just by probing a little deeper, remaining sceptical and – what you'd think would be an obvious step – reading the study carefully.

“I think it’s journalistic malpractice to not have the full study in front of you when you’re reporting,” Oransky said. Academic articles can be awful beasts but we must not fear tackling them. There are resources out there to help us along. Jennifer Raff, for one, has written a wonderful guide to reading scientific papers.

Oransky advised us to ask "dumb questions" so we don't end up with notebooks full of jargon, warning us that "if you don't ask the questions you think are dumb, you'll look dumb to your readers."

Throughout the session, we accumulated a long list of 'dos', 'don'ts' and caveats for reporting on studies. Oransky promised his presentation would soon appear on SlideShare.

Of the many important points raised, I want to touch on one: the difference between statistical significance and clinical significance. Just because a drug passes some statistical tests doesn't mean it passes muster in the real world. A large enough trial can turn up a statistically significant effect that is far too small to matter to patients – say, a drug that shifts blood pressure by a fraction of a point. We should be mindful of whether a treatment has an actual effect on people's lives.

As journalists, we are not expected to be experts, but we must ask the people who are in order to understand these difficult, wayward concepts. Oransky suggested we keep a biostatistician among our key contacts and treat them to lunch from time to time.

Cohen and Oransky covered a glut of topics, each worthy of its own session, if not its own conference. To name just a few: medicalisation and 'disease-mongering' (Oransky: "I challenge anyone in this room to not have social phobia"); clinical trial design; the cost of new treatments and why that information is so hard to find; the politics and personal biases of medical researchers; and whether journalists should debunk bad studies or simply ignore them.

This session made it clear that science journalists must not be afraid to grapple with the jargon of statistics.

Statistics is hard and strange, and it takes constant vigilance not to muck things up. But in a week when David Spiegelhalter was knighted for 'services to statistics', even the Queen thinks it's important.