Session reviews, audio and presentations

This page is still under construction, so do check back if the audio, session review or presentation you are seeking is not yet available.

View the full programme

All session reviews are written by student scholars who attended the UKCSJ18 with funding from UK Research and Innovation. The ABSW Board provided individual editorial guidance and feedback to the scholars on the articles as part of the scholarship.

UKCSJ18 Sessions

Overview

Keynote speech

Algorithmic bias and accountability

Trends Sessions

Digital innovation trends in media: hype versus reality

Artificial intelligence and science writing bots in newsrooms

Virtual and augmented reality applications in immersive science journalism

Fact checking in science media production

Breaking barriers and embracing innovation

Tools & Skills Sessions 

Get your mojo on with mobile journalism

Data journalism, visualization and interactives

Cooperation or conflict? Creative tension between writers and editors in crafting feature-length stories

Sensor and drone journalism: how, why and when to use them

What does success look like for a freelancer?


Overview by Bojan Ambrozic

This year’s science journalism conference took delegates on an exciting tour of the latest gadgets and technologies for pursuing stories, and demonstrated the fast pace of change over the last decade. In the not-so-distant past (10-15 years ago), journalists who wanted to record video interviews outside their studios needed a whole array of equipment and a production crew: a director, professional camera operators and sound technicians. Once the video had been recorded, it still had to be edited, which took further time and resources. If the interviews were broadcast live on TV, the journalist also needed a vehicle specially designed for live broadcasting, fitted with full radio and telecommunication equipment. Not only did such vehicles cost in the order of £100,000 or more, but every minute of a live broadcast cost several tens of pounds, a resource that only big media organisations could afford. In recent years, however, with the arrival of ever better smartphones, mobile journalism has emerged.

Mobile journalism is a new approach in which journalists use smartphones as their primary tool for recording video content. Mark Egan, a mobile journalism trainer, claimed that video recorded with the cameras on today’s premium-class smartphones is good enough for television. The same cannot yet be said of the built-in audio, but an external microphone connected to the smartphone easily solves that problem. Tripods and specially designed gimbals enable stable recording even when the subject is moving, and dedicated apps significantly reduce the time needed to edit the video. At the same time, 4G and the upcoming 5G mobile networks enable live broadcasting at virtually no cost. None of the equipment required for mobile journalism is expensive, allowing almost anyone to become a journalist.

Thanks to new, cheaper technology, journalism is also no longer confined to solid ground. Very expensive helicopter shoots have been replaced by drone video, another topic of the conference. The journalism community has embraced drone technology, but because drones have been misused in the past, journalists in almost every country must now obtain special permission from the aviation authority before flying them in public. The rules on drone use vary considerably between countries, although common European regulations are in preparation.

There was also some debate about how to better present scientific data to the general public, who on the whole neither enjoy nor easily understand graphs and spreadsheets. One solution is to present the data more visually. The conference speakers suggested using virtual and augmented reality, both of which make it easy to build 3D visual representations of complicated scientific phenomena, such as climate change or the environmental impact of a river dam on the surrounding ecosystems.

In short, the conference was an exciting one, with many new trends in the journalism industry thoroughly discussed and debated.

 


Algorithmic bias and accountability
Keynote session review by Thomas Hornigold

Governments are increasingly using machine learning to process data and make decisions about who gets jobs, who gets parole, and where emergency services head first. In theory, a computer makes decisions based purely on the data, free of human biases. But in reality, algorithms are designed by people, and draw their datasets from a biased world.

Nick Diakopoulos, Director of the Computational Journalism Lab at Northwestern University, is part of a growing number of researchers looking into algorithmic bias and accountability. They want to avoid a world where mysterious black-box algorithms make inexplicable, unfair decisions affecting people’s lives, with no one to hold accountable.

Facebook’s algorithm had the power to shift voter turnout by changing the hard-news content in users’ feeds during the 2012 election, according to research presented by Diakopoulos. With that kind of raw power, misuse could have serious consequences.

Diakopoulos’ own research exposed that Uber’s algorithm produced substantially worse expected wait times in less affluent, less white neighbourhoods. Other cases, such as the racial discrimination ProPublica found in criminal justice algorithms, are far more damning.

For journalists, Diakopoulos sees a new trend: “the Algorithms Beat”, where public-interest stories can arise from algorithmic misuse. He identifies four main categories of story in which algorithm use can become newsworthy.

The first is discrimination or unfairness, like that described by ProPublica for Broward County’s recidivism scores. Here, black defendants were routinely and unfairly rated as more likely to reoffend. The algorithm may learn from societal data and internalise the biases of that society, yet it has a very limited capacity to explain how its scores are calculated.

Then there are algorithmic errors. These might be due to incorrect data, or an unexpected input. The perceived infallibility of the algorithm, and the lack of human oversight, let mistakes persist, such as graphic videos that slip through YouTube’s content filter.

The third possibility is that the algorithm functions in a way that violates social norms, or even laws. Standing outside a closing factory advertising payday loans is socially unacceptable, but an algorithm might view it as optimal.

Finally, the algorithms may not be entirely to blame: human misuse can lead to injustices. In the Broward County risk score case, the authorities used only one version of the algorithm – the one calibrated for men – to assess risk, leading to unfair treatment of women.

Evidently, as Diakopoulos argues, there are plenty of fascinating and crucial public-interest stories for journalists to tackle on “the Algorithms Beat”. But how to obtain them?

Freedom of Information requests may allow groups to obtain raw code for an audit. This has important but limited uses. A simple coding error might be detectable, but inferring how an algorithm will actually function in the field, just by reading the code, is tricky. The behaviour of these systems can be opaque even to the designers.

An alternative approach, then, relies on “reverse-engineering” the algorithm and keeping track of its unusual behaviours.

In some cases, it might be possible to test the algorithm yourself. Set up a fake Facebook account posing as a particular person: can you change the type of advertisements you get? Here, you provide inputs to the algorithm, measure the response, and infer the behaviour. In his lectures, Diakopoulos demonstrates this approach by having every student open the Uber app at once and watching the price skyrocket.
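The logic of such an audit can be sketched in a few lines of code. The example below is only a hypothetical illustration of the input, response and inference steps: the surge-pricing function is an invented stand-in for a real service’s opaque algorithm, not Uber’s actual code, and in a real audit the responses would come from repeated queries to the live service.

```python
# A hypothetical illustration of the input -> response -> inference audit
# described above. The surge-pricing function is an invented stand-in for a
# real service's opaque algorithm; in a real audit the responses would come
# from repeated queries to the live service (for example, many phones opening
# the app at once in different places).

import random
import statistics


def opaque_surge_price(active_requests):
    """Stand-in for an algorithm we cannot inspect directly."""
    base_fare = 5.0
    surge = 1.0 + max(0, active_requests - 10) * 0.15
    return base_fare * surge * random.uniform(0.95, 1.05)


def probe(active_requests, trials=50):
    """Send the same input many times and average the observed responses."""
    return statistics.mean(opaque_surge_price(active_requests) for _ in range(trials))


# Vary one input (simultaneous demand), hold everything else fixed, and watch
# how the output moves: the relationship is inferred without seeing the code.
for demand in (5, 10, 20, 40):
    print(f"{demand:>3} simultaneous requests -> average price £{probe(demand):.2f}")
```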

There are complicating factors to covering the algorithms beat. The sheer volume of data can be an issue. Companies like Facebook or Google prize their unique algorithms and the data they’ve gathered on consumers: they are unlikely to be willing to release this proprietary information to prying eyes. This can make it harder for journalists to frame a clear narrative and hold people or corporations responsible.

Algorithms are not static: Google’s changes 600 times a year. Their behaviour might not be consistent. Finally, linking the story back to the humans who might ultimately be responsible can be difficult.

Nevertheless, as more and more decisions are automated, Diakopoulos persuasively argues that the Algorithms Beat will only grow more important.

A new resource, Algorithm Tips, keeps track of potentially newsworthy algorithms in use by the US government, and has amassed a list of more than 250 so far. As big data increasingly becomes fodder for these machine learning algorithms, data journalists must develop the skills necessary to identify and pursue these leads – and get a handle on how our world is being shaped by black boxes and lines of code.


Driving Innovation in News

Session review by Phoebe Hinton-Sheley

Artificial Intelligence

In discussing the applications of artificial intelligence to science journalism, Robert McKenzie, an editor at BBC News Labs, began with a quote from Larry Tesler: “Intelligence is whatever machines haven’t done yet.”

The future of news media, he suggested, will slowly move back towards talking to the audience, rather than the audience reading information online. The BBC's Scalable Understanding of Multilingual Media (SUMMA) project is working on using speech-to-text technology to transcribe and translate speech.

SUMMA can also assist in organising news stories/articles into topics, via tags. In this ‘story clustering’, similar stories are grouped together to make a visual interface that summarises current world news. This project is not intended to replace human journalists, however, but to make their lives easier and bring news stories to more people. For example, grouping tagged stories makes it easier to see connections between current and previous news stories so that new stories can be released more quickly. Clustering can also help in spotting ‘fake news’ in the media.
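To make the idea of story clustering concrete, the sketch below groups a handful of invented stories by the overlap of their tags. It is a toy illustration only: the titles and tags are made up, and a production system such as SUMMA works on automatically extracted entities and multilingual transcripts rather than hand-assigned tags.

```python
# A toy illustration of tag-based story clustering in the spirit described
# above. The story titles and tags are invented for this example.

stories = {
    "Heatwave breaks records across Europe": {"climate", "weather", "europe"},
    "EU ministers debate new emissions targets": {"climate", "policy", "europe"},
    "New exoplanet spotted by space telescope": {"astronomy", "space"},
    "Telescope survey doubles known exoplanet count": {"astronomy", "space", "survey"},
}


def jaccard(a, b):
    """Overlap between two tag sets, from 0 (disjoint) to 1 (identical)."""
    return len(a & b) / len(a | b)


# Greedy clustering: add each story to the first cluster whose seed story it
# resembles; otherwise start a new cluster.
clusters = []
for title, tags in stories.items():
    for cluster in clusters:
        if jaccard(tags, stories[cluster[0]]) >= 0.4:
            cluster.append(title)
            break
    else:
        clusters.append([title])

for i, cluster in enumerate(clusters, 1):
    print(f"Cluster {i}: {cluster}")
```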

In another example, the new VOICE programme is being introduced in response to the rise of smart speakers such as Amazon Alexa and Google Home. Up to 100 million smart speakers will have been sold by this Christmas, and 18% of American households already have at least one in their home. BBC News Labs has found that young people find the current news voice “off-putting”, so making use of this technology will require improvements.

The Post-Truth Era of Alternative Facts

Stuart Allan, head of the School of Journalism, Media and Culture at Cardiff University, summarised the current climate surrounding public journalism as “a post-truth era of alternative facts in which reality is irrelevant”. As an example, he mentioned the shift in US President Donald Trump’s position on climate change, from outright denial to questioning whether man-made climate change is actually happening.

Nowadays, the news can be easily distorted. Photoshop, for example, can be used to edit photographs to the point where the new image looks incredibly real. On social media, entirely fabricated stories from websites masquerading as reputable news sources are increasingly shared as if they were 100% true, spreading misinformation both online and in real life. Sometimes a lie can be more inviting than the (usually harsher) facts. This is not new: the “Great Moon Hoax” of 1835 was published in The Sun newspaper in New York alongside false drawings of “winged men” that allegedly lived on the moon. Many of the public were fooled.

Allan believes that journalism needs to rethink the “new media ecology”. Certain scientists have long believed that knowledge of science is a civic responsibility: as Lancelot Hogben said in his 1938 book Science for the Citizen, “The world cannot be run by a handful of clever people”. In the book, Hogben outlines why the general public should have a basic knowledge of the world around them, knowledge that can reassure them that world leaders are doing the right things when it comes to the earth and are not “pulling the wool over the public’s eyes”.

Allan concluded with a question to the audience about science journalism: “Whose side are WE on?” We should, he said, continue to push for our own accountability, as well as trust from the general public, when it comes to scientific news stories.

The Future of Media is Weirder Than You Think

Matt Waite, professor of practice in the College of Journalism and Mass Communications at the University of Nebraska-Lincoln, began by saying that he believes the future of media is going to be weirder than we all think. Smartphones are becoming boring because everyone has them, and media companies must become more creative. Inevitably, media will be everywhere.

What we think is coming in the future is already here, Waite said; it just isn’t evenly distributed yet. For example, you can already buy refrigerators with touchscreens built into their doors that can tell you when you need to buy more of a certain food item and let you order it online directly from the fridge. Our question, as journalists, is: how do we reach the “fridge audience”? In another of his examples, a bathroom mirror has a built-in touchscreen that can show you the weather, the top current news stories and your daily schedule, all while you are brushing your teeth in the morning.

A point made by all three speakers throughout this session is that technological advances are a good thing, but that journalism has to adapt to ensure that accurate, interesting stories reach as many people as possible.


Virtual and augmented reality applications in immersive science journalism

Presentations made available from this session

Download presentation by Joshua Hatch, 2017-18 Knight Science Journalism Fellow at MIT, Assistant Managing Editor for Data and Interactives at the Chronicle of Higher Education, and former president of the Online News Association, US.


Fact Checking

Session Review by Mihai Andrei

Call it post-truth, call it fake news, call it lies and manipulation -- the world is going through a truthfulness crisis, and we're seeing the effects all around us. From Brexit and Trump to anti-vaccination proponents and climate change denial, it's easy to find examples where lies have come out on top and swayed public opinion.

Helping truth get its pants on

Not all lies are the same, says Mevan Babakar from Full Fact, an independent UK fact-checking organisation. Speaking at a session at the 2018 UKCSJ, she explains that disinformation can vary from essentially benign to extremely dangerous. It can cause economic harm, disengage people from democracy, and even pose risks to life.

They say that a lie goes halfway around the world before the truth puts its pants on, but Babakar and her colleagues at Full Fact are trying to change that. Fact-checking is the natural enemy of disinformation, but timing is crucial: social media has accelerated how information propagates -- be it accurate or not.

A great example of this is the 2016 story claiming that Pope Francis had endorsed Donald Trump. The story was initially published on a website called WTOE 5 News, which has since shut down but which, according to CNBC, used to carry a telling disclaimer on its homepage: "Most articles on wt0e5news.com are satire or pure fantasy." The article was picked up by another popular website, Ending the Fed, where it gathered almost 1,000,000 Facebook interactions. Eventually, Pope Francis held a press conference in which he debunked the story. "I never say a word about electoral campaigns," he explained -- but it was already too late. The story about his press conference received far less traction, and presumably a large number of voters were left under the impression that the Pope had endorsed Trump.

This isn't an isolated case; in fact, Ending the Fed's four most popular stories at the time were all fake, yet together they generated 2,953,000 Facebook engagements in the run-up to US Election Day.

Unfortunately, fact checking takes time and effort, and it's nearly impossible for fact checkers to keep up with the lies. This is where machine learning enters the stage.

Technology to the rescue

Aside from "manual" fact-checking, Babakar and colleagues are developing an Artificial Intelligence (AI) algorithm that works in real time, analysing claims as long as reliable data is available. This could be used in political debates, press conferences, or even integrated into social media.

At the same event, FactMata's Lusine Mehrabyan explained that her organization uses AI in a different way: connecting experts with dubious claims, attempting to confirm or debunk them.

But both agree on one thing: this is not something that technology alone can solve.

A fundamentally human problem

Artificial intelligence can be of great help and offer some much-needed progress, but at the end of the day this is a human issue, and we can't expect technology to solve it alone.

Andreas Vlachos, who studies and teaches machine learning at Cambridge University, says that while automation can increase the productivity of fact checkers, there's no replacement for human input.

There's also no replacement for critical thinking and reading. If we really want to stave off this influx of disinformation, the panel agreed, it's up to each and every one of us to play our part and look things up -- and when it comes to that, there's one place most people turn to.

Wikipedia

Wikipedia gets 7.4 billion hits every month, but that doesn't even begin to tell the story of how important it is. As students from all around the world can attest, Wikipedia is a godsend for obtaining information.

According to Miriam Redi, a researcher at the Wikimedia Foundation, the organization is taking its reliability very seriously and is using AI to develop a better, more thorough, and easily accessible citation system. Given the huge amount of attention and trust invested in Wikipedia, that's a very important step.

In the end, the panel members were telling the same story: we live in an increasingly interconnected world, and if we want to prevent the spread of misinformation and its nefarious consequences, we need to dig deeper and choose our information sources more carefully. Fake news is a symptom, and the cure lies within all of us.

Presentations made available from this session

Download presentation by Andreas Vlachos, senior lecturer in the Natural Language and Information Processing group, Department of Computer Science and Technology, University of Cambridge.


Data Visualisation and Presentation
Session Review by Robert Lea

Science reporters face a unique problem that may not arise in other areas of writing: how does one present data in an engaging, interesting and visually pleasing way without ‘turning off’ a casual audience?

Data presentation was the subject of a panel at the recent UKCSJ2018, held at the Francis Crick Institute in London. Science journalists gathered to hear representatives of some of the largest media outlets in the UK discuss how data can be presented in a captivating way.

Why use data?

For science writers, the use of data is utterly inescapable.

Joshua Hatch, assistant managing editor of the Chronicle of Higher Education, compares displaying data in reports and articles to “showing your working out” in maths class. Hatch says the use of data allows writers to customise their story, moulding it to the interests of a particular audience.

For the Chronicle, this means that visualisations should be accessible enough for readers to pick out and explore the specific data that concerns their own area of interest from a wider set.

Understand the limitations of your data

However, data sets are often limited. It is vital that the science writer does not over-extrapolate or draw incorrect conclusions from the data they are building a narrative around.

During the Second World War, planes returning from bombing runs were examined to discover which areas of their structure needed to be reinforced. Engineers noted that the majority of bullet holes were found in the wings and tail sections. The conclusion: these were the areas that should be reinforced.

In fact, the available data was telling engineers the exact opposite. Very few planes were returning with holes in the fuselage because very few of those planes returned at all.

Get Interactive!

Interactivity is the key to personalising data sets. A presentation that allows readers to input their own data and be taken to a conclusion or result unique to them is desirable, but it requires interactivity within the visualisation.

Christine Jeavans, a data journalist with BBC News, showcased an example of this technique.

The BBC’s Olympic body match calculator allows users to input their own height and weight and find their closest match among the athletes. The visualisation was so successful that it was retweeted by an Olympic athlete after she used it and received herself back as the result.
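Under the hood, that kind of interactive boils down to a nearest-neighbour lookup. The sketch below shows the idea with a handful of invented athletes; it is not the BBC’s actual dataset or code, and a real implementation would normalise height and weight before comparing them.

```python
# A toy illustration of the nearest-neighbour idea behind "closest athlete"
# interactives. The athletes below are invented, and height and weight are
# compared without normalisation to keep the sketch short.

ATHLETES = [
    {"name": "Sprinter A", "height_cm": 180, "weight_kg": 76},
    {"name": "Gymnast B", "height_cm": 155, "weight_kg": 48},
    {"name": "Rower C", "height_cm": 196, "weight_kg": 95},
    {"name": "Marathon runner D", "height_cm": 170, "weight_kg": 58},
]


def closest_athlete(height_cm, weight_kg):
    """Return the athlete whose height and weight are nearest to the reader's."""
    return min(
        ATHLETES,
        key=lambda a: ((a["height_cm"] - height_cm) ** 2
                       + (a["weight_kg"] - weight_kg) ** 2) ** 0.5,
    )


print(closest_athlete(172, 60))  # -> Marathon runner D for this invented data
```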

Jeavans also presented a much more serious use of data visualisation and interaction.

The BBC’s ‘NHS trust tracker’, launched a year ago, allows users to trace how their local NHS trust (or any other in the country, for that matter) is doing on four key criteria.

With the help of this tool, users can quickly search through a massive dataset that might otherwise be daunting. By presenting the result as a highlighted bar on a graph, the tool shows at a glance how well the trust is performing against other trusts.

The programme is an example of how a well-designed visualisation can add context to a story.

Create a narrative

One of the key ideas that was repeated by all three speakers on the panel was how important it is that data visualisation bolsters the narrative of the article it accompanies. As Hatch puts it, writers should always bear in mind the mantra “show, don’t tell”.

Caelainn Barr, data projects editor at The Guardian, emphasised the importance of storytelling during her section of the panel. She highlighted the deeply collaborative nature of data-driven writing.

Barr strongly believes that there is data potential in every story we tell. She highlighted both collecting data “from the wild” and mining the unstructured data that already exists within an organisation as ways to find a story.

Barr referred to Philip Meyer’s work after the 1967 Detroit riots. In a method termed precision journalism, Meyer used social science techniques, such as analysing survey data, to understand the demographics of the rioters.

In Summary

The clear message of the panel was the importance of being creative with the data we use, weaving narrative around it while remembering that the data alone will not tell the story. As writers, we are the guides, directing our audience to the information that will interest them.

Data visualisation is our map, often literally. Presenting data doesn’t require a degree in programming, either. As Joshua Hatch pointed out early in the proceedings, simple tools are fine: whatever gets the job done will do. In fact, he says that sometimes presenting data can be as simple as grabbing pen and paper to produce a simple diagram or graph.


Cooperation or Conflict: Creative tension between writers and editors in crafting feature-length stories

Session Review by Alexandru Micu

The relationship between writers and their editors is rarely a simple one. For example, every writer "has his sweethearts" embedded into the text, says former Wired features editor turned freelance journalist Oliver Franklin-Wallis, one of the panelists. They're part of each author’s voice -- and, as an editor, you'll sometimes have to "kill them" to make the work truly shine.

So what can authors and editors do to prevent butting heads? Mosaic editor Chrissie Giles, freelance environmental journalist and author Fred Pearce (a former news editor for New Scientist) and Zeeya Merali, a freelance physics journalist and author, joined Franklin-Wallis in that discussion. Here’s some of their insight, from both perspectives, on the process:

Editors aren't writers

Editors rarely get training. Generally speaking, however, editors shouldn’t think like writers, says Oliver Franklin-Wallis. They’re doing both themselves and the author a massive disservice if they’re re-writing the piece behind the author’s back. If a feature is good, they shouldn't try to fix it; if it's really bad, it's not worth fixing in the first place. They should see what works, what doesn't, and talk with the author -- but they should give him or her space to actually write the thing. After all, it's the author's name on the byline.

Trust goes a long way, both ways

If you are the author, it's easy to see your editor as someone who wants to silence your voice; they are, after all, usually the first to give you feedback on the story. It's their job to try and poke holes in the text and see what doesn't work. Editing is a lot like dating, says Chrissie Giles: you have to find a person you can trust, or it's not going to work out.

That trust is a two-way street. Be supportive as an editor, but critical when it's warranted. Fred Pearce confesses he “[loves] editors who find [his] mistakes,” so don’t hold back from having difficult conversations.

It goes a long way for writers to stick by their deadlines, Giles adds. The ability to write an interesting story, being generally pleasant to work with, and punctuality are the three things she looks for in an author.

It takes soft skills -- but too soft is bad

Writing a feature can be a very personal affair. Authors put a lot of themselves in their work and become quite emotionally invested by the end. Editors should consider that when editing, but shouldn’t overdo it, cautions Zeeya Merali. If the piece is getting better, too much micromanagement is a waste of everybody's time. As long as the editor communicates properly and keeps everybody up to speed, they don't need to ask permission for every single change to the text.

"Maybe if it’s a big overhaul, we’ll need to have a big talk about it," she says. "Otherwise, it's fine by me."

Editors have bosses, too

Sometimes, even though both parties working on a feature are happy with it, the editorial hierarchy can cause trouble. Editors have their own bosses to answer to, and don't always have full liberty of decision on how a story is broached. In other cases, a story can bounce back and forth between different editors, each adding their own suggestions. It can be quite stressful for writers, who usually aren't privy to these behind-the-scenes dynamics. Generally speaking, however, if both the author and their first editor feel a feature is good, they can work together to see it published.


Sensor and drone journalism: how, why and when to use them

Download presentation by John Mills, co-founder of the Media Innovation Studio at the University of Central Lancashire's School of Journalism and Media, UK