Fake News

Google plans to show ads that educate people about disinformation techniques, following a successful experiment by Cambridge University. Google's Jigsaw unit, which tackles threats to online safety, will run the adverts on YouTube, TikTok, Twitter and Facebook.

Researchers found the videos improved people’s ability to recognise manipulative content. The adverts will be shown in Slovakia, the Czech Republic and Poland to combat fake news about Ukrainian refugees.

Google said the “exciting” findings showed how social media can actively pre-empt the spread of disinformation.

The research builds on a developing area of study called “prebunking”, which investigates how disinformation can be pre-emptively debunked by showing people how it works before they are exposed to it.

In the experiment, the ads were shown to 5.4 million people, 22,000 of whom were surveyed afterwards.

Among those who watched the explanatory videos, researchers found:

- an improvement in respondents’ ability to spot disinformation techniques
- an increased ability to discern trustworthy from untrustworthy content
- an improved ability to decide whether or not to share content

The peer-reviewed research was conducted in conjunction with Google, which owns YouTube, and will be published in the journal Science Advances.

Beth Goldberg, head of research and development at Google Jigsaw, called the findings “exciting”. “They demonstrate that we can scale prebunking far and wide, using ads as a vehicle,” she said.

Jon Roozenbeek, the lead author on the paper, told the BBC the research is about “reducing the probability someone is persuaded by misinformation”.

“Obviously you can’t predict every single example of misinformation that’s going to go viral,” he said. “But what you can do is find common patterns and tropes.

“The idea behind this study was – if we find a couple of these tropes, is it possible to make people more resilient against them, even in content they’ve never seen before?”

The scientists initially tested the videos with members of the public under controlled conditions in a lab, before showing them to millions of users on YouTube as part of a broader field study.

The prebunking campaign was run on YouTube “as it would look in the real world”, Mr Roozenbeek said. “We ran them as YouTube ads – just like an ad about shaving cream or whatever… before your video plays,” he explained.

Cambridge University said this was the first real-world field study of “inoculation theory” on a social media platform.

Professor Sander van der Linden, who co-authored the study, said the research results were sufficient to take the concept of inoculation forward and scale it up, to potentially reach “hundreds of millions” of social media users.

“Clearly it’s important for kids to learn how to do lateral reading and check the veracity of sources,” he said, “but we also need solutions that can be scaled on social media and interface with their algorithms.”

He acknowledged the scepticism around technology firms using this type of research, and the broader scepticism around industry-academia collaborations.

“But, at the end of the day, we have to face reality, in that social media companies control much of the flow of information online. So in order to protect people, we have to come up with independent, evidence-based solutions that social media companies can actually implement on their platforms.

“To me, leaving social media companies to their own devices is not going to generate the type of solutions that empower people to discern misinformation that spreads on their platforms.”