Researchers at Griffith University are building the first real-time computer-generated system to detect fake news.
Fake news has the power to shape people’s perceptions and can be quickly weaponised by bad actors. It has been credited with swaying public opinion during elections, such as the Death Tax hoax that circulated on Facebook during the recent Federal Election.
On social media, fake news is often removed through a bottom-up approach: users flag content to moderators, who delete the posts. Facebook instead demotes flagged material rather than deleting it, allowing suspicious posts to remain visible and shareable but with impediments such as alerts and warnings in place.
Dr Henry Nguyen, the Griffith University researcher leading the study, says that fake news is increasing rapidly.
“Our modern society is struggling with an unprecedented amount of online fake news, which harms democracy, economics, and national security,” Dr Nguyen said.
In a report by the online activist non-profit Avaaz, political fake news was found to have received over 158 million estimated views, ‘enough to reach every reported registered voter in the US at least once.’
The technology will allow for timely alerts of fake news dissemination for the public and may help protect manufacturers from agenda-based attacks. Dr Nguyen believes this may help restore greater trust in journalism.
“Creators of fake news optimise their chance to manipulate public opinion and maximise their financial and political gains through sophisticated pollution of our information diffusion channels.
“Such attacks are driven by the advances of modern artificial intelligence and pose a new and ever-evolving cyber threat operating at the information level, which is far more advanced than traditional cybersecurity attacks at the hardware and software levels,” Nguyen said.
Another Queensland institution, the University of Queensland, is also experimenting with new ways to crack down on fake news, using a human-in-the-loop approach.
For the project, researchers are creating training sets for AI, using online data they have collected over years on the ways people perceive content, as well as how biases and stereotypes might play a role in their perception.
Associate Professor Gianluca Demartini, the project lead, says that AI is not yet developed enough to decipher fake news on its own and requires ‘high levels of human involvement.’
“Our approach combines AI’s ability to process large amounts of data with humans’ ability to understand digital content. This is a targeted solution to fake news on Facebook, given its massive scale and subjective interpretation,” Demartini wrote in a piece on The Conversation.
Demartini also wants to use the database of human perceptions to learn how to train the general public to wise up on false news using a series of online tasks or games.
Internationally, researchers are exploring how AI can help with the problem.
In a European Journal of Operational Research article to be published in December, researchers at Auburn University and the University of Massachusetts used real and fake news stories to build algorithms able to detect and filter out fake news; they called their model FEND (FakE News Detection).
The proposal specified two phases for discovering fake news.
First, trustworthy news was categorised into clusters built around common news topics.
Second, fake news would be detected if a story was an outlier within its nearest cluster, or if its similarity to that cluster dropped below a specified threshold.
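The second phase can be sketched in a few lines of Python. This is an illustrative toy, not the researchers’ actual FEND implementation: the topic clusters, example headlines, bag-of-words representation, and the 0.3 threshold are all assumptions chosen for the demonstration.

```python
import math
from collections import Counter

def vectorise(text):
    """Bag-of-words term counts for a document."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[t] * b[t] for t in a if t in b)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def centroid(vectors):
    """Average the count vectors of one trusted-news cluster."""
    total = Counter()
    for v in vectors:
        total.update(v)
    return Counter({t: c / len(vectors) for t, c in total.items()})

def is_suspect(story, centroids, threshold=0.3):
    """Phase two: flag a story whose best similarity to any
    trusted-topic centroid falls below the threshold."""
    best = max(cosine(vectorise(story), c) for c in centroids)
    return best < threshold

# Phase one (simplified): trusted stories already grouped by topic.
trusted_clusters = [
    ["reserve bank holds interest rates steady",
     "interest rates unchanged says reserve bank"],
    ["flood warnings issued for northern rivers",
     "northern rivers brace for major flooding"],
]
centroids = [centroid([vectorise(s) for s in cluster])
             for cluster in trusted_clusters]

print(is_suspect("reserve bank holds rates steady this month", centroids))
print(is_suspect("miracle pill cures all disease overnight", centroids))
```

A story close to a trusted topic passes (the first prints False), while one resembling no trusted cluster is flagged (the second prints True). A production system would use richer text features than raw word counts, but the outlier-and-threshold logic is the same.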
This was a text-analytics-driven approach. Other past approaches used sentiment and syntax analysis, which the researchers deemed inadequate for the massive amounts of data online, as those techniques work better with specialised data types.
In the study’s conclusion, the researchers call for a real-time pre-processing model, which their proposal lacks, believing that cutting down fake news as it emerges will make the process more effective.
According to the study, the FEND approach achieved 92.49% classification accuracy and 94.16% recall.
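For readers unfamiliar with the two metrics: accuracy is the share of all stories classified correctly, while recall is the share of actual fake stories the detector catches. The small sketch below computes both from a confusion matrix; the counts are invented for illustration and are not taken from the paper.

```python
def accuracy(tp, tn, fp, fn):
    """Fraction of all stories classified correctly."""
    return (tp + tn) / (tp + tn + fp + fn)

def recall(tp, fn):
    """Fraction of actual fake stories the detector caught."""
    return tp / (tp + fn)

# Hypothetical counts: of 1000 stories, 400 are fake.
tp, fn = 377, 23      # fake stories caught / missed
fp = 52               # real stories wrongly flagged
tn = 600 - fp         # real stories correctly passed

print(round(accuracy(tp, tn, fp, fn), 4))  # 0.925
print(round(recall(tp, fn), 4))            # 0.9425
```

Note that a high accuracy alone can hide missed fakes when real stories vastly outnumber fake ones, which is why the study reports recall alongside it.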