Why Fake News Goes Viral: Science Explains
People's limited attention spans, combined with the sheer overload of information on social media, may make fake news and hoaxes go viral, according to a new study.
Understanding why and how fake news spreads may one day help researchers develop tools to combat its spread, the researchers said.
For example, the new research points toward curbing the use of social bots, computer programs that automatically flood social media with low-quality messages such as tweets, as a way to prevent the spread of misinformation, the researchers said.
However, "Detecting social bots is a very challenging task," said study co-author Filippo Menczer, a professor of informatics and computer science at the Indiana University School of Informatics and Computing.
Previous research has shown that some of people's cognitive processes may help perpetuate the spread of misinformation such as fake news and hoaxes, according to the study, published today (June 26) in the journal Nature Human Behaviour. For example, people tend to show "confirmation bias": they pay attention to and share only the information that aligns with their beliefs, while discarding information that contradicts them. Studies show that people do this even when the information confirming their beliefs is false.
In the new study, the researchers looked at some other potential mechanisms that may be at play in spreading misinformation. The researchers developed a computer model of meme sharing to see how individual attention and the information load that social media users are exposed to affect the popularity of low-quality versus high-quality memes. The researchers considered memes to be of higher quality if they were more original, had beautiful photos or made a claim that was true.
The investigators found that low- and high-quality memes were equally likely to be shared, because social media users' attention is finite and people are simply too overloaded with information to discriminate between the two. This finding helps explain why fake news is still likely to spread widely despite its poor quality, the researchers said.
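To make that mechanism concrete, the toy simulation below sketches a meme-sharing model in the same spirit. It is an illustrative guess at the setup, not the authors' actual code, and every parameter (population size, feed length, injection rate, the random follower graph) is an assumption chosen for readability.

```python
import random

# Illustrative sketch only, not the published model. Agents hold a short feed
# (limited attention); new memes arrive often (information load); resharing
# favors quality, but crowded feeds push memes out before quality can matter.

NUM_AGENTS = 200      # assumed toy population
FEED_LENGTH = 10      # finite attention: memes an agent can keep in view
NEW_MEME_PROB = 0.5   # information load: chance a step injects a new meme
STEPS = 20_000

agents = [[] for _ in range(NUM_AGENTS)]  # each agent's feed, newest first
quality = {}                              # meme id -> quality score in [0, 1]
share_counts = {}                         # meme id -> times shared
next_id = 0

def push(feed, meme):
    """Add a meme to a feed, dropping the oldest one when attention runs out."""
    feed.insert(0, meme)
    if len(feed) > FEED_LENGTH:
        feed.pop()

for _ in range(STEPS):
    sender = random.randrange(NUM_AGENTS)
    if random.random() < NEW_MEME_PROB or not agents[sender]:
        meme = next_id            # create a brand-new meme
        next_id += 1
        quality[meme] = random.random()
        share_counts[meme] = 0
    else:
        # Reshare something from the sender's own feed, weighted by quality.
        feed = agents[sender]
        weights = [quality[m] + 0.1 for m in feed]
        meme = random.choices(feed, weights=weights)[0]
    share_counts[meme] += 1
    # Deliver the meme to a handful of random followers (assumed random graph).
    for receiver in random.sample(range(NUM_AGENTS), 5):
        push(agents[receiver], meme)

# Compare how widely low- vs. high-quality memes ended up being shared.
low = [c for m, c in share_counts.items() if quality[m] < 0.5]
high = [c for m, c in share_counts.items() if quality[m] >= 0.5]
print("average shares, low quality: ", sum(low) / len(low))
print("average shares, high quality:", sum(high) / len(high))
```

Shrinking FEED_LENGTH or raising NEW_MEME_PROB simulates scarcer attention and heavier information load, the two ingredients the study identifies; the question the model asks is how much of the quality advantage survives those changes.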
One way to help people better discriminate between low- and high-quality information on social media would be to reduce the information load they are exposed to, the researchers said. One key way to do so could involve decreasing the volume of posts created by social bots, which amplify information that is often false and misleading, Menczer said.
Social bots can act as followers on social media sites like Twitter, or they can be run as fake social media accounts that have their own followers. The bots can imitate human behavior online and generate their own online personas that can in turn influence real, human users of social media.
"Huge numbers" of these bots can be managed via special software, Menczer said.
"If social media platforms were able to detect and suspend deceptive social bots … there would be less low-quality information in the system to crowd out high-quality information," he told Live Science.
However, both detecting and suspending such bots are challenging, he said. Although machine-learning systems for detecting social bots exist, these systems are not always accurate. Social media platforms have to be conservative when using such systems, because the cost of a false-positive error (in other words, suspending a legitimate account) is generally much higher than the cost of missing a bot, Menczer said.
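The cost asymmetry Menczer describes can be framed as a simple expected-cost decision rule. The numbers below are made up for illustration and do not reflect any platform's real policy; the point is only that when a wrongful suspension is assumed to cost ten times as much as a missed bot, the classifier must be roughly 91 percent sure before acting.

```python
# Hedged sketch of cost-sensitive suspension, with invented cost values.
COST_FALSE_POSITIVE = 10.0  # assumed cost of suspending a legitimate account
COST_FALSE_NEGATIVE = 1.0   # assumed cost of letting a bot keep posting

def should_suspend(p_bot: float) -> bool:
    """Suspend only when acting is cheaper in expectation than waiting.

    Expected cost of suspending:     (1 - p_bot) * COST_FALSE_POSITIVE
    Expected cost of not suspending: p_bot * COST_FALSE_NEGATIVE
    Break-even threshold: COST_FALSE_POSITIVE / (COST_FALSE_POSITIVE +
    COST_FALSE_NEGATIVE), about 0.91 with the values above.
    """
    return (1 - p_bot) * COST_FALSE_POSITIVE < p_bot * COST_FALSE_NEGATIVE

print(should_suspend(0.85))  # False: 85% confidence is not enough to risk it
print(should_suspend(0.95))  # True
```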
More research is needed to design faster and more accurate social bot detection systems, he said.
Originally published on Live Science.