#Infohygiene: Be wary of language that makes you feel emotional. It is designed to go viral, not to inform

June 20th, 2020 by

For misinformation to have an impact, it needs to go viral. So it is not surprising that misinformation shares a lot with clickbait and often aims at nothing more than just that: baiting you to click.

Several studies have shown that audiences preferentially select and expose themselves to emotionally arousing stories. There is no doubt that emotionally evocative content is more ‘viral’ than neutral content: the more anger or anxiety it evokes, the faster and more broadly it spreads. There is also considerable evidence that emotions affect memory. Emotional memories are vivid and lasting but not necessarily accurate, and under some conditions emotion even increases people’s susceptibility to false memories.

More directly on misinformation, a study by Brian Weeks of the University of Michigan demonstrated that anger encourages partisan, motivated evaluation of uncorrected misinformation, resulting in beliefs consistent with the supported political party, while anxiety at times promotes initial beliefs based less on partisanship and more on the information environment. Another study, by researchers at Northeastern University, analysed 5,303 posts with 2,614,374 user comments from popular social media platforms and found that false posts attracted more misinformation-awareness signals and more extensive emoji and swear-word usage. Misinformation often uses inflammatory and sensational language to alter people’s emotions.

So, what can one do with this knowledge? A good approach is to take a moment to think when you encounter highly emotive language. An even better way may be to use EUNOMIA’s information cascade functionality, which visualises the sentiment expressed by all the posts that contain the same information. Highly negative language is not by itself a sign of mal-intent, but it can be one when combined with other indicators.
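To make the idea of “pausing on emotive language” concrete, here is a toy sketch of how one could flag posts whose wording leans heavily on emotionally charged words. This is not EUNOMIA’s actual sentiment pipeline; the word list, threshold, and function names are invented purely for illustration:

```python
# Toy illustration only: flag text whose wording leans heavily on
# emotionally charged words. The lexicon and threshold are invented
# for this example, not taken from EUNOMIA.
EMOTIVE_WORDS = {
    "outrage", "shocking", "disgusting", "terrifying", "furious",
    "scandal", "disaster", "unbelievable", "horrifying", "betrayal",
}

def emotive_score(text: str) -> float:
    """Fraction of words in `text` that appear in the emotive lexicon."""
    words = [w.strip(".,!?;:\"'").lower() for w in text.split()]
    if not words:
        return 0.0
    hits = sum(1 for w in words if w in EMOTIVE_WORDS)
    return hits / len(words)

def worth_a_second_look(text: str, threshold: float = 0.15) -> bool:
    """Suggest pausing before sharing when emotive density is high."""
    return emotive_score(text) >= threshold

print(worth_a_second_look("Shocking scandal! This disgusting betrayal is unbelievable!"))  # True
print(worth_a_second_look("The committee published its annual report today."))  # False
```

A real system would of course use far richer sentiment models than a hand-built lexicon; the point is simply that emotive density in wording is something software can surface as one indicator among several, never as proof of mal-intent on its own.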


Weeks, B. E. (2015). Emotions, partisanship, and misperceptions: How anger and anxiety moderate the effect of partisan bias on susceptibility to political misinformation. Journal of Communication, 65(4), 699-719.

Jiang, S. and Wilson, C. (2018). Linguistic signals under misinformation and fact-checking: Evidence from user comments on social media. Proceedings of the ACM on Human-Computer Interaction, 2(CSCW), 1-23.

#Infohygiene: Be wary of popular posts

June 18th, 2020 by

Misinformation travels faster than reliable information. This has been shown time and time again. For example, in their 2018 work published in Science, M.I.T. researchers investigated around 126,000 stories tweeted by around 3 million people and classified the news as true or false using information from six independent fact-checking organisations. They showed that “falsehood diffused significantly faster than the truth in all categories of information”. Specifically, “it took the truth about six times as long as falsehood to reach 1500 people” and “20 times as long as falsehood to reach a cascade depth of 10”.

In contrast, posts from individuals or organisations that are experts in a topic are not necessarily popular on social media. For example, when researchers analysed the content and source of the most popular tweets about a case of diphtheria in Spain, none of the popular tweets had been posted by healthcare organisations. They were mainly opinions from non-experts.

This by no means indicates that a viral post is misinformation, but it certainly is a reason to think twice before reposting.

Vosoughi, S., Roy, D. and Aral, S. (2018). The spread of true and false news online. Science, 359(6380), 1146-1151.
Porat, T., Garaizar, P., Ferrero, M., Jones, H., Ashworth, M. and Vadillo, M. A. (2019). Content and source analysis of popular tweets following a recent case of diphtheria in Spain. European Journal of Public Health, 29(1), 117-122.

#Infohygiene: Be cautious of information forwarded to you through your network

June 13th, 2020 by

Asking “was the information forwarded to you?” is a common recommendation for protecting oneself against misinformation. The rationale is that one needs to question one’s trust network; more specifically, to refrain from letting one’s guard down just because a piece of news came from a friend. While friends who forward news may be generally trustworthy and have no ill intent, this does not mean that they have not themselves been deceived by information that is mal-intentioned and biased.

Kang and Sundar have explained that when reading online news, the closest source is often one of our friends. Because we tend to trust our friends, our cognitive filters weaken, making a social media feed fertile ground for fake news to sneak into our consciousness. Their experiment with 146 participants showed that people are less sceptical of information they encounter on platforms they have personalised through friend requests and “liked” pages, and do not question the credibility of the source of news when they think of their friends as the source.

To this, also add the study by Del Vicario et al. comparing the spreading dynamics of unverified rumours and scientific news on Facebook, which observed that, most of the time, unverified rumours were taken up by friends with a similar profile, i.e. belonging to the same “echo chamber”.

Kang, H. and Sundar, S. S. (2016). When Self Is the Source: Effects of Media Customization on Message Processing. Media Psychology, 19(4), 561-588.
Del Vicario, M., Bessi, A., Zollo, F., Petroni, F., Scala, A., Caldarelli, G., Stanley, H. E. and Quattrociocchi, W. (2016). The spreading of misinformation online. Proceedings of the National Academy of Sciences, 113(3), 554-559.