#Infohygiene: Be wary of language that makes you feel emotional. It is designed to go viral, not to inform

June 20th, 2020

For misinformation to have an impact, it needs to go viral. So, it is not surprising that misinformation shares a lot with clickbait and often aims at nothing more than just that: baiting you to click.

Several studies have shown that emotionally arousing stories attract more audience attention and exposure. There is no doubt that emotionally evocative content is more ‘viral’ than neutral content: the more anger or anxiety it evokes, the faster and more broadly it spreads. There is also a lot of evidence that emotions affect memory. Emotional memories are vivid and lasting but not necessarily accurate, and under some conditions emotion even increases people’s susceptibility to false memories.

More directly on misinformation, a study by Brian Weeks of the University of Michigan demonstrated that anger encourages partisan, motivated evaluation of uncorrected misinformation, resulting in beliefs consistent with the supported political party, while anxiety at times promotes initial beliefs based less on partisanship and more on the information environment. Another study, by Northeastern University researchers analysing 5,303 posts with 2,614,374 user comments from popular social media platforms, found more misinformation-awareness signals and more extensive emoji and swear word usage in the comments on false posts. Misinformation often uses inflammatory and sensational language to manipulate people’s emotions.

So, what can one do with this knowledge? A good approach is to take a moment to think whenever you encounter highly emotive language. An even better way may be to use EUNOMIA’s information cascade functionality, which visualises the sentiment expressed by all the posts that contain the same information. Highly negative language is not by itself a sign of malicious intent, but it can be one indicator when combined with others.
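
EUNOMIA’s actual implementation is not described here, but a minimal sketch can illustrate the idea of scoring sentiment across a cascade of posts. The posts below are invented, and VADER (from NLTK) is just one off-the-shelf sentiment analyser:

```python
# Illustrative sketch only: score the sentiment of each post carrying the
# same claim and summarise the cascade. Not EUNOMIA's implementation; the
# example posts are invented. Requires: nltk.download('vader_lexicon')
from nltk.sentiment.vader import SentimentIntensityAnalyzer

cascade = [  # hypothetical posts that all carry the same information
    "BREAKING: they LIED to us again!!! Absolutely disgusting.",
    "Worth a careful read before you share - the numbers don't add up.",
    "A new report on the topic was released today.",
]

analyser = SentimentIntensityAnalyzer()
scores = [analyser.polarity_scores(post)["compound"] for post in cascade]

# Compound scores range from -1 (very negative) to +1 (very positive).
# A cascade skewed heavily negative is one indicator, to be weighed
# alongside others, that the language may be engineered to provoke.
for post, score in zip(cascade, scores):
    print(f"{score:+.2f}  {post}")
print(f"mean sentiment: {sum(scores) / len(scores):+.2f}")
```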

References

Weeks, B. E. (2015). Emotions, partisanship, and misperceptions: How anger and anxiety moderate the effect of partisan bias on susceptibility to political misinformation. Journal of Communication, 65(4), 699-719.

Jiang, S. and Wilson, C. (2018). Linguistic signals under misinformation and fact-checking: Evidence from user comments on social media. Proceedings of the ACM on Human-Computer Interaction, 2(CSCW), 1-23.

#Infohygiene: Be wary of popular posts

June 18th, 2020

Misinformation travels faster than reliable information. This has been shown time and time again. For example, in their work published in Science in 2018, M.I.T. researchers investigated around 126,000 stories tweeted by around 3 million people and classified news as true or false using information from six independent fact-checking organisations. They showed that falsehood diffused significantly faster than the truth in all categories of information. Specifically, “it took the truth about six times as long as falsehood to reach 1500 people” and “20 times as long as falsehood to reach a cascade depth of 10”.
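
Cascade depth, in the study’s terms, is the number of reshare hops from the original post. A minimal sketch, using an invented repost tree, shows how it can be computed:

```python
# Minimal sketch: cascade depth as the longest chain of reshares from the
# original post. The repost tree below is invented for illustration.
cascade = {              # parent post -> posts that reshared it
    "original": ["a", "b"],
    "a": ["c"],
    "c": ["d"],
}

def depth(post: str) -> int:
    """Number of reshare hops on the longest chain below this post."""
    children = cascade.get(post, [])
    if not children:
        return 0
    return 1 + max(depth(child) for child in children)

print(depth("original"))  # 3: original -> a -> c -> d
```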

In contrast, posts from individuals or organisations with expertise in a topic are not necessarily popular on social media. For example, when researchers analysed the content and source of the most popular tweets about a case of diphtheria in Spain, none of the popular tweets had been posted by healthcare organisations. They were mainly opinions from non-experts.

None of this means that a post is misinformation just because it is viral, but virality is certainly a reason to think twice before reposting.

References
Vosoughi, S., Roy, D. and Aral, S. (2018). The spread of true and false news online. Science, 359(6380), 1146-1151.
Porat, T., Garaizar, P., Ferrero, M., Jones, H., Ashworth, M. and Vadillo, M. A. (2019). Content and source analysis of popular tweets following a recent case of diphtheria in Spain. European Journal of Public Health, 29(1), 117-122.

#Infohygiene: Be cautious of information forwarded to you through your network

June 13th, 2020

“Was the information forwarded to you?” is a common question to ask when protecting oneself against misinformation. The rationale is that one needs to question one’s trust network; more specifically, to refrain from letting one’s guard down just because a piece of news came from a friend. Friends who forward news may be generally trustworthy and have no ill intent, but this does not mean that they have not themselves been deceived by information that is malicious and biased.

Kang and Sundar have explained that when reading online news, the closest source is often one of our friends. Because we tend to trust our friends, our cognitive filters weaken, making a social media feed fertile ground for fake news to sneak into our consciousness. Their experiment with 146 participants showed that people are less sceptical of information they encounter on platforms they have personalised through friend requests and “liked” pages, and do not question the credibility of a news source when they think of their friends as the source.

To this, add the study by Del Vicario et al. comparing the dynamics of the spreading of unverified rumours and scientific news on Facebook, which observed that, most of the time, unverified rumours were taken up by friends with a similar profile, i.e. belonging to the same “echo chamber”.

References
Kang, H. and Sundar, S. S. (2016). When Self Is the Source: Effects of Media Customization on Message Processing. Media Psychology, 19(4), 561-588.
Del Vicario, M., Bessi, A., Zollo, F., Petroni, F., Scala, A., Caldarelli, G., Stanley, H. E. and Quattrociocchi, W. (2016). The spreading of misinformation online. Proceedings of the National Academy of Sciences, 113(3), 554-559.

New paper accepted: “A prototype deep learning paraphrase identification service for discovering information cascades in social networks”

June 12th, 2020

Our paper will be presented at the IEEE International Conference on Multimedia and Expo (ICME):

Kasnesis, P., Heartfield, R., Toumanidis, L., Liang, X., Loukas, G. and Patrikakis, C.Z., 2020. A prototype deep learning paraphrase identification service for discovering information cascades in social networks. IEEE ICME, London, 6-10 July 2020.

Its abstract: “Identifying the provenance of information posted on social media and how this information may have changed over time can be very helpful in assessing its trustworthiness. Here, we introduce a novel mechanism for discovering “post-based” information cascades, including the earliest relevant post and how its information has evolved over subsequent posts. Our prototype leverages multiple innovations in the combination of dynamic data sub-sampling and multiple natural language processing and analysis techniques, benefiting from deep learning architectures. We evaluate its performance on EMTD, a dataset that we have generated from our private experimental instance of the decentralised social network Mastodon, as well as the benchmark Microsoft Research Paraphrase Corpus, reporting no errors in sub-sampling based on clustering, and an average accuracy of 92% and F1 score of 93% for paraphrase identification.”
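
The paper’s own architecture is not reproduced here, but its core step, deciding whether two posts carry the same information, can be illustrated with a minimal sketch using the off-the-shelf sentence-transformers library; the model choice, the 0.8 threshold and the example posts are all assumptions for demonstration:

```python
# Illustrative sketch of paraphrase identification via sentence embeddings,
# not the model described in the paper. The threshold and example posts
# are arbitrary choices for demonstration.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

post_a = "The city council approved the new cycling lanes yesterday."
post_b = "Yesterday the council gave the green light to new bike lanes."

emb_a, emb_b = model.encode([post_a, post_b], convert_to_tensor=True)
similarity = util.cos_sim(emb_a, emb_b).item()

# Posts judged to be paraphrases can be chained into a "post-based"
# information cascade, from the earliest relevant post onwards.
print(f"cosine similarity: {similarity:.2f}")
print("same information" if similarity > 0.8 else "different information")
```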

Workshop: Fighting fake news. Ways to enhance accountability, reliability and accuracy of Social Media information

March 31st, 2020

Call for Papers

During the last decade, there has been a revolution in how people interconnect and socialise. From the early days of Facebook to today’s proliferation of social media of all types, people have embraced this new form of socialisation. Social networks, media and platforms are becoming the primary way in which our societies communicate, exchange information, conduct business, co-create and learn. However, their extreme growth, combined with the lack of control over the digital content being published and shared, has led to the veracity of their information being heavily disputed.

Blatant cases of fake news are becoming countless, and the motives for spreading them are often financial or political. In a recent letter, Sir Tim Berners-Lee, the inventor of the World Wide Web, specifically points out the alarming situation where most people today find news and information on the web through just a handful of social media sites and search engines. These sites use fake news as a tool to artificially grow their traffic and take advantage of increased advertising revenues. They choose what to show based on algorithms that learn from our personal data, which they are constantly harvesting. The net result is that these sites show content they think we will click on, meaning that misinformation or fake news that is surprising, shocking, or designed to appeal to our biases spreads quickly. In its Freedom on the Net 2017 report, Freedom House reached the same conclusion. The report studied 65 countries between June 2016 and May 2017 and found that online manipulation and disinformation tactics played an important role in elections in at least 18 of them, including the United States.

Establishing synergies with innovative information and communication technologies (such as semantic analysis tools, blockchains, emotional descriptors and machine learning) can enhance the accountability, reliability and accuracy of the information being shared on social media, leading to a more trustworthy form of sociality. Key to this is safeguarding the distributed and open nature of social media, strengthening pluralism and participation, and mitigating censorship. At the same time, what is and what is not fake news is rarely straightforward, and users cannot leave such decisions entirely to third parties such as fact checkers or computer algorithms. A more mature approach, in which users themselves evaluate the information they read online before sharing it, can dramatically blunt the main advantage of fake news: its speed of spreading.

In the context of the above, this workshop invites papers in the areas of:

  • innovative ICT technologies to fight the spreading of fake news
  • digital content verification
  • distributed trust and reputation establishment in decentralised environments
  • the role of machine learning both in causing and in tackling disinformation online
  • blockchain technologies to support accountability and transparency
  • human factors in social media disinformation
  • involvement of media specialists and user communities in the content verification process
  • ethics in social media disinformation
  • information veracity in the web and social media ecosystems

The workshop is co-organized by the H2020 EUNOMIA project and the H2020 SocialTruth project.

Workshop organizers:

Dr. Konstantinos Demestichas, ICCS/NTUA, Athens, Greece, email: cdemest@cn.ntua.gr

Prof. George Loukas, University of Greenwich, UK, email: g.loukas@gre.ac.uk  

Prof. Charalampos Z. Patrikakis, University of West Attica, email: bpatr@uniwa.gr

Dr. Evgenia Adamopoulou, Hellenic Open University, Patras, Greece, email: evgenia.adamopoulou@ac.eap.gr

Important Dates

Full paper submission deadline: 18 August 2019

Notification of decision: 14 September 2019

Camera-ready deadline: 22 September 2019

Instructions for Authors

Submitted papers must not substantially overlap with papers that have been published or that have been submitted to a journal or a conference with proceedings. Papers should be at most 15 pages long, including the bibliography and well-marked appendices, and should follow the LNCS style (https://www.springer.com/gp/computer-science/lncs/conference-proceedings-guidelines). Submissions are to be made via the submission web site at https://easychair.org/conferences/?conf=edemocracy2019. Only PDF files will be accepted. Submissions not meeting these guidelines risk being rejected without consideration of their merits. The deadline for submitting papers is 18 August 2019 (11:59 p.m. American Samoa time).

The authors of accepted papers must guarantee that their papers will be presented at the conference.