Implicit trustworthiness assessment based on users’ reactions to claims

September 15th, 2020

Online textual information has increased tremendously over the years, creating a growing demand for information verification. As a result, Natural Language Processing (NLP) research on tasks such as stance detection (Derczynski et al., 2017) and fact verification (Thorne et al., 2018) is gaining momentum as an attempt to automatically identify misinformation on social networks (e.g., Mastodon and Twitter).

To that end, a stance classification model was trained within the scope of EUNOMIA, which involves identifying the attitude of consenting EUNOMIA Mastodon users towards the truthfulness of the rumour they are discussing. In particular, transfer learning was applied to fine-tune the RoBERTa (Robustly optimized BERT) model (Liu et al., 2019) using the publicly available SemEval-2019 Task 7 Subtask A dataset (Gorrell et al., 2019). This dataset contains Twitter threads, and each tweet (e.g., “Hostage-taker in supermarket siege killed, reports say. #ParisAttacks –LINK”) in the tree-structured thread is categorised into one of the following four categories:

  • Support: the author of the response supports the veracity of the rumour they are responding to (e.g., I’ve heard that also).
  • Deny: the author of the response denies the veracity of the rumour they are responding to (e.g., That’s a lie).
  • Query: the author of the response asks for additional evidence in relation to the veracity of the rumour they are responding to (e.g., Really?).
  • Comment: the author of the response makes their own comment without a clear contribution to assessing the veracity of the rumour they are responding to (e.g., True tragedy).

Our model achieved 85.1% accuracy and a 62.75% macro F1-score. Because the dataset includes posts written in informal, non-standard language (e.g., “OMG that aint right”), the obtained scores are not spectacular; even so, our approach surpasses the state-of-the-art results for this dataset (81.79% accuracy and 61.87% F1-score; Yang et al., 2019).
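For reference, the two reported metrics can be reproduced as follows. This is a minimal sketch in plain Python (the label names and toy predictions below are illustrative, not taken from the actual evaluation); macro F1 averages the per-class F1 scores without weighting, so a rare class such as Deny counts as much as the dominant Comment class:

```python
LABELS = ["support", "deny", "query", "comment"]

def accuracy(y_true, y_pred):
    # Fraction of examples whose predicted stance matches the gold stance.
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def macro_f1(y_true, y_pred, labels=LABELS):
    # Compute F1 per class, then average unweighted across classes,
    # so minority classes contribute as much as majority ones.
    f1_scores = []
    for label in labels:
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == p == label)
        fp = sum(1 for t, p in zip(y_true, y_pred) if p == label and t != label)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == label and p != label)
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f1 = (2 * precision * recall / (precision + recall)
              if precision + recall else 0.0)
        f1_scores.append(f1)
    return sum(f1_scores) / len(f1_scores)
```

This unweighted averaging explains why the macro F1 (62.75%) is much lower than the accuracy (85.1%): Comment dominates the dataset, so a model can be accurate overall while struggling on the rarer Support, Deny, and Query classes.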

The service has been containerized and will soon be integrated with the rest of the EUNOMIA platform as another useful trustworthiness indicator for users.

References

Derczynski, L., Bontcheva, K., Liakata, M., Procter, R., Hoi, G.W., & Zubiaga, A. (2017). SemEval-2017 Task 8: RumourEval: Determining rumour veracity and support for rumours. SemEval@ACL.

Gorrell, G., Bontcheva, K., Derczynski, L., Kochkina, E., Liakata, M., & Zubiaga, A. (2019). SemEval-2019 Task 7: RumourEval: Determining rumour veracity and support for rumours. In Proceedings of SemEval. ACL.

Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., & Stoyanov, V. (2019). RoBERTa: A Robustly Optimized BERT Pretraining Approach. ArXiv, abs/1907.11692.

Thorne, J., Vlachos, A., Christodoulopoulos, C., & Mittal, A. (2018). FEVER: a large-scale dataset for Fact Extraction and VERification. ArXiv, abs/1803.05355.

Yang, R., Xie, W., Liu, C., & Yu, D. (2019). BLCU_NLP at SemEval-2019 Task 7: An Inference Chain-based GPT Model for Rumour Evaluation. SemEval@NAACL-HLT.

Journalists & Social Media – An impossible Love-Affair? The traditional media journalist’s view

September 11th, 2020

Journalists are often accused of producing and spreading ‘fake news’, usually by those in charge when journalists address topics that are “disagreeable” to them.

In fact, all journalists, and ‘traditional journalists’ in particular, are trying to fight misinformation, especially those working for ‘quality media’ and public-service media institutions. Traditional journalists are obliged by their professional code of ethics and their institutions to do proper research. They follow high ethical standards and rules to avoid introducing and spreading misinformation, as well as information that can be misunderstood or misinterpreted.


Proper research takes a lot of time: checking facts and figures, and re-checking every bit of information, is a must in journalism. However, thorough research often impedes the fast publication of breaking news, which can be a big competitive disadvantage. Digital sources can therefore seem ideal for quick and successful journalistic research. Can social media be part of this “digital toolkit” for traditional journalists?


Although social media are seen as a medium that surfaces new trends and topics very quickly, the quality of the data and the amount of biased and misleading information can run counter to the requirements of proper journalism. Traditional journalists might therefore use social media platforms to pick up on certain trends, but cannot rely on them as information sources: information on social media is not always verified, and its source may be unknown. Furthermore, social media promote the creation of “filter bubbles”, meaning that users tend to surround themselves with similar information. At the same time, social media platforms are an important ‘marketplace’ for promoting traditional journalism ‘products’, as they reach a new generation of readers beyond journalists’ usual ‘circles’, e.g., across country borders. All in all, the increasing consumption of news on social media proves problematic for the traditional journalist.


EUNOMIA is the silver lining: it promotes the critical media literacy skills of social media users and readers, while for traditional journalism it has the potential to open a new era of trustworthy information exchange. For the very first time, users themselves are enabled to assess and vote on the trustworthiness of social media information based on user-driven indicators. For assessing trustworthiness, the source of information is considered a key indicator. EUNOMIA’s toolkit will therefore encourage users to provide sources, thus promoting the work of traditional media institutions, which are still seen as the most reliable source of important and proper information (Eurobarometer, 2017). Furthermore, through EUNOMIA, traditional journalists will be able to track each piece of information found on social media down to its very source, supported by the community in evaluating the quality and trustworthiness of that information.


It’s not me, it’s you: The Third Person Effect

July 16th, 2020

A well-documented phenomenon in communication science is the fact that individuals seem to think that others are more susceptible to media effects than themselves. We tend to assume that commercials, misinformation, or other manipulation by and via the media affect ‘the others’ more than ourselves. This is called the Third Person Effect (Davison, 1983).

Because of this, individuals also tend to overestimate the influence of the media on the attitudes and behaviour of others, and underestimate the influence on themselves. Both of these facts can be harmful: when individuals underestimate the influence of misinformation on their own attitudes and behaviour, they may become more susceptible to it. And when they overestimate the effects on others, this might lead them to take action based on their expectation of other people’s reaction to it.

Expertise also plays a relevant role in this process, especially an individual’s own (perceived) expertise compared to others’:

“In a sense, we are all experts on those subjects that matter to us, in that we have information not available to other people. This information may not be of factual or technical nature; it may have to do with our own experiences, likes, and dislikes. Other people, we reason, do not know what we know. Therefore, they are more likely to be influenced by the media.” (Davison, 1983, p. 9)

Other relevant factors are in-group and out-group effects. Researchers have observed differences between in-group and out-group members (Jang & Kim, 2018), depending on the idea of ‘the others’, e.g. as an individual, an average person, ‘people just like me’, etc. (Conners, 2005). Jang & Kim (2018) also reported a positive relationship between media literacy and third-person perception, stating that media literacy education could minimize the potential harm of false information. The third-person effect is also related to self-enhancing tendencies, such as the belief that one has a greater ability to learn from an experience than others, which seem to be cross-cultural (Johansson, 2005).

In the context of misinformation, this is an important issue: if people do not think they are influenced by false information (as opposed to others), “they may develop the false impression that information shared among them, regardless of their actual accuracy, is perceived true.” (Jang & Kim, 2018, p. 299) Previous results from EUNOMIA co-design research showed that misinformation was claimed to be dangerous because it might have great power over the population and influence their acts and/or decisions.

Sources

Conners, J. L. (2005). Understanding the Third-Person Effect. Centre for the Study of Communication and Culture (CSCC).

Davison, W. P. (1983). The Third-Person Effect in Communication. The Public Opinion Quarterly, 47(1), 1-15.

Jang, S., & Kim, J. (2018). Third person effects of fake news: Fake news regulation and media literacy interventions. Computers in Human Behavior, 80, 295-302.

Johansson, B. (2005). The Third-Person Effect. Only a Media Perception? Nordicom Review, 26(1), 81-94.

Special Issue on Misinformation; Call for Papers

June 29th, 2020

EUNOMIA’s Ioannis Katakis, along with Karl Aberer (EPFL), Quoc Viet Hung Nguyen (Griffith University) and Hongzhi Yin (The University of Queensland), is editing a special issue on misinformation on the web that will be published in the journal Information Systems, one of the top-tier journals in Databases and Data-Driven Applications. This special issue seeks high-quality and original papers that advance the concepts, methods, and theories of misinformation detection, as well as address the mechanisms, strategies and techniques for misinformation interventions. Topics include:

  • Fake news, social bots, misinformation, and disinformation on social data
  • Misinformation, opinion dynamics and polarization in social data
  • Online misbehavior (scams, deception, and click-bait) and its relation to misinformation
  • Information/Misinformation diffusion
  • Credibility and reputation of news sources, social data, and crowdsourced data

and many more.


The timeline of the special issue is the following:

Submission: 1st August 2020

First Round Notification: 1st October 2020

First Round Revisions: 1st December 2020

Second Round Notification: 1st February 2021

Final Submission: 1st March 2021

Publication: second quarter, 2021

Please find more information or submit your paper here:
https://www.journals.elsevier.com/information-systems/call-for-papers/special-issue-on-misinformation-on-the-web

Challenging Misinformation: Exploring Limits and Approaches

June 23rd, 2020

We invite researchers and practitioners interested in aligning social, computational and technological approaches to mitigating the challenge of misinformation in social media to join us at the SocInfo 2020 Workshop!

The last weeks and months of the COVID-19 pandemic have shown quite drastically how crucial access to accurate information is. So many rumours, hoaxes, fake news and other forms of dis- and misinformation have been spread that the term ‘infodemic’ was coined. To counteract this wicked problem, tools and online services are continuously being developed to support different stakeholders. However, limiting the harm caused by misinformation requires merging multiple perspectives, including an understanding of human behaviour in judging and promoting false information, as well as technical solutions to detect and stop its propagation. To date, all solutions are notably limited: misinformation is a multi-layered problem, and no single, comprehensive solution is capable of stopping it. Existing approaches fall short for different reasons: the way end-users are (or are not) engaged, limited data sources, the subjectivity of the judgment, and so on. Furthermore, purely technical solutions that disregard the social structures that lead to the spread of misinformation, or the different ways societal groups are affected, are not only limited; they can also be harmful by obfuscating the problem. Therefore, a comprehensive and interdisciplinary approach is needed.

So, how can we battle misinformation and flatten the curve of the infodemic? Members of the EU-funded projects Co-inform and EUNOMIA have joined forces to organise a hands-on workshop at the SocInfo conference on 6 October 2020. The aim of the workshop is to unpack the state of the art in social-science and technical solutions, challenging participants to critically reflect on existing limitations and then co-create a future that integrates both perspectives. The workshop intends to bring together data scientists, media specialists, computer scientists, social scientists, designers, and journalists. Participants are invited to discuss challenges and obstacles related to online misinformation from human and technical perspectives; to challenge existing approaches and identify their limitations in terms of technology and targeted users; and to co-create future scenarios building on existing solutions.

Find out more

About the workshop, topics of interest, the submission process, and the organisers: http://events.kmi.open.ac.uk/misinformation/

About SocInfo 2020: https://kdd.isti.cnr.it/socinfo2020/

About Co-inform: https://coinform.eu/

Eat, Sleep, Trust, Repeat or The Illusory Truth Effect

June 23rd, 2020

Have you ever noticed how a vaguely familiar statement, something you remember hearing, makes you think “there has to be something to it” or “this has to be true, I have heard it before”? This is what social psychology calls the “illusory truth effect”, which means that a person attributes higher credibility and trustworthiness to information to which they have been exposed before. In a nutshell, repetition makes information more credible and trustworthy. Frequency, it seems, serves as a “criterion of certitude”.

What is really interesting is that this effect has been demonstrated not only for plausible information but even for statements initially identified as false. And the effect is immediate: reading a false statement just once is enough to increase later perceptions of its accuracy. An experiment carried out by researchers in the US showed the truth effect even for highly implausible, entirely fabricated news stories (Pennycook, Cannon, & Rand, 2018). The researchers were further able to show that the effect holds even if participants forgot having seen the information previously. Even when participants disagreed with the information, repetition made it more plausible.

But why does this effect exist? Research offers several answers. First, familiarity with information leads to faster processing of that information. Second, recognition of information, and the coherence of statements with prior information in a person’s memory, affect our judgement. Third, there is a learned association between faster processing of information and its truthfulness.

When it comes to misinformation and fake news, the importance of this effect cannot be stressed enough. Even warnings by fact-checkers or experts cannot counter it; the only way of protecting yourself and others from misinformation is to avoid sharing any piece of dubious information. So, one of the most important rules we identified as part of our guidelines for information hygiene is simple: if in doubt, don’t share.

References

Bacon, F. (1979). Credibility of repeated statements: Memory for trivia. Journal of Experimental Psychology: Human Learning and Memory, 5(3), 241–252.

Hasher, L., Goldstein, D., & Toppino, T. (1977). Frequency and the conference of referential validity. Journal of Verbal Learning and Verbal Behavior, 16(1), 107-112.

Pennycook, G., & Rand, D. G. (2018). Who falls for fake news? The roles of bullshit receptivity, overclaiming, familiarity, and analytic thinking. Journal of Personality.

Pennycook, G., Cannon, T. D., & Rand, D. G. (2018). Prior exposure increases perceived accuracy of fake news. Journal of Experimental Psychology: General, 147(12), 1865-1880. https://doi.org/10.1037/xge0000465

Unkelbach, C. (2007). Reversing the Truth Effect: Learning the Interpretation of Processing Fluency in Judgments of Truth. Journal of Experimental Psychology: Learning, Memory, and Cognition, 33(1), 219-230.

Unkelbach, C., Koch, A., Silva, R. R., & Garcia-Marques, T. (2019). Truth by Repetition: Explanations and Implications. Current Directions in Psychological Science, 28(3), 247-253.