Implicit trustworthiness assessment based on users’ reactions to claims

September 15th, 2020

The amount of online textual information has increased tremendously over the years, creating a growing demand for information verification. As a result, Natural Language Processing (NLP) research on tasks such as stance detection (Derczynski et al., 2017) and fact verification (Thorne et al., 2018) is gaining momentum as an attempt to automatically identify misinformation on social networks (e.g., Mastodon and Twitter).

To that end, a stance classification model was trained within the scope of EUNOMIA, which involves identifying the attitude of consenting EUNOMIA Mastodon users towards the truthfulness of the rumour they are discussing. In particular, transfer learning was applied to fine-tune the RoBERTa (Robustly optimized BERT) model (Liu et al., 2019) on the publicly available SemEval 2019 Subtask 7A dataset (Gorrell et al., 2019). This dataset contains Twitter threads, and each tweet (e.g., Hostage-taker in supermarket siege killed, reports say. #ParisAttacks –LINK) in the tree-structured thread is categorised into one of the following four categories:

  • Support: the author of the response supports the veracity of the rumour they are responding to (e.g., I’ve heard that also).
  • Deny: the author of the response denies the veracity of the rumour they are responding to (e.g., That’s a lie).
  • Query: the author of the response asks for additional evidence in relation to the veracity of the rumour they are responding to (e.g., Really?).
  • Comment: the author of the response makes their own comment without a clear contribution to assessing the veracity of the rumour they are responding to (e.g., True tragedy).

Our model achieved 85.1% accuracy and a 62.75% macro F1-score. Because this dataset includes posts written in informal and irregular language (e.g., “OMG that aint right”), the obtained scores are not spectacular; even so, our approach surpasses the state-of-the-art results for this dataset (81.79% accuracy and 61.87% F1-score) (Yang et al., 2019).
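The gap between accuracy and macro F1-score is typical of class-imbalanced datasets like this one, where comments dominate: accuracy rewards predicting the majority class, while macro F1 averages per-class scores so that rare classes such as Deny count equally. The sketch below illustrates this with toy labels (not our model's actual predictions); the label names follow the four SemEval categories.

```python
LABELS = ["support", "deny", "query", "comment"]

def accuracy(y_true, y_pred):
    """Fraction of predictions that exactly match the gold label."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def macro_f1(y_true, y_pred, labels=LABELS):
    """Unweighted mean of per-class F1 scores (macro averaging)."""
    f1_scores = []
    for label in labels:
        tp = sum(t == label and p == label for t, p in zip(y_true, y_pred))
        fp = sum(t != label and p == label for t, p in zip(y_true, y_pred))
        fn = sum(t == label and p != label for t, p in zip(y_true, y_pred))
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
        f1_scores.append(f1)
    return sum(f1_scores) / len(f1_scores)

# Toy illustration: a lazy classifier that always predicts "comment"
# scores 50% accuracy here but a much lower macro F1, because it
# never identifies support, deny, or query posts.
gold = ["comment", "comment", "comment", "support", "deny", "query"]
pred = ["comment"] * 6
```

Running `accuracy(gold, pred)` gives 0.5 while `macro_f1(gold, pred)` gives roughly 0.17, which is why both metrics are reported for this task.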

The service has been containerized and will soon be integrated with the rest of the EUNOMIA platform as another useful trustworthiness indicator for users.


Derczynski, L., Bontcheva, K., Liakata, M., Procter, R., Hoi, G.W., & Zubiaga, A. (2017). SemEval-2017 Task 8: RumourEval: Determining rumour veracity and support for rumours. SemEval@ACL.

Gorrell, G., Bontcheva, K., Derczynski, L., Kochkina, E., Liakata, M., & Zubiaga, A. (2019). SemEval-2019 Task 7: RumourEval: Determining rumour veracity and support for rumours. In Proceedings of SemEval. ACL.

Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., & Stoyanov, V. (2019). RoBERTa: A Robustly Optimized BERT Pretraining Approach. ArXiv, abs/1907.11692.

Thorne, J., Vlachos, A., Christodoulopoulos, C., & Mittal, A. (2018). FEVER: a large-scale dataset for Fact Extraction and VERification. ArXiv, abs/1803.05355.

Yang, R., Xie, W., Liu, C., & Yu, D. (2019). BLCU_NLP at SemEval-2019 Task 7: An Inference Chain-based GPT Model for Rumour Evaluation. SemEval@NAACL-HLT.

Me, myself and I: Filter Bubbles & Echo Chambers

September 11th, 2020

‘Filter bubbles’ and ‘echo chambers’ are popular terms for the phenomenon social scientists call ‘selective exposure’. The theory of selective exposure (Klapper, 1957) states, in brief, that people tend to select information that accords with their existing views and, consequently, to avoid information that contradicts their beliefs and values.

Various digital tools, algorithms and behaviours rely on the collection of personal data to filter and/or rank items in the daily information stream, creating ‘filter bubbles’ and ‘echo chambers’. As a consequence, they result in higher personalisation but also decreasing diversity of information. Diversity of information may refer to either source or content. Source diversity means the inclusion of a multitude of information sources by a news outlet, as well as the variety of news outlets consumed by a recipient. Content diversity means the range of topics and perspectives on a given topic (Haim, Graefe, & Brosius, 2018).

Despite describing similar phenomena, ‘filter bubbles’ and ‘echo chambers’ are not the same concept. ‘Echo chambers’, on the one hand, describe the phenomenon of being surrounded by like-minded contacts. This might lead to an amplification or reinforcement of pre-existing beliefs. ‘Filter bubbles’, on the other hand, refer to the algorithmic filtering of information to match a user’s needs (Haim, Graefe, & Brosius, 2018). However, there is no consistency in the use of the two terms; for example, Lewandowsky et al. (2017) describe ‘echo chambers’ as the space where “most available information conforms to pre-existing attitudes and biases” (p. 359).

Studies have shown that people are more likely to share articles with which they agree (An, Quercia, & Crowcroft, 2013) and that social media expose the community to a narrower range of information sources, compared to a baseline of information-seeking activities. Research has also shown that the diversity of social media communication is significantly lower than that of interpersonal communication, both at an individual and a collective level (Nikolov, Oliveira, Flammini, & Menczer, 2015).

But why do people surround themselves with like-minded contacts, and why do they choose information that confirms what they already believe? There are different answers to this question. The theory of cognitive dissonance (Festinger, 1957) explains this phenomenon by arguing that individuals strive for consistency (or consonance) among their beliefs, attitudes, knowledge, etc. Inconsistencies cause psychological discomfort, which Festinger calls dissonance. Another answer is that surrounding oneself with familiar information helps to cope with, or even overcome, information overload (Pariser, 2011). A third answer refers to the social aspect of social media: because of its sharing mechanisms, discovering information becomes a social endeavour rather than an individual process.

In the context of social media, ‘filter bubbles’ and ‘echo chambers’ therefore allow users to avoid psychological discomfort and information overload and to engage in information seeking as a social endeavour. However, they pose serious risks by leading to self-reinforcement and reduced information diversity (Haim, Graefe, & Brosius, 2018). The tendency to surround oneself with like-minded opinions might also prevent engagement with other ideas. This can facilitate confirmation bias and polarisation (Nikolov, Oliveira, Flammini, & Menczer, 2015; Haim, Graefe, & Brosius, 2018).

But it’s not all bad news: ‘echo chambers’ seem to be concentrated mainly in political discourse (Nikolov, Oliveira, Flammini, & Menczer, 2015), whereas other topic areas are less affected. Furthermore, there are tools that encourage and enable users to seek information beyond their ‘bubble’. The EUNOMIA feature illustrating how information has been spread and changed by different users aims precisely to show how similar information is discussed in different ‘bubbles’.


An, J., Quercia, D., & Crowcroft, J. (2013). Fragmented Social Media: A Look into Selective Exposure to Political News. In WWW 2013 Companion, May 13–17, 2013, Rio de Janeiro, Brazil. ACM.

Festinger, L. (1957). A Theory of Cognitive Dissonance. Stanford, CA: Stanford University Press.

Haim, M., Graefe, A., & Brosius, H.-B. (2018). Burst of the Filter Bubble? Effects of personalization on the diversity of Google News. Digital Journalism, 6(3), 330-343.

Klapper, J. T. (1957). What We Know About the Effects of Mass Communication: The Brink of Hope. The Public Opinion Quarterly, 21(4), 453-474.

Lewandowsky, S., Ecker, U. K., & Cook, J. (2017). Beyond Misinformation: Understanding and Coping with the “Post-Truth” Era. Journal of Applied Research in Memory and Cognition, 6(4), 353–369.

Nikolov, D., Oliveira, D. F., Flammini, A., & Menczer, F. (2015). Measuring online social bubbles. PeerJ Computer Science, 1(38), 1-14.

Pariser, E. (2011). The Filter Bubble: How the New Personalized Web is Changing What We Read and How We Think. New York: Penguin.

Journalists & Social Media – An impossible Love-Affair? The traditional media journalist’s view

September 11th, 2020

Journalists are often accused of producing and spreading ‘fake news’, usually by those in charge when journalists address topics that are “disagreeable” to them.

In fact, all journalists, and ‘traditional journalists’ in particular, are trying to fight misinformation, especially those working for ‘quality media’ and public-service media institutions. Traditional journalists are obliged by their professional codes of ethics and their institutions to do proper research. They maintain high ethical standards and rules to avoid introducing and spreading misinformation, as well as information that can be misunderstood or misinterpreted.

Proper research takes a lot of time: checking facts and figures, and re-checking every bit of information, is a must in journalism. However, thorough research often impedes the fast publication of breaking news, which can be a big competitive disadvantage. Digital sources can therefore be seen as ideal for quick and successful journalistic research. Can social media be part of this “digital toolkit” for traditional journalists?

Although social media is seen as a medium that surfaces new trends and topics very quickly, the quality of its data and the level of biased and misguided information can fall short of the requirements of proper journalism. Traditional journalists might therefore use social media platforms to pick up on certain trends, but cannot rely on them as information sources. Information on social media is often unverified, and its source can be unknown. Furthermore, social media promote the creation of “filter bubbles”, meaning that users tend to surround themselves with similar information. At the same time, social media platforms are an important ‘marketplace’ for promoting traditional journalism ‘products’, as they reach a new generation of readers and extend beyond their usual ‘circles’, e.g., across country borders. All in all, the increasing consumption of news on social media proves problematic for the traditional journalist.

EUNOMIA is a silver lining, promoting the critical media literacy skills of social media users and readers, while for traditional journalism this solution has the potential to open a new era of trustworthy information exchange. For the very first time, users themselves are enabled to assess and vote on the trustworthiness of social media information based on user-driven indicators. For assessing trustworthiness, the source of information is considered a key indicator. EUNOMIA’s toolkit will therefore encourage users to provide sources, thus promoting the work of traditional media institutions, as they are still seen as the most reliable source of important and proper information (Eurobarometer, 2017). Furthermore, through EUNOMIA, traditional journalists will now be able to track each piece of information found on social media down to its very source, and they will be supported by the community in evaluating the quality and trustworthiness of a given piece of information.

EUNOMIA and the University of Nicosia Decentralized Chapters Community

September 8th, 2020

The University of Nicosia Decentralized Chapters is a global community of like-minded individuals who share a common passion for blockchain technology and cryptocurrencies. The main objective of the community is the dissemination of blockchain awareness and knowledge in several regions around the world. The aim is to address the general lack of knowledge of the potential benefits of blockchain for growth and innovation, as well as the shortage of skills in the area, through various activities offering a holistic perspective on the technology.

Decentralized Chapter activities include, but are not limited to:

  • Education/Training
  • Speaker series/free lectures
  • Workshops and seminars
  • Support for Startups
  • Volunteer Activities

In order to provide the Decentralized community with hands-on experience of real decentralized applications, its members were invited to join and test the Decentralized Community Social Network platform built as part of the H2020 project EUNOMIA for online discussion of blockchain technology and cryptocurrencies. Whether members want to ask questions and learn, or post news, or even rumours on which they want the community’s opinion, this is the right place. The intuitive Twitter-like interface of the federated social network Mastodon and the additional EUNOMIA toolkit features, such as the trust/don’t trust buttons, assist members of the Decentralized community in discussing and expressing their trust or mistrust of the latest information on blockchain and cryptocurrencies. EUNOMIA features and updates, such as information provenance, support their evaluation of online information, aiming to sustain the quality of the community’s interactions.

Members who have already joined the Decentralized Social Network Platform have access to:

  • The latest news on Decentralized Chapter’s activities
  • Early access to Decentralized Chapters’ videos and the chance to discuss with presenters
  • Videos and Links related to Blockchain and decentralized technology developments shared and posted by community members
  • The ability to ask technical questions on Blockchain and decentralized ledger technologies and get answers from community experts
  • Access to information on upcoming events, conferences, webinars, learning and educational material in a single place
  • A growing community space for Decentralized professionals and enthusiasts to share and learn about Blockchain technology
  • And last but not least – interaction with the technology itself!

Join the Decentralized community or lead your own Chapter at

Join the EUNOMIA Decentralised community instance on Mastodon at

For further information contact

Empowering the social media user to assess information trustworthiness: image similarity detection

September 7th, 2020

With the overarching objective of assisting users in determining the trustworthiness of information on social media through an intermediary-free approach, EUNOMIA employs a decentralised Mastodon-based architecture and implements AI technology to generate an information cascade of posts. The cascade facilitates the discovery and visualisation of the source of information, and of how information is shared and changed over time, providing users with provenance information when they are determining a post’s trustworthiness. The information cascade is generated based not only on the text content of a post, via paraphrase identification using natural language processing (NLP) techniques, but also on the image content of the post, via image verification using computer vision techniques.
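To make the paraphrase-identification step concrete: the core idea is scoring how similar two post texts are and linking them in the cascade when the score exceeds a threshold. The sketch below uses simple token-overlap (Jaccard) similarity purely for illustration; EUNOMIA's actual paraphrase identification relies on NLP models, and the tokeniser, threshold, and function names here are all illustrative assumptions.

```python
import re

def tokens(text):
    """Lowercase a post and split it into word-like tokens (illustrative tokeniser)."""
    return set(re.findall(r"[a-z0-9#@']+", text.lower()))

def jaccard_similarity(a, b):
    """Token-overlap similarity in [0, 1]: |intersection| / |union|."""
    ta, tb = tokens(a), tokens(b)
    if not ta and not tb:
        return 1.0
    return len(ta & tb) / len(ta | tb)

def is_candidate_paraphrase(a, b, threshold=0.5):
    """Link two posts in the cascade when their similarity clears a threshold
    (the 0.5 threshold is an arbitrary illustrative choice)."""
    return jaccard_similarity(a, b) >= threshold
```

For example, "Hostage-taker in supermarket siege killed" and "Reports say hostage-taker killed in supermarket siege" share six of eight distinct tokens, giving a similarity of 0.75, so a cascade builder of this kind would treat them as candidate restatements of the same claim.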

The image verification algorithm is implemented to determine whether a given pair of images is similar. Advances in the image verification field fall into two broad areas: image embedding and metric learning. In image embedding, a robust and discriminative descriptor is learnt to represent each image as a compact feature vector, or embedding. EUNOMIA employs state-of-the-art feature descriptors generated by an existing convolutional neural network (CNN), which learns features on its own. In metric learning, a distance metric over the CNN embeddings in an embedding space is used to effectively measure the similarity of images. Identical images obtain a 100% similarity score, similar images obtain a high score, and different images, as well as some adversarial images, obtain a lower score.
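Once each image is represented as a CNN embedding, comparing two images reduces to comparing two vectors. The sketch below shows one common choice, cosine similarity, rescaled to a percentage; this metric and the rescaling are illustrative assumptions, not necessarily the exact distance function EUNOMIA's metric-learning component uses, and the toy vectors stand in for real CNN embeddings.

```python
import math

def cosine_similarity(u, v):
    """Cosine of the angle between two embedding vectors, in [-1, 1]."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    if norm_u == 0 or norm_v == 0:
        return 0.0
    return dot / (norm_u * norm_v)

def similarity_percent(u, v):
    """Map cosine similarity from [-1, 1] to a [0, 100] percentage score
    (an illustrative rescaling so identical embeddings score 100%)."""
    return 50.0 * (cosine_similarity(u, v) + 1.0)

# Toy stand-ins for CNN embeddings of three images
original = [0.9, 0.1, 0.4]
near_copy = [0.88, 0.12, 0.41]   # e.g. a re-encoded or lightly cropped copy
unrelated = [-0.3, 0.7, -0.5]    # a visually different image
```

Here `similarity_percent(original, original)` returns 100.0, the near-copy scores close to 100%, and the unrelated embedding scores much lower, mirroring the behaviour described above.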

With the implementation of the image similarity functionality, EUNOMIA generates the information cascade by considering both the text and image content of a social media post. The EUNOMIA platform also has the potential to fetch similar-looking images given a reference image supplied by a EUNOMIA user.