EUNOMIA at SOCINFO2020: Challenging Misinformation: Exploring Limits and Approaches

December 22nd, 2020

The EUNOMIA project joined forces with the H2020 project Co-Inform to deliver the workshop “Challenging Misinformation: Exploring Limits and Approaches” at the Social Informatics Conference 2020 (SocInfo2020) on 6th October 2020.

Pinelopi Troullinou (Trilateral Research) and Diotima Bertel (SYNYO) from the EUNOMIA project invited researchers and practitioners to reflect on existing approaches and the limitations of current socio-technical solutions for tackling misinformation. The objective of the workshop was to bring together stakeholders from diverse backgrounds to develop collaborations and synergies towards the common goal of empowering social media users.

Four papers were presented at the workshop. Gautam Kishore Shahi from the University of Duisburg-Essen in Germany discussed the different conspiracy theories related to COVID-19 spreading on the web and the challenges of correcting them. He also delivered a second presentation on behalf of his team regarding the impact of fact-checking integrity on public trust. Markus Reiter-Haas from Graz University of Technology and Beate Klosh from the University of Graz, Austria, discussed polarisation in public opinion across different topics of misinformation. Lastly, Alicia Bargar and Amruta Deshpande explored the issue of affordances across different platforms and how these correspond to different types of vulnerability to misinformation.

The second part of the workshop included a hands-on activity allowing for deeper discussion. Participants were presented with a scenario in which citizens, journalists and policymakers needed support to distinguish fact from fiction in the context of the COVID-19 “infodemic”. They were then invited to reflect on the best existing tools and identify their limits. The discussion showed that participants generally referred to two types of tools. The first type assists users in assessing information trustworthiness based on specific characteristics, directs them to trustworthy sources, or provides an information cascade (mainly for images or film). The second type encourages social media users to think before they share, prompting them to engage critically with information. The limits identified for these tools centred on the automation technologies used. It was also noted that such tools can still be complex for average social media users and demand a certain level of digital literacy.

The last part of the workshop was dedicated to synergies and collaborations among the participants. Potential research project ideas were discussed. Participants also welcomed the invitation to contribute to EUNOMIA’s edited volume. The book will focus on issues around human and societal factors of misinformation, and on the approaches and limitations of sociotechnical solutions.

Fighting and coping with misinformation in pandemic crises: COVINFORM Project kicks off featuring two EUNOMIA partners

December 22nd, 2020

COVID-19 has been categorised as an infodemic by the WHO. It is the first pandemic in which social media has been used on such a wide scale to share both protective information and false information, including misinformation and disinformation. The groups recognised as most vulnerable to COVID-19 may also be most vulnerable to believing and engaging with misinformation (Vijaykumar, 2020). As COVID-19 misinformation has resulted in injuries and fatalities, it is important to address it (Coleman, 2020).

November 2020 saw the EC funded COVINFORM project (Grant Agreement No. 101016247) kick off. The three-year project focuses on analysing and understanding the impact of COVID-19 responses on vulnerable and marginalised groups. COVINFORM features two EUNOMIA partners, Trilateral Research and SYNYO, who will draw on their expertise and knowledge gained during the EUNOMIA project to develop guidance and recommendations for designing effective COVID-19 communication and combating misinformation.

In response to this challenge, WP7 of the COVINFORM project focuses on inclusive COVID-19 communication for behaviour change and misinformation. It will conduct an in-depth analysis of malinformation, disinformation and misinformation to identify insights into how misinformation might affect different groups differently, and produce recommendations for fighting and coping with misinformation during COVID-19 and future pandemic crises.

For further information, please visit the project website or follow the project on Twitter.

COVINFORM is one of 23 new research projects funded by the European Commission with a total of €128 million to address the continuing coronavirus pandemic and its effects. The press release covering the project’s launch is available here.

EUNOMIA at the Industry Forum of GlobeCom 2020

December 22nd, 2020

In the era of the COVID-19 pandemic, social media have become a dominant, direct and highly effective form of news generation and sharing at a global scale. This information is not always trustworthy, as exemplified by the wide spread of misinformation that has proved dangerous to public health. Prof. Charalampos Patrikakis from the University of West Attica, a partner in the EUNOMIA project, co-organised an event focusing on “Fighting Misinformation on Social Networks” at the Industry Forum session of the Global Communications Conference 2020. GlobeCom2020 is one of the IEEE Communications Society’s two flagship conferences dedicated to driving innovation in nearly every aspect of communications.

The event included presentations by academics and industry representatives, followed by an open discussion. Prof. Patrikakis delivered a presentation on “EUNOMIA project: a decentralized approach to fighting fake news”. His presentation covered EUNOMIA’s concept of adopting information hygiene routines as protection against the ‘infodemic’ of rapidly spreading misinformation. It also included a more extensive graphic description of the project’s toolkit with its four interrelated functional components: the information cascade, the Human-as-Trust-Sensor interface, sentiment and subjectivity analysis, and trustworthiness scoring. Participants were also invited to register on EUNOMIA to see how it works in real time.

The pathway to trustworthiness assessment: Sentiment Analysis identification

December 14th, 2020

As the amount of content online grows exponentially, new networks and interactions are also growing tremendously fast. EUNOMIA’s user trustworthiness indicators provide a boost towards fair and balanced social network interaction.

Sentiment analysis is one of EUNOMIA’s trustworthiness indicators, assisting users in assessing the trustworthiness of online information. It relies on the automatic identification of the sentiment expressed in a user post (negative, positive or neutral). A sentiment analysis algorithm employs principles from machine learning and natural language processing. Current trends in the field include AI techniques that outperform traditional dictionary-based approaches.

Dictionary-based techniques work as follows: a list of opinion words, such as adjectives (e.g. excellent, love, supports, expensive, terrible, hate, complicated), nouns, verbs and word phrases, constitutes the prior knowledge for extracting the sentiment polarity of a piece of text. For example, in “I love playing basketball” a dictionary-based method would identify the word “love” and use it to infer the positive polarity of the expression.
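As an illustration, a minimal dictionary-based polarity scorer might look like the following sketch. The word lists here are tiny, made-up examples, not EUNOMIA's actual lexicon:

```python
# Minimal dictionary-based sentiment scorer (illustrative only).
POSITIVE = {"excellent", "love", "supports", "good", "great"}
NEGATIVE = {"expensive", "terrible", "hate", "complicated", "bad"}

def dictionary_polarity(text: str) -> str:
    """Count opinion words from fixed lists and return the dominant polarity."""
    words = text.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

print(dictionary_polarity("I love playing basketball"))  # positive
print(dictionary_polarity("not so much expensive"))      # negative: the modifier is missed
```

The second call illustrates the weakness discussed below: the method sees only the negative word “expensive” and ignores the opinion modifier around it.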

Figure 1. Sentiment Analysis of user opinions

Unfortunately, these methods are unable to grasp long-range sentiment dependencies, sentiment fluctuations or opinion modifiers (e.g. “not so much expensive”, “less terrible”) that exist in abundance in user-generated text.

Figure 2. Demo of how the core of the sentiment analysis component works in EUNOMIA.

We use two models that process user-generated content in parallel. The first model relies on sentiment patterns to extract polarity. For example, in “not so much expensive” the model would identify the relation between “not” and “expensive” and assign positive polarity, whereas a dictionary-based method would rely only on the negative word “expensive”.

The second model is an advanced machine learning model that relies on a trained neural network and can identify sentiment fluctuations of longer range. In summary, the first (pattern-based) model relies on sentiment patterns to extract the sentiment orientation, while the second relies on a neural network trained on labelled data, capable of distinguishing between positive, neutral and negative text with high accuracy.

The output of both models is processed by an ensemble algorithm that decides on the final sentiment classification and the degree to which the models are confident about their predictions.
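One simple way to combine two per-label predictions is a confidence-weighted vote, sketched below. This is purely illustrative; the weights and the actual EUNOMIA ensemble algorithm are assumptions, not the deployed implementation:

```python
from typing import Dict, Tuple

LABELS = ("negative", "neutral", "positive")

def ensemble(pattern_pred: Dict[str, float], neural_pred: Dict[str, float],
             pattern_weight: float = 0.4, neural_weight: float = 0.6) -> Tuple[str, float]:
    """Combine two per-label confidence distributions into one decision.

    Returns the winning label and the combined confidence for it.
    """
    combined = {
        label: pattern_weight * pattern_pred.get(label, 0.0)
        + neural_weight * neural_pred.get(label, 0.0)
        for label in LABELS
    }
    best = max(combined, key=combined.get)
    return best, combined[best]

# "not so much expensive": the pattern model leans positive, the neural model agrees weakly.
label, conf = ensemble(
    {"positive": 0.7, "neutral": 0.2, "negative": 0.1},
    {"positive": 0.5, "neutral": 0.4, "negative": 0.1},
)
print(label, round(conf, 2))  # positive 0.58
```

Weighting by confidence lets a strongly certain model override a weakly certain one, while the final score doubles as the confidence estimate reported to the user.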

The results of the sentiment analysis process provide one of EUNOMIA’s indicators. Sentiment and emotion in language are frequently connected with subjectivity and, on many occasions, with deceitful information. EUNOMIA raises an alert, and the user, by consulting additional meta-information such as EUNOMIA’s other indicators, can investigate the content further and decide whether it is valid and can be safely consumed or shared with the community.

Pantelis Agathangelou, PhD Candidate, University of Nicosia

The featured photo is by Domingo Alvarez E on Unsplash

Eye-witness videos of a terrorist attack – stop and think before you share!

November 10th, 2020

At 20.00 on Monday, 2 November 2020, a terrorist attack took place in Vienna, Austria.[1] Four civilians and the attacker were killed; more than 20 people, including a police officer, were injured. As with other terrorist attacks in recent years, the event was accompanied by a plethora of speculation, rumours and misinformation, as well as real eye-witness videos posted on social media, especially during and in the immediate aftermath of the attack. Social media play an increasing role in such events; while sharing information and videos can unite people in a shared feeling of experience throughout such events, the distribution of this kind of information is problematic for several reasons.

First and foremost, sharing videos of attacks gives terrorists a stage, helping them fulfil their goals: terrorists need publicity to scare people and destabilise societies. It is certainly important to provide information about such events and to warn the population of the attack. However, it is also crucial to avoid giving terrorists the chance to divide societies and to create the hatred they intend to create. Furthermore, eye-witness videos may cause psychological distress and reach audiences such as minors and youths. They may also jeopardise the police investigation and put police officers at risk. During the attack in Vienna, police repeatedly asked the public to submit videos to a dedicated (closed) channel and to avoid sharing rumours and videos on social media. Nevertheless, rumours did spread, misinforming the population about kidnappings, other cities being attacked, the number of victims and attackers, the reasons and motivations behind the attack, and… the list goes on.

Cases like this prove the importance of following information hygiene guidelines. Specifically, the recommendation to ‘stop and think before you share’ goes beyond not sharing misinformation. It targets the responsibility of each and every member of society. Especially during breaking events, it is crucial to be careful about what information to share: mainstream media may be misinformed, and anonymous or otherwise non-trustworthy sources may deliberately or accidentally share rumours. Even in the case of factual information, it may still be better to refrain from sharing disturbing videos, for the reasons mentioned above. This does not mean we should stop talking about such events, and it certainly does not mean any kind of censorship. But it does mean thinking about the consequences of what we say and share on media with such a broad audience.

As such, social media users need to be trained to stop and think about the consequences and implications of a post before they share it, or to refrain from sharing: when in doubt, don’t share. EUNOMIA provides a set of tools that support this process, offering indicators and information on the provenance of information.

[1] see e.g.

Interested in finding out how good you are at spotting misinformation in areas you may or may not know much about? Take part in EUNOMIA’s first pilot!

October 6th, 2020

Are you confident that you can always determine the trustworthiness of what you read on social media? What if you don’t know much about a topic? Can you still do it? Sign up to the Decentralized Chapters EUNOMIA social media platform (powered by Mastodon) and try to identify the 10 trustworthy and 10 untrustworthy posts that our mischievous researchers will inject (based on their scientific expertise) into a discussion of decentralized technologies between 5th and 14th October. For this specific period only, your selections will be recorded centrally by us so that we can determine who gets the most of these 20 correct by the end.

How to join our competition in four simple steps:

  1. Make an account here
  2. Log in
  3. Click on “Local” and wait a moment for the “I trust this” and “I don’t trust this” buttons to appear (this might take a few moments, as it is still an early version)
  4. Decide whether you “trust” or “don’t trust” the posts by clicking the respective icon.

You win points for correctly marking posts as trustworthy or untrustworthy, and lose points if you get them wrong: +1 point for every correct selection and -1 for every incorrect one. For the top scorers, who will be announced the following week, the University of Nicosia has prepared a range of very attractive prizes.

In the case of a tie, the winner will be the participant with the lowest average response time for their correct answers. The response time is the difference between the time one of the 20 posts was made by the researchers and the time the participant correctly selected “I trust this” or “I don’t trust this”.
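The scoring and tie-break rules above can be sketched as follows. The names and data layout are illustrative, not part of the actual competition system:

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class Participant:
    name: str
    correct_response_times: List[float] = field(default_factory=list)  # seconds
    score: int = 0

    def record(self, correct: bool, response_time: float) -> None:
        """+1 for a correct trust/don't-trust selection, -1 for an incorrect one."""
        if correct:
            self.score += 1
            self.correct_response_times.append(response_time)
        else:
            self.score -= 1

def rank(participants: List[Participant]) -> List[Participant]:
    """Sort by score (descending); ties broken by lowest average correct response time."""
    def key(p: Participant) -> Tuple[int, float]:
        avg = (sum(p.correct_response_times) / len(p.correct_response_times)
               if p.correct_response_times else float("inf"))
        return (-p.score, avg)
    return sorted(participants, key=key)

alice, bob = Participant("alice"), Participant("bob")
for correct, t in [(True, 30.0), (True, 50.0), (False, 10.0)]:
    alice.record(correct, t)
for correct, t in [(True, 20.0), (False, 5.0), (True, 90.0)]:
    bob.record(correct, t)
print([p.name for p in rank([alice, bob])])  # alice wins: equal score, avg 40s < 55s
```

Note that only the response times of correct answers enter the tie-break average, matching the rule stated above.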

Some advice you may want to use, not only for this competition, but more widely in social media:

  • Be wary of popular posts. Misinformation travels a lot faster than reliable information.
  • Be cautious of information forwarded to you through your network.
  • Refrain from sharing based only on the headline.
  • Be wary of resharing information solely for its novelty. Misinformation tends to be more novel.
  • Be wary of language that makes you feel emotional. It is designed to go viral, not to inform.
  • Be mindful of your emotions when reading a post. Anger makes you susceptible to partisanship.

Note that this is an early experimental version of EUNOMIA. It will be slow, and the “I trust this” buttons and other EUNOMIA functionality may not appear immediately. Bear with it for a few moments 🙂

You can find further info on UNIC’s programs here 

The Decentralized site can be found here  

Implicit trustworthiness assessment based on users’ reactions to claims

September 15th, 2020

Online textual information has increased tremendously over the years, leading to a demand for information verification. As a result, Natural Language Processing (NLP) research on tasks such as stance detection (Derczynski et al., 2017) and fact verification (Thorne et al., 2018) is gaining momentum as an attempt to automatically identify misinformation on social networks (e.g., Mastodon and Twitter).

To that end, a stance classification model was trained within the scope of EUNOMIA, which involves identifying the attitude of consenting EUNOMIA Mastodon users towards the truthfulness of the rumour they are discussing. In particular, transfer learning was applied to fine-tune the RoBERTa (Robustly optimized BERT) model (Liu et al., 2019) using the publicly available SemEval 2019 Subtask 7A dataset (Gorrell et al., 2019). This dataset contains Twitter threads, and each tweet (e.g., Hostage-taker in supermarket siege killed, reports say. #ParisAttacks –LINK) in the tree-structured thread is categorised into one of the following four categories:

  • Support: the author of the response supports the veracity of the rumour they are responding to (e.g., I’ve heard that also).
  • Deny: the author of the response denies the veracity of the rumour they are responding to (e.g., That’s a lie).
  • Query: the author of the response asks for additional evidence in relation to the veracity of the rumour they are responding to (e.g., Really?).
  • Comment: the author of the response makes their own comment without a clear contribution to assessing the veracity of the rumour they are responding to (e.g., True tragedy).

Our model achieved 85.1% accuracy and a 62.75% macro F1-score. Because this dataset includes posts using informal language (e.g., “OMG that aint right”), the obtained scores are not spectacular; even so, our approach surpasses the state-of-the-art results for this dataset (81.79% accuracy and 61.87% F1-score) (Yang et al., 2019).
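For readers unfamiliar with the reported metrics, accuracy and macro-averaged F1 over the four stance labels can be computed as in this small sketch (the gold/predicted labels below are invented toy data, not from the SemEval evaluation):

```python
from typing import List

LABELS = ["support", "deny", "query", "comment"]

def accuracy(gold: List[str], pred: List[str]) -> float:
    """Fraction of posts whose predicted stance matches the gold label."""
    return sum(g == p for g, p in zip(gold, pred)) / len(gold)

def macro_f1(gold: List[str], pred: List[str]) -> float:
    """Average per-label F1, giving rare labels (e.g. 'deny') equal weight."""
    f1s = []
    for label in LABELS:
        tp = sum(g == label and p == label for g, p in zip(gold, pred))
        fp = sum(g != label and p == label for g, p in zip(gold, pred))
        fn = sum(g == label and p != label for g, p in zip(gold, pred))
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f1s.append(2 * precision * recall / (precision + recall)
                   if precision + recall else 0.0)
    return sum(f1s) / len(f1s)

gold = ["support", "deny", "comment", "comment", "query"]
pred = ["support", "comment", "comment", "comment", "query"]
print(round(accuracy(gold, pred), 2))  # 0.8
print(round(macro_f1(gold, pred), 2))  # 0.7
```

Macro averaging explains why the F1-score is well below the accuracy: the class distribution is heavily skewed towards “comment”, so errors on rare classes such as “deny” drag the macro average down even when overall accuracy is high.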

The service has been containerized and will soon be integrated with the rest of the EUNOMIA platform as another useful trustworthiness indicator for users.


Derczynski, L., Bontcheva, K., Liakata, M., Procter, R., Hoi, G.W., & Zubiaga, A. (2017). SemEval-2017 Task 8: RumourEval: Determining rumour veracity and support for rumours. SemEval@ACL.

Gorrell, G., Bontcheva, K., Derczynski, L., Kochkina, E., Liakata, M., & Zubiaga, A. (2019). SemEval-2019 Task 7: RumourEval: Determining rumour veracity and support for rumours. In Proceedings of SemEval. ACL.

Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., & Stoyanov, V. (2019). RoBERTa: A Robustly Optimized BERT Pretraining Approach. ArXiv, abs/1907.11692.

Thorne, J., Vlachos, A., Christodoulopoulos, C., & Mittal, A. (2018). FEVER: a large-scale dataset for Fact Extraction and VERification. ArXiv, abs/1803.05355.

Yang, R., Xie, W., Liu, C., & Yu, D. (2019). BLCU_NLP at SemEval-2019 Task 7: An Inference Chain-based GPT Model for Rumour Evaluation. SemEval@NAACL-HLT.

Me, myself and I: Filter Bubbles & Echo Chambers

September 11th, 2020

‘Filter bubbles’ and ‘echo chambers’ are popular terms for the phenomenon social scientists call ‘selective exposure’. In brief, the theory of selective exposure (Klapper, 1957) states that people tend to select information that accords with their existing views, and consequently avoid information that contradicts their beliefs and values.

Different digital tools, algorithms and behaviours rely on the collection of personal data to filter and/or rank items in the daily information stream, creating ‘filter bubbles’ and ‘echo chambers’. As a consequence, they result in higher personalisation but also decreasing diversity of information. Diversity of information may refer to either source or content. Source diversity means the inclusion of a multitude of information sources by a news outlet, as well as the variety of news outlets consumed by a recipient. Content diversity means the range of topics and of perspectives on a given topic (Haim, Graefe, & Brosius, 2018).

Despite describing similar phenomena, ‘filter bubbles’ and ‘echo chambers’ are not the same concept. ‘Echo chambers’, on the one hand, describe the phenomenon of being surrounded by like-minded contacts. This might lead to an amplification or reinforcement of pre-existing beliefs. ‘Filter bubbles’, on the other hand, refer to the algorithmic filtering of information to match a user’s needs (Haim, Graefe, & Brosius, 2018). However, there is no consistency in the use of the two terms; for example, Lewandowsky et al. (2017) describe ‘echo chambers’ as the space where “most available information conforms to pre-existing attitudes and biases” (p. 359).

Studies have shown that people are more likely to share articles with which they agree (An, Quercia, & Crowcroft, 2013) and that social media expose the community to a narrower range of information sources, compared to a baseline of information-seeking activities. Research has also shown that the diversity of social media communication is significantly lower than that of interpersonal communication, at both the individual and collective level (Nikolov, Oliveira, Flammini, & Menczer, 2015).

But why do people surround themselves with like-minded contacts, and why do they choose information that confirms what they already believe? There are different answers to this question. The theory of cognitive dissonance (Festinger, 1957) explains this phenomenon by arguing that individuals strive for consistency (or consonance) among their beliefs, attitudes, knowledge, etc. Inconsistencies cause psychological discomfort, which Festinger calls dissonance. Another answer is that surrounding oneself with familiar information helps to cope with, or even overcome, information overload (Pariser, 2011). A third answer refers to the social aspect of social media: because of its sharing mechanisms, discovering information becomes a social endeavour rather than an individual process.

In the context of social media, ‘filter bubbles’ and ‘echo chambers’ therefore allow users to avoid psychological discomfort and information overload, and to engage in a social endeavour in the process of information seeking. However, they pose great risks, leading to self-reinforcement and reduced information diversity (Haim, Graefe, & Brosius, 2018). The tendency to surround oneself with like-minded opinions might also prevent engagement with other ideas. This can facilitate confirmation bias and polarisation (Nikolov, Oliveira, Flammini, & Menczer, 2015; Haim, Graefe, & Brosius, 2018).

But it is not all bad news: ‘echo chambers’ seem to be focused mainly on political discourse (Nikolov, Oliveira, Flammini, & Menczer, 2015), whereas other topic areas are less affected. Furthermore, there are tools that encourage and enable users to seek information beyond their ‘bubble’. The EUNOMIA feature illustrating how information has been spread and changed by different users aims exactly to show how similar information is discussed in different ‘bubbles’.


An, J., Quercia, D., & Crowcroft, J. (2013). Fragmented Social Media: A Look into Selective Exposure to Political News. In WWW 2013 Companion, May 13–17, 2013, Rio de Janeiro, Brazil. ACM.

Festinger, L. (1957). A Theory of Cognitive Dissonance. Stanford, CA: Stanford University Press.

Haim, M., Graefe, A., & Brosius, H.-B. (2018). Burst of the Filter Bubble? Effects of personalization on the diversity of Google News. Digital Journalism, 6(3), 330-343.

Klapper, J. T. (1957). What We Know About the Effects of Mass Communication: The Brink of Hope. The Public Opinion Quarterly, 21(4), 453-474.

Lewandowsky, S., Ecker, U. K., & Cook, J. (2017). Beyond Misinformation: Understanding and Coping with the “Post-Truth” Era. Journal of Applied Research in Memory and Cognition, 6(4), 353–369.

Nikolov, D., Oliveira, D. F., Flammini, A., & Menczer, F. (2015). Measuring online social bubbles. PeerJ Computer Science, 1(38), 1-14.

Pariser, E. (2011). The Filter Bubble: How the New Personalized Web is Changing What We Read and How We Think. New York: Penguin.

Journalists & Social Media – An Impossible Love Affair? The traditional media journalist’s view

September 11th, 2020

Journalists are often accused of producing and spreading ‘fake news’, usually by those in charge when journalists address topics that are “disagreeable” to them.

In fact, all journalists – and ‘traditional journalists’ in particular – try to fight misinformation, especially those working for ‘quality media’ and public-service media institutions. Traditional journalists are obliged by their professional codes of ethics and their institutions to do proper research. They have high ethical standards and rules to avoid introducing and spreading misinformation, as well as information that can be misunderstood or misinterpreted.

Proper research takes a lot of time: checking facts and figures, and re-checking every bit of information, is a must in journalism. However, such research often impedes the fast publication of breaking news, which can be a big competitive disadvantage. Digital sources can therefore be seen as ideal for quick and successful journalistic research. Can social media be part of this “digital toolkit” for traditional journalists?

Although social media are seen as a medium that surfaces new trends and topics very quickly, the quality of the data and the level of biased and misguided information can run against the requirements of proper journalism. Traditional journalists might therefore use social media platforms to pick up certain trends, but cannot rely on them as information sources. Information on social media is not always verified, and its source may be unknown. Furthermore, social media promote the creation of “filter bubbles”, meaning that users tend to surround themselves with similar information. At the same time, social media platforms are an important ‘marketplace’ for promoting traditional journalism ‘products’, as they reach a new generation of readers beyond the usual ‘circles’, e.g. across country borders. All in all, the increasing consumption of news on social media proves problematic for the traditional journalist.

EUNOMIA is the silver lining, promoting the critical media literacy skills of social media users and readers; for traditional journalism, this solution has the potential to open a new era of trustworthy information exchange. For the very first time, users themselves are able to assess and vote on the trustworthiness of social media information based on user-driven indicators. For assessing trustworthiness, the source of information is considered a key indicator. EUNOMIA’s toolkit will therefore encourage users to provide sources, promoting the work of traditional media institutions, which are still seen as the most reliable source of important and proper information (Eurobarometer, 2017). Furthermore, through EUNOMIA, traditional journalists will be able to track each bit of information found on social media down to its very source, supported by the community in evaluating the quality and trustworthiness of a given piece of information.

EUNOMIA and the University of Nicosia Decentralized Chapters Community

September 8th, 2020

The University of Nicosia Decentralized Chapters is a global community of like-minded individuals who share a common passion for blockchain technology and cryptocurrencies. The main objective of the community is the dissemination of blockchain awareness and knowledge in several regions around the world. The aim is to address the general lack of knowledge of the potential benefits of blockchain for growth and innovation, as well as the lack of skills in the area, through various activities offering a holistic perspective of the technology.

Decentralized Chapter activities include, but are not limited to:

  • Education/Training
  • Speaker series/free lectures
  • Workshops and seminars
  • Support for Startups
  • Volunteer Activities

To give the Decentralized community hands-on experience with real decentralized applications, its members were invited to join and test the Decentralized Community social network platform, built as part of the H2020 project EUNOMIA, for online discussion of blockchain technology and cryptocurrencies. Whether to ask questions and learn, to post news, or even to share rumours on which they want the community’s opinion, this is the right place. The intuitive Twitter-like interface of the federated social network Mastodon and the additional EUNOMIA toolkit features, such as the trust/don’t trust buttons, help members of the Decentralized community discuss and express their trust or mistrust of the latest information on blockchain and cryptocurrencies. EUNOMIA features such as information provenance support their evaluation of online information, aiming to sustain the quality of the community’s interactions.

Members who have already joined the Decentralized Social Network Platform have access to:

  • The latest news on Decentralized Chapters’ activities
  • Early access to Decentralized Chapters’ videos and the chance to discuss with presenters
  • Videos and links related to blockchain and decentralized technology developments shared by community members
  • The chance to ask technical questions on blockchain and distributed ledger technologies and get answers from community experts
  • Information on upcoming events, conferences, webinars, and learning and educational material in a single place
  • A growing community space for decentralized-technology professionals and enthusiasts to share and learn about blockchain technology
  • And last but not least – interaction with the technology itself!

Join the Decentralized community or lead your own Chapter at

Join the EUNOMIA Decentralised community instance on Mastodon at

For further information contact