EUNOMIA’s project coordinator Prof. George Loukas on Blasting Talks

January 10th, 2021

Prof. George Loukas, EUNOMIA project coordinator and Head of the Internet of Things and Security Research Group, was featured on Blasting Talks. He talked about EUNOMIA and the project’s approach to tackling misinformation, which places the user at the centre of the toolkit’s design and development.

Read the full interview here

Pinelopi Troullinou, EUNOMIA partner from Trilateral Research, on Blasting Talks

December 20th, 2020

Pinelopi Troullinou, Research Analyst at Trilateral Research, explains in an interview with Blasting Talks the importance of end-users to the project. Through co-design methods, end-users contribute their needs and preferences, which feed into the development of the EUNOMIA toolkit. Pinelopi also explains that the project has adopted a Privacy, Ethical and Social Impact Assessment (PIA+) to make sure that it respects ethical and societal values. EUNOMIA aims to shift the social media culture from “like” to “trust”, prompting users to reflect when engaging with information online. In this context, EUNOMIA provides tools that support social media users in adopting an “information hygiene routine” to protect themselves and their networks against misinformation.

Read the full interview here

EUNOMIA’s partner Sorin Adam Matei from SIMAVI in an interview with Blasting Talks

November 29th, 2020

Sorin Adam Matei, EUNOMIA partner representing SIMAVI and professor at Purdue University, was featured on Blasting Talks. He highlighted the project’s approach of encouraging social media users to reflect on their engagement with information online. EUNOMIA does not dictate which information should or should not be trusted. Instead, we encourage users to deliberate on online information, providing tools to assist this process.

You can read the full article here

EUNOMIA at SocInfo2020: “Challenging Misinformation: Exploring Limits and Approaches”

October 30th, 2020

The EUNOMIA project joined forces with the H2020 project Co-Inform to deliver the workshop “Challenging Misinformation: Exploring Limits and Approaches” at the Social Informatics Conference 2020 (SocInfo2020) on 6th October 2020.

Pinelopi Troullinou (Trilateral Research) and Diotima Bertel (SYNYO) from EUNOMIA project invited researchers and practitioners to reflect on the existing approaches and the limitations of current socio-technical solutions to tackle misinformation. The objective of the workshop was to bring together stakeholders from diverse backgrounds to develop collaborations and synergies towards the common goal of social media users’ empowerment.

Four papers were presented at the workshop. Gautam Kishore Shahi from the University of Duisburg-Essen in Germany discussed the different conspiracy theories related to COVID-19 spreading on the web and the challenges of correcting them. He also delivered a second presentation from his team on the impact of fact-checking integrity on public trust. Markus Reiter-Haas from Graz University of Technology and Beate Klosh from the University of Graz, Austria, discussed polarisation in public opinion across different topics of misinformation. Lastly, Alicia Bargar and Amruta Deshpande explored the issue of affordances across different platforms and how these correspond to different types of vulnerability to misinformation.

The second part of the workshop included a hands-on activity that allowed for deeper discussion. Participants were presented with a scenario in which citizens, journalists and policymakers needed support to distinguish fact from fiction in the context of the COVID-19 “infodemic”. They were then invited to reflect on the best existing tools and to identify their limits. The discussion showed that participants generally referred to two types of tools. The first were tools that assist users in assessing information trustworthiness based on specific characteristics, direct them to trustworthy sources, or provide an information cascade (mainly for images or video). The second were tools that enable social media users to think before they share, encouraging them to engage critically with information. The limits identified for these tools centred on the automation technologies they use. It was also noted that such tools can still be complex for the average social media user and demand a certain level of digital literacy.

The last part of the workshop was dedicated to synergies and collaborations among the participants. Potential research project ideas were discussed, and participants welcomed the invitation to contribute to EUNOMIA’s edited volume. The book will focus on human and societal factors of misinformation and on the approaches and limitations of sociotechnical solutions.

Interested in finding out how good you are at spotting misinformation in areas you may or may not know much about? Take part in EUNOMIA’s first pilot!

October 6th, 2020

Are you confident that you can always determine the trustworthiness of what you read on social media? What if you don’t know much about a topic? Can you still do it? Sign up to Decentralized, the EUNOMIA social media platform (powered by Mastodon), and try to identify the 10 trustworthy and 10 untrustworthy posts that our mischievous researchers will inject (based on their scientific expertise) into a discussion of decentralized technologies between 5th and 14th October. For this specific period only, your selections will be recorded centrally by us so that we can determine who has got the most of these 20 correct by the end.

How to join our competition in four simple steps:

  1. Make an account here
  2. Log in
  3. Click on “Local” and wait a moment for the “I trust this” and “I don’t trust this” buttons to appear (this might take a few moments because it is still an early version)
  4. Decide whether you “trust” or “don’t trust” each post by clicking the respective icon.

You win points for correctly marking posts as trustworthy or untrustworthy, and lose points if you get them wrong: +1 point for every correct selection and -1 point for every incorrect one. The top scorers, who will be announced the following week, will receive a range of very attractive prizes prepared by the University of Nicosia.

In case of a tie, the winner will be the participant with the lowest average response time on their correct answers. The response time is the time difference between when one of the 20 posts was made by the researchers and when the participant correctly selected “I trust this” or “I don’t trust this”.
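As a rough sketch, the scoring and tie-breaking rules above could be computed as follows. The data structures and function names are illustrative, not part of the actual EUNOMIA implementation:

```python
# Illustrative sketch of the competition scoring described above:
# +1 per correct selection, -1 per incorrect one; ties broken by the
# lowest average response time over the *correct* answers only.

def score(selections):
    """selections: list of (correct: bool, response_time_seconds: float)."""
    return sum(1 if correct else -1 for correct, _ in selections)

def avg_correct_response_time(selections):
    """Average response time on correct answers; infinity if none correct."""
    times = [t for correct, t in selections if correct]
    return sum(times) / len(times) if times else float("inf")

def rank(participants):
    """participants: dict name -> list of (correct, response_time).
    Returns names ordered by highest score, then fastest correct answers."""
    return sorted(
        participants,
        key=lambda name: (-score(participants[name]),
                          avg_correct_response_time(participants[name])),
    )
```

Sorting by the pair (negative score, average time) applies the tie-break only between participants with equal scores, matching the rule described above.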

Some advice you may want to use, not only for this competition, but more widely in social media:

  • Be wary of popular posts. Misinformation travels a lot faster than reliable information.
  • Be cautious of information forwarded to you through your network.
  • Refrain from sharing based only on the headline.
  • Be wary of resharing information solely for its high novelty. Misinformation tends to be more novel.
  • Be wary of language that makes you feel emotional. It is designed to go viral, not to inform.
  • Be mindful of your emotions when reading a post. Anger makes you susceptible to partisanship.

Note that this is an early experimental version of EUNOMIA. It may be slow, and the “I trust this” buttons and other EUNOMIA functionality may not appear immediately. Bear with it for a few moments 🙂

You can find further info on UNIC’s programs here 

The Decentralized site can be found here  

Empowering the social media user to assess information trustworthiness: Image similarity detection

September 7th, 2020

With the overarching objective of assisting users in determining the trustworthiness of information on social media through an intermediary-free approach, EUNOMIA employs a decentralised architecture based on a Mastodon instance and applies AI technology to generate an information cascade for each post. The cascade facilitates the discovery and visualisation of the source of information and of how the information is shared and changed over time, providing users with provenance information when they are determining a post’s trustworthiness. The information cascade is generated not only from the text content of a post, via paraphrase identification using natural language processing (NLP) techniques, but also from its image content, via image verification using computer vision techniques.
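EUNOMIA’s actual paraphrase-identification model is not described here, so the sketch below uses a simple character-level similarity (Python’s `difflib.SequenceMatcher`) as a stand-in to illustrate the cascade-building idea: each post is linked to the most similar earlier post, if any, forming chains of re-shared and modified versions. The threshold and data shapes are assumptions for illustration only:

```python
# Illustrative sketch: link each post to the most similar earlier post,
# forming an information cascade. A character-level similarity stands in
# for EUNOMIA's real NLP paraphrase-identification model.
from difflib import SequenceMatcher

def similarity(a, b):
    """Rough text similarity in [0, 1], case-insensitive."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def build_cascade(posts, threshold=0.7):
    """posts: list of (post_id, text) in chronological order.
    Returns dict post_id -> parent post_id (None for a cascade root)."""
    parents = {}
    for i, (pid, text) in enumerate(posts):
        best, best_sim = None, threshold
        for earlier_id, earlier_text in posts[:i]:
            s = similarity(text, earlier_text)
            if s >= best_sim:  # keep the closest sufficiently similar post
                best, best_sim = earlier_id, s
        parents[pid] = best
    return parents
```

A production system would use a learned paraphrase model rather than character overlap, but the cascade structure (each post pointing to its likely origin) would be built the same way.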

The image verification algorithm is implemented to determine whether a given pair of images is similar. Advances in the image verification field fall into two broad areas: image embedding and metric learning. In image embedding, a robust and discriminative descriptor is learnt to represent each image as a compact feature vector, or embedding. EUNOMIA employs state-of-the-art feature descriptors generated by an existing convolutional neural network (CNN), which learns features on its own. In metric learning, a distance metric is used to compare CNN embeddings in an embedding space and effectively measure the similarity of images. Identical images obtain a 100% similarity score, similar images gain a high similarity score, and different images, as well as some adversarial images, have a lower similarity score.
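The comparison step can be sketched as follows, assuming the CNN embeddings have already been extracted. Cosine similarity is used here as a simple example of a distance metric; EUNOMIA’s actual metric is learned, and the vectors below are toy stand-ins rather than real CNN outputs:

```python
# Sketch of the metric step described above: compare two precomputed
# image embeddings and scale cosine similarity to a 0-100% score.
import numpy as np

def similarity_percent(emb_a, emb_b):
    """Cosine similarity between two embeddings, scaled to 0-100%."""
    a = np.asarray(emb_a, dtype=float)
    b = np.asarray(emb_b, dtype=float)
    cos = (a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return 100.0 * max(float(cos), 0.0)  # clamp negatives to 0%
```

Identical embeddings score 100%, orthogonal (unrelated) embeddings score 0%, and near-duplicates land in between, mirroring the behaviour described above.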

With the implementation of the image similarity functionality, EUNOMIA generates the information cascade by considering both the text and the image content of a social media post. The EUNOMIA platform also has the potential to retrieve similar-looking images given a reference image from a EUNOMIA user.

Practicing information hygiene routine to flatten the curve of the ‘infodemic’ – EUNOMIA project’s recommendations

August 31st, 2020

The Covid-19 outbreak has reignited the debate about the dangers of misinformation on social media. During the pandemic, myths about coronavirus cures and treatments, its origins and the reasons behind it spread widely on social network platforms, leading in some cases to dangerous and even fatal actions such as bleach consumption. In response, António Guterres, the Secretary-General of the United Nations, urged action against the ‘infodemic’ of misinformation.

Gaps in information hygiene guidelines

Framing misinformation within such a context places social media users at the centre of this multi-layered social phenomenon and demands a new approach to addressing it. Social media users need to adopt what we call ‘information hygiene routines’ to protect themselves and their networks against the ‘infodemic’ of rapidly spreading misinformation. We define an information hygiene routine as the practice of evaluating online information so as to minimise the risk of consuming misinformation and spreading it to one’s network. This practice differs significantly from fact-checking and fake-news detection, which focus on actively detecting and identifying ‘pathogens’, rather than on a daily routine aimed at avoiding ‘infection’.

Information hygiene guidelines such as “check the source of information”, “check whether the account is a bot”, and “flag untrustworthy information for the benefit of others” are regularly recommended by fact-checkers, journalists and media literacy experts to help limit the spread of misinformation. Such recommendations are undoubtedly important, but they are often too time-consuming, difficult or complicated for users to adopt as part of their everyday routine.

Illusory Truth Effect

The European H2020-funded project EUNOMIA addresses this gap by developing tools that assist social media users in practising information hygiene routines so as to flatten the curve of the ‘infodemic’. The EUNOMIA toolkit cultivates media literacy skills, empowering social media users to evaluate the trustworthiness of online information themselves. While trustworthiness is related to truthfulness, the two concepts differ significantly. People do not always seek to verify whether online information is true; in some cases, the verification process can be very complex and difficult. Trustworthiness, in this sense, can be considered more important when consuming and spreading information. In fact, a person is inclined to perceive information as trustworthy and credible simply because they are very familiar with it; this is what social psychology calls the “illusory truth effect”. Trustworthiness, then, is a subjective quality and is in the eye of the beholder. Tools supporting the individual evaluation of trustworthiness are therefore key to slowing the spread of misinformation and minimising its risks.

EUNOMIA project’s approach

EUNOMIA adopts a positive-first approach to the information trustworthiness challenge in social media, empowering users to critically assess the information they consume and to protect their networks against the spread of misinformation. EUNOMIA provides a toolkit in the form of a social media companion that can currently be deployed on decentralised and open-access social media platforms such as Mastodon and Diaspora*. The companion offers multiple trustworthiness indicators from which users can select and display their preferred ones to support their assessment. These may include indicators of bot activity, such as the ratio of followers to accounts followed, as well as other indicators co-developed with social media users themselves or identified in the scientific literature, such as the objectivity of a post. EUNOMIA also visualises the modifications of online information between different users’ posts in an information cascade. This means that EUNOMIA users can see how a piece of information may have changed when shared or re-shared by different users and/or at different times. The user can thus see all the different versions of the same piece of information and the ‘journey’ of modifications it has undergone.
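One of the indicators mentioned above, the follower-to-following ratio as a coarse signal of bot-like activity, can be sketched as below. The thresholds and the function name are illustrative assumptions, not EUNOMIA’s actual calibration:

```python
# Illustrative sketch of a single trustworthiness indicator: accounts
# that follow very many others while attracting few followers are
# sometimes flagged as showing bot-like behaviour. Thresholds are
# assumptions for illustration only.

def follower_ratio_indicator(followers: int, following: int) -> str:
    ratio = followers / following if following else float("inf")
    if following > 1000 and ratio < 0.1:
        return "possible bot-like following pattern"
    return "no ratio-based flag"
```

In the EUNOMIA design, a signal like this would be shown alongside other user-selected indicators (e.g. post objectivity, trustworthiness votes) rather than acting as a verdict on its own.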

EUNOMIA encourages the active and collective participation of social media users in stopping the spread of misinformation. Adopting user contribution guidelines, such as the recommendation to ‘flag untrustworthy information for the benefit of others’, EUNOMIA enables users to vote on content trustworthiness and to act as trust references in their network. The number of votes constitutes one of several trustworthiness indicators that other users can draw on to evaluate information trustworthiness.

EUNOMIA project’s recommendations

EUNOMIA has developed the first systematic set of information hygiene recommendations, which fall into four categories:

a) source of information

b) content

c) language

d) action to mitigate risk.

This set emerged from thorough desk-based research leading to the identification and analysis of a large number of guidelines available online. These guidelines were then evaluated on their practicality and the evidence for their effectiveness. The identification and evaluation were conducted by an interdisciplinary team of EUNOMIA researchers, who assessed practicality in terms of the expertise and time required of users, and effectiveness on the basis of scientific evidence. The resulting set of recommendations, such as “check whether the author is anonymous” and “check whether the language is used to make you emotional”, will be tested with end-users and will inform the further development of the EUNOMIA toolkit.

Disclaimer: This post was first published on Trilateral Research website