EUNOMIA at SocInfo2020: Challenging Misinformation; Exploring Limits and Approaches

December 22nd, 2020

The EUNOMIA project joined forces with the H2020 project Co-Inform to deliver the workshop “Challenging Misinformation: Exploring Limits and Approaches” at the Social Informatics Conference 2020 (SocInfo2020) on 6th October 2020.

Pinelopi Troullinou (Trilateral Research) and Diotima Bertel (SYNYO) from the EUNOMIA project invited researchers and practitioners to reflect on existing approaches and the limitations of current socio-technical solutions for tackling misinformation. The objective of the workshop was to bring together stakeholders from diverse backgrounds to develop collaborations and synergies towards the common goal of empowering social media users.

Four papers were presented at the workshop. Gautam Kishore Shahi from the University of Duisburg-Essen in Germany discussed the different conspiracy theories related to COVID-19 spreading on the web and the challenges of correcting them. He also delivered a second presentation on behalf of his team regarding the impact of fact-checking integrity on public trust. Markus Reiter-Haas from Graz University of Technology and Beate Klosh from the University of Graz, Austria, discussed polarisation in public opinion across different topics of misinformation. Lastly, Alicia Bargar and Amruta Deshpande explored how affordances differ across platforms and how this corresponds to different types of vulnerability to misinformation.

The second part of the workshop included a hands-on activity allowing for deeper discussion. Participants were presented with a scenario in which citizens, journalists and policymakers needed support to distinguish fact from fiction in the context of the COVID-19 “infodemic”. They were then invited to reflect on the best existing tools and to identify their limits. The discussion showed that participants generally referred to two types of tools: tools that assist users in assessing information trustworthiness based on specific characteristics, direct them to trustworthy sources, or provide an information cascade (mainly for images or video); and tools that encourage social media users to think before they share and to engage critically with information. The limitations discussed centred on the automation technologies these tools rely on. It was also noted that such tools can still be too complex for average social media users and demand a certain level of digital literacy.

The last part of the workshop was dedicated to synergies and collaborations among the participants. Potential research project ideas were discussed, and participants welcomed the invitation to contribute to EUNOMIA’s edited volume. The book will focus on the human and societal factors of misinformation and on the approaches and limitations of socio-technical solutions.

Fighting and coping with misinformation in pandemic crises; COVINFORM Project kicks off featuring two EUNOMIA partners

December 22nd, 2020

The COVID-19 pandemic has been accompanied by what the WHO has termed an ‘infodemic’. It is the first pandemic in which social media has been used on such a wide scale to share both protective information and false information, including misinformation and disinformation. The groups recognised as most vulnerable to COVID-19 may also be most vulnerable to believing and engaging with misinformation (Vijaykumar, 2020). Because acting on COVID-19 misinformation has resulted in injuries and fatalities, addressing it is important (Coleman, 2020).

November 2020 saw the EC-funded COVINFORM project (Grant Agreement No. 101016247) kick off. The three-year project focuses on analysing and understanding the impact of COVID-19 responses on vulnerable and marginalised groups. COVINFORM features two EUNOMIA partners, Trilateral Research and SYNYO, who will draw on the expertise and knowledge gained during the EUNOMIA project to develop guidance and recommendations for designing effective COVID-19 communication and combating misinformation.

In response to this challenge, WP7 of the COVINFORM project focuses on inclusive COVID-19 communication for behaviour change and for countering misinformation. It will conduct an in-depth analysis of malinformation, disinformation and misinformation to identify how misinformation may affect different groups differently, and will produce recommendations for fighting and coping with misinformation during COVID-19 and future pandemic crises.

For further information, please visit the project website (https://www.covinform.eu) or follow the project on Twitter (https://twitter.com/COVINFORM_EU).                                                                                                                                        

COVINFORM is one of 23 new research projects funded by the European Commission with a total of €128 million to address the continuing coronavirus pandemic and its effects. The press release covering the project’s launch is available here.

EUNOMIA at the Industry Forum of GlobeCom 2020

December 22nd, 2020 by

In the era of the COVID-19 pandemic, social media have become a dominant, direct and highly effective form of news generation and sharing on a global scale. This information is not always trustworthy, as exemplified by the widespread misinformation that has proved dangerous to public health. Prof. Charalampos Patrikakis from the University of West Attica, a partner in the EUNOMIA project, co-organised an event on “Fighting Misinformation on Social Networks” at the Industry Forum session of the Global Communications Conference 2020. GlobeCom2020 is one of the IEEE Communications Society’s two flagship conferences dedicated to driving innovation in nearly every aspect of communications.

The event included presentations by academics and industry representatives, followed by an open discussion. Prof. Patrikakis delivered a presentation on “EUNOMIA project: a decentralized approach to fighting fake news”, which introduced EUNOMIA’s concept of adopting information hygiene routines for protection against the ‘infodemic’ of rapidly spreading misinformation. The presentation also included a more detailed graphical description of the project’s toolkit and its four interrelated functional components: the information cascade, the human-as-trust-sensor interface, sentiment and subjectivity analysis, and trustworthiness scoring. Participants were also invited to register on EUNOMIA to see how it works in real time.

The pathway to trustworthiness assessment; Sentiment Analysis identification

December 14th, 2020

As the amount of content online grows exponentially, new networks and interactions are also growing at tremendous speed. EUNOMIA’s user trustworthiness indicators are a step towards fairer and more balanced interaction on social networks.

Sentiment analysis is one of EUNOMIA’s trustworthiness indicators, assisting users in assessing the trustworthiness of online information. It relies on the automatic identification of the sentiment expressed in a user post (negative, positive, or neutral). A sentiment analysis algorithm employs principles from machine learning and natural language processing. Current trends in the field include AI techniques that outperform traditional dictionary-based approaches.

Dictionary-based techniques work as follows: a list of opinion words, such as adjectives (e.g. excellent, love, supports, expensive, terrible, hate, complicated), nouns, verbs and word phrases, constitutes the prior knowledge for extracting the sentiment polarity of a piece of text. For example, in “I love playing basketball” a dictionary-based method would identify and consider the word “love” to infer the positive polarity of the expression.
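To make the idea concrete, here is a minimal sketch of a dictionary-based scorer in Python. The word lists and the simple counting rule are illustrative assumptions, not EUNOMIA’s actual lexicon or algorithm.

```python
# Minimal sketch of a dictionary-based sentiment scorer.
# The word lists below are illustrative, not EUNOMIA's actual lexicon.
POSITIVE = {"excellent", "love", "supports", "great"}
NEGATIVE = {"expensive", "terrible", "hate", "complicated"}

def dictionary_polarity(text: str) -> str:
    """Classify text as positive/negative/neutral by counting opinion words."""
    tokens = text.lower().split()
    score = sum(t in POSITIVE for t in tokens) - sum(t in NEGATIVE for t in tokens)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

print(dictionary_polarity("I love playing basketball"))  # -> positive
```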

Figure 1. Sentiment Analysis of user opinions

Unfortunately, these methods are unable to grasp long-range sentiment dependencies, sentiment fluctuations or opinion modifiers (e.g. “not so much expensive”, “less terrible”) that exist in abundance in user-generated text.

Figure 2. Demo of how the core of the sentiment analysis component works in EUNOMIA.

We use two models that process user-generated content in parallel. The first model relies on sentiment patterns to extract polarity. For example, in “not so much expensive” the model identifies the relation between “not” and “expensive” and assigns positive polarity, whereas a dictionary-based method would rely only on the negative word “expensive”.
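A hedged sketch of such a pattern rule is shown below. The negator list, the three-token window and the word weights are assumptions for illustration rather than the patterns actually used in EUNOMIA.

```python
import re

# Illustrative negation-aware scoring; the lexicon and 3-token window are assumptions.
NEGATORS = {"not", "never", "no"}
LEXICON = {"expensive": -1, "terrible": -1, "excellent": 1, "love": 1}

def pattern_polarity(text: str) -> int:
    """Score a phrase, flipping an opinion word's sign if a negator precedes it."""
    tokens = re.findall(r"\w+", text.lower())
    score = 0
    for i, tok in enumerate(tokens):
        if tok in LEXICON:
            negated = any(w in NEGATORS for w in tokens[max(0, i - 3):i])
            score += -LEXICON[tok] if negated else LEXICON[tok]
    return score

print(pattern_polarity("not so much expensive"))  # -> 1 (negation flips "expensive")
print(pattern_polarity("really expensive"))       # -> -1
```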

The second model is a machine learning model that relies on a trained neural network and can identify longer-range sentiment fluctuations. In summary, the first (pattern-based) model relies on sentiment patterns to extract the sentiment orientation, while the second relies on a neural network trained on labelled data that can distinguish between positive, neutral and negative text with high accuracy.
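For a flavour of the learned approach, the snippet below uses a pretrained classifier from the Hugging Face transformers library as a stand-in. EUNOMIA’s own network, labels and training data are project-specific; note that the default model here distinguishes only positive and negative, whereas the project’s model covers three classes.

```python
# Stand-in for the neural model, using an off-the-shelf pretrained classifier.
# EUNOMIA's own network, labels and training data are project-specific.
from transformers import pipeline  # pip install transformers

classifier = pipeline("sentiment-analysis")  # downloads a default pretrained model

result = classifier("The upgrade is not nearly as bad as I feared, I quite like it.")[0]
print(result["label"], round(result["score"], 3))  # e.g. POSITIVE 0.99
```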

The output of both models is processed by an ensemble algorithm that decides on the final sentiment classification and the degree to which the models are confident in their predictions.
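A minimal sketch of such an ensemble step, assuming each model returns a label together with a confidence score, is shown below. The confidence-weighted voting is an illustrative choice, not EUNOMIA’s published algorithm.

```python
# Illustrative ensemble: confidence-weighted vote over the two models' outputs.
def ensemble(pattern_label: str, pattern_conf: float,
             neural_label: str, neural_conf: float) -> tuple[str, float]:
    scores = {"positive": 0.0, "neutral": 0.0, "negative": 0.0}
    scores[pattern_label] += pattern_conf
    scores[neural_label] += neural_conf
    label = max(scores, key=scores.get)                          # final classification
    confidence = scores[label] / (pattern_conf + neural_conf)    # overall confidence
    return label, confidence

print(ensemble("positive", 0.6, "positive", 0.9))  # ('positive', 1.0)
print(ensemble("neutral", 0.4, "negative", 0.8))   # ('negative', ~0.67)
```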

The results of the sentiment analysis process provide one of EUNOMIA’s indicators. Sentiment and emotion in language are frequently connected with subjectivity and, on many occasions, with deceitful information. EUNOMIA raises an alert, and the user, by consulting additional meta-information such as EUNOMIA’s other indicators, can investigate the content further and decide whether it is valid and can safely be consumed or shared with the community.

Pantelis Agathangelou, PhD Candidate, University of Nicosia

The featured photo is by Domingo Alvarez E on Unsplash

Interested in finding out how good you are at spotting misinformation in areas you may or may not know much about? Take part in EUNOMIA’s first pilot!

October 6th, 2020

Are you confident that you can always determine the trustworthiness of what you read on social media? What if you don’t know much about a topic? Can you still do it? Sign up to the Decentralized EUNOMIA social media platform (powered by Mastodon) and try to identify the 10 trustworthy and 10 untrustworthy posts that our mischievous researchers will inject (based on their scientific expertise) into a discussion of decentralized technologies between 5th and 14th October. For this specific period only, your selections will be recorded centrally by us so that we can determine who has got the most of these 20 correct by the end.

How to join our competition in a few simple steps:

  1. Make an account here
  2. Log in
  3. Click on “Local” and wait a moment for the “I trust this” and “I don’t trust this” buttons to appear (this might take a few moments because it is still an early version)
  4. Decide whether you “trust” or “don’t trust” the posts by clicking the respective icon.

You win points for correctly marking posts as trustworthy or untrustworthy, and lose points if you get them wrong. For every correct selection you will get +1 point, and for every incorrect one -1 point. For the top scorers, who will be announced the following week, the University of Nicosia has prepared a range of very attractive prizes.

In the case of a tie, the winner will be the participant with the lowest average response time on their correct answers. The response time is the difference between the time one of the 20 posts was made by the researchers and the time the participant correctly selected “I trust this” or “I don’t trust this”.
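As a worked example of the scoring and tie-break rules, the sketch below computes one participant’s score and average response time. The timestamps and data structure are hypothetical; only the +1/-1 rule and the response-time tie-break come from the rules above.

```python
from datetime import datetime

# Hypothetical record for one participant: (post time, answer time, answer correct?)
answers = [
    (datetime(2020, 10, 6, 9, 0),  datetime(2020, 10, 6, 9, 5),   True),
    (datetime(2020, 10, 7, 14, 0), datetime(2020, 10, 7, 16, 0),  False),
    (datetime(2020, 10, 8, 11, 0), datetime(2020, 10, 8, 11, 30), True),
]

score = sum(1 if correct else -1 for _, _, correct in answers)           # +1 / -1 rule
response_times = [(answered - posted).total_seconds() / 60
                  for posted, answered, correct in answers if correct]   # correct answers only
average_response = sum(response_times) / len(response_times)             # tie-break: lower wins

print(score, average_response)  # 1 point, 17.5 minutes
```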

Some advice you may want to use, not only for this competition, but more widely in social media:

  • Be wary of popular posts. Misinformation travels a lot faster than reliable information
  • Be cautious of information forwarded to you through your network
  • Refrain from sharing based only on the headline
  • Be wary of resharing information solely for its high novelty. Misinformation tends to be more novel.
  • Be wary of language that makes you feel emotional. It is often designed to go viral, not to inform.
  • Be mindful of your emotions when reading a post. Anger makes you susceptible to partisanship.

Note that this is an early experimental version of EUNOMIA. It will be slow, and the “I trust this” buttons and other EUNOMIA functionality may not appear immediately. Please bear with it for a few moments 🙂

You can find further info on UNIC’s programs here 

The Decentralized site can be found here  

Practicing information hygiene routine to flatten the curve of the ‘infodemic’ – EUNOMIA project’s recommendations

August 31st, 2020

The COVID-19 outbreak has raised afresh the debate about the dangers of misinformation on social media. During the pandemic, myths about coronavirus cures and treatments, its origins and the reasons behind it spread widely on social network platforms, in some cases leading to dangerous and even fatal actions such as drinking bleach. To this end, António Guterres, the Secretary-General of the United Nations, urged action to address the ‘infodemic’ of misinformation.

Gaps in information hygiene guidelines

Framing misinformation within such a context places social media users at the centre of this multi-layered social phenomenon and demands a new approach to addressing it. To this end, social media users need to adopt what we call ‘information hygiene routines’ to protect themselves and their networks against the ‘infodemic’ of rapidly spreading misinformation. We define an information hygiene routine as the practice of evaluating online information so as to minimise the risk of consuming misinformation and spreading it to one’s network. This practice differs significantly from fact-checking and fake news detection, which focus on actively detecting and identifying ‘pathogens’ rather than on a daily routine aimed at avoiding “infection”.

Information hygiene guidelines such as “check the source of information”, “check whether the account is a bot”, and “flag untrustworthy information for the benefit of others” are regularly recommended by fact-checkers, journalists and media literacy experts to help limit the spread of misinformation. Such recommendations are undoubtedly important, but they are often too time-consuming, difficult or complicated for users to adopt as part of their everyday routine.

Illusory Truth Effect

The European H2020-funded project EUNOMIA addresses this gap by developing tools to assist social media users in practising information hygiene routines so as to flatten the curve of the ‘infodemic’. The EUNOMIA toolkit cultivates media literacy skills, empowering social media users to evaluate the trustworthiness of online information themselves. While trustworthiness is related to truthfulness, the two concepts differ significantly. People do not always seek to verify whether online information is true, and in some cases the verification process can be very complex and difficult. Trustworthiness, in this sense, can be considered more important when consuming and spreading information. In fact, a person is inclined to perceive information as trustworthy and credible simply because they are very familiar with it; this is what social psychology calls the “illusory truth effect”. Trustworthiness, then, is a subjective quality; it is in the eye of the beholder. Tools supporting the individual evaluation of trustworthiness are therefore key to slowing down the spread of misinformation and minimising its risks.

EUNOMIA project’s approach

EUNOMIA is adopting a positive-first approach to the information trustworthiness challenge in social media, which empowers users to critically assess the information they consume and to protect their network against the spread of misinformation. EUNOMIA provides a toolkit in the form of a social media companion that can currently be implemented in decentralised, open social media platforms such as Mastodon and Diaspora*. The social media companion offers multiple trustworthiness indicators, from which users select and display their preferred ones to support their assessment. These may include indicators of bot activity, such as the ratio of followers to following, as well as other indicators co-developed with social media users themselves or identified in the scientific literature, such as the objectivity of a post. EUNOMIA also visualises the modifications of online information between different users’ posts in an information cascade. This means that EUNOMIA users can see how a piece of information may have changed when shared or re-shared by different users or at different points in time, so they can see all the different versions of the same piece of information and the ‘journey’ of modifications it has undergone.
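As an illustration of one such indicator, the sketch below computes a simple follower-to-following ratio as a bot-activity signal. The threshold and the interpretation are assumptions for demonstration, not EUNOMIA’s actual scoring.

```python
# Illustrative bot-activity indicator based on the followers/following ratio.
# The threshold is an assumption for demonstration, not an EUNOMIA value.
def follower_ratio_indicator(followers: int, following: int,
                             suspicious_below: float = 0.1) -> dict:
    """Flag accounts that follow many users but are followed back by very few."""
    ratio = followers / following if following else float("inf")
    return {"ratio": round(ratio, 3), "possible_bot": ratio < suspicious_below}

print(follower_ratio_indicator(followers=24, following=4800))
# {'ratio': 0.005, 'possible_bot': True}
```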

EUNOMIA encourages the active and collective participation of social media users in stopping the spread of misinformation. Adopting user contribution guidelines, such as the recommendation to ‘flag untrustworthy information for the benefit of others’, EUNOMIA enables users to vote on content trustworthiness and act as a trust reference in their network. The number of votes constitutes one of several trustworthiness indicators that other users can draw on to evaluate information trustworthiness.
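A minimal sketch of how such a vote-based indicator could be tallied is shown below; the votes and the trust-share calculation are hypothetical.

```python
from collections import Counter

# Hypothetical votes cast on a single post by EUNOMIA users.
votes = ["trust", "trust", "distrust", "trust", "trust"]

tally = Counter(votes)
trust_share = tally["trust"] / sum(tally.values())  # one indicator among several

print(dict(tally), f"{trust_share:.0%}")  # {'trust': 4, 'distrust': 1} 80%
```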

EUNOMIA project’s recommendations

EUNOMIA has developed the first systematic set of information hygiene recommendations, which fall into four categories:

a) source of information

b) content

c) language

d) action to mitigate risk.

This set emerged from thorough desk-based research that led to the identification and analysis of a large number of guidelines available online. These guidelines were then evaluated based on their practicality and the evidence of their effectiveness. The identification and evaluation was conducted by an interdisciplinary team of EUNOMIA researchers, who assessed practicality in terms of the expertise and time required of users to adopt each guideline, and effectiveness against the available scientific evidence. The resulting set of recommendations, such as “check whether the author is anonymous” and “check whether the language is used to make you emotional”, will be tested with end users and will inform the further development of the EUNOMIA toolkit.

Disclaimer: This post was first published on Trilateral Research website

EUNOMIA’s PIA+ & user-engagement workshop, Vienna, February 2020

June 30th, 2020

EUNOMIA held a Privacy Impact Assessment Plus (PIA+) and user-engagement workshop in Vienna (12th February 2020) as part of our co-design activities, which means always putting the user at the centre of the toolkit’s development. In the workshop, participants therefore had the chance to use and experience the first EUNOMIA prototype. Through hands-on sessions, the aim was to explore users’ insights into the toolkit and to understand their needs and concerns, feeding into its further development.

The end-user panel included three everyday social media users and four traditional media journalists. External experts were also invited to deepen the discussions, including a senior researcher in applied ethics, a senior academic in surveillance, and an expert software developer and product manager.

EUNOMIA aims to develop a decentralised toolkit that assists social media users in practising information hygiene routines and protecting their network against misinformation.

The workshop ran for a full day and was designed following the principles of the co-design method, ensuring EUNOMIA’s user-centric approach. The first session included activities such as quizzes around misinformation and the challenges of recognising false news on social media. There were lively discussions around the indicators of trustworthiness: participants ranked which of them they consider most important and why. These discussions confirmed prior results stemming from desk-based research, interviews and surveys, and will feed into the development of further indicators in the toolkit.

In the second session, the first prototype of the EUNOMIA toolkit was introduced, and participants could already sign up and use it during the workshop. The participants discussed the toolkit and the different features it includes. They were asked how, and whether, they would make use of it and what extra features they would like to see. The participants welcomed the EUNOMIA tools, underlining the potential value of the content trustworthiness vote and providing valuable insights for the development of a more user-friendly interface. The direct engagement of consortium partners with the participants was very fruitful, as they could directly discuss the needs of social media users and how the toolkit can be improved.

EUNOMIA adopts a privacy-first approach, and for this reason the workshop dedicated a long session to identifying potential ethical and societal concerns. The workshop participants were first introduced to the ethical, privacy, social and legal impact assessment method (PIA+) that runs throughout the project’s lifecycle. Then, through vignettes (written scenarios) stemming from the analysis of user needs and requirements, participants discussed the potential risks of EUNOMIA’s implementation along with its societal benefits.

The workshop proved successful, with the interaction between participants, experts and consortium partners generating important recommendations for ensuring privacy by design and sustainable tools of value to social media users.

Pinelopi Troullinou, Research Analyst at Trilateral Research

Special Issue on Misinformation; Call for Papers

June 29th, 2020

EUNOMIA’s Ioannis Katakis, along with Karl Aberer (EPFL), Quoc Viet Hung Nguyen (Griffith University) and Hongzhi Yin (The University of Queensland), is editing a special issue on misinformation on the web that will be published in Information Systems, one of the top-tier journals in databases and data-driven applications. This special issue seeks high-quality, original papers that advance the concepts, methods and theories of misinformation detection, as well as address the mechanisms, strategies and techniques for misinformation interventions. Topics include:

  • Fake news, social bots, misinformation, and disinformation on social data
  • Misinformation, opinion dynamics and polarization in social data
  • Online misbehavior (scams, deception, and click-bait) and its relation to misinformation
  • Information/Misinformation diffusion
  • Credibility and reputation of news sources, social data, and crowdsourced data

and many more.


The timeline of the special issue is the following:

Submission: 1st August 2020

First Round Notification: 1st October 2020

First Round Revisions: 1st December 2020

Second Round Notification: 1st February 2021

Final Submission: 1st March 2021

Publication: second quarter 2021

Please find more information or submit your paper here:
https://www.journals.elsevier.com/information-systems/call-for-papers/special-issue-on-misinformation-on-the-web

Assessing the Ethical, Legal and Social Impacts of new technologies: the EUNOMIA project case study

June 12th, 2020


There is increasing hype around Artificial Intelligence (AI) and machine learning as they are progressively integrated into more domains of our personal, social and professional lives: dating applications “choose” the “right” match, and computational tools are employed to increase productivity.

However, at the same time, recent scandals such as Cambridge Analytica’s collection and processing of personal data of millions of Facebook users for political purposes and Amazon’s retraction of an AI recruitment tool showing bias against women have given rise to public debates over technology’s flaws.

Employing Privacy Impact Assessments (PIAs) to ensure a privacy-by-design approach is an essential process for preventing, and effectively responding to, these flaws and concerns.

A PIA not only supports compliance with the EU General Data Protection Regulation (GDPR) but also allows privacy-preserving measures to be built into the system, so that privacy is not limited to data protection but also covers further potential harms at a personal and societal level.

It is considered very important, if not mandatory, for research projects and services that deal with personal data. The introduction of the GDPR in May 2018 continues to be a strong incentive for companies, agencies and institutions to consider their obligations regarding the protection of personal data and privacy.

At Trilateral, we have carried out pioneering work in safeguarding privacy within the private and public sectors; one of our current focuses is the European Union funded EUNOMIA (user-oriented, secure, trustful & decentralised social media) project. This three-year project brings together ten partners who will develop a decentralised, open-source solution to assist social media users (traditional media journalists, social journalists and citizen users) in determining the trustworthiness of information.

As the collection and processing of personal data are necessary for the development and operation of the EUNOMIA solution, we are leading a task to undertake a PIA+. The PIA+ is being undertaken from the very early stages of the project to safeguard privacy and data protection and to minimise potential risks, considering societal and legal as well as ethical issues.

What is a PIA+?

A PIA+ is not another “tick box” exercise to prove compliance with relevant laws and regulations. It is a collaborative, continuous process that spans the lifecycle of a project, from the early design stage to the deployment of the product or service. The PIA+ process is not one-size-fits-all; it is adjusted to the specific needs of each project as it evolves.

It analyses the system architecture and intended information flows to identify potential privacy, social and ethical risks. Based on this analysis, fictional scenarios anticipating the identified risks are developed and used as stimuli for consultation with relevant stakeholders (e.g., technical experts, citizens, lawyers).

The PIA+ results in specific organisational and technical recommendations and measures, anticipating future consequences on an individual, organisational and societal level.

A preliminary PIA+ on EUNOMIA

The PIA+ that we are developing for EUNOMIA has run from the beginning of the project, consulting the technical partners on the development of the tools. The preliminary analysis explored the envisioned system and tools as described in the proposal, identifying emerging ethical, social and legal issues. The considerations that emerged from this analysis include:

Ethical Considerations

  • Autonomy (e.g., Do EUNOMIA users freely decide upon their participation – do they have the choice of withdrawing?)
  • Dignity (e.g., How could users’ trustworthiness scoring impact their reputation and thus their associations with society, their organisation, future employers, etc.?)
  • Privacy (e.g., How could users’ trustworthiness scoring impact on the disclosure of information that users do not wish to share and that is linked to their identity?)

Social Considerations

  • Discrimination (e.g., Can the system be misused, resulting in EUNOMIA’s users being discriminated against?)
  • Bias (e.g., Do the input data and/or data processing carry/create any form of bias?)

Legal Considerations

  • Compliance with the GDPR and relevant ISO standards (e.g., Are data collected, processed and stored lawfully? Can data subjects exercise their rights?)

Recommendations for EUNOMIA

The findings of the preliminary analysis resulted in detailed recommendations for the technical partners to minimise the identified risks. The recommendations suggest measures regarding the data collection, processing and storage including informed consent and pseudonymisation amongst others. We have also proposed design measures for the EUNOMIA solution to address the identified ethical, social and legal concerns.

“Thanks to Trilateral’s Privacy Impact Assessment (PIA+) workshop, technical project partners were able to take a measured view on how data collection, workflows, analysis and security would be managed within the project. The workshop improved our thinking on a very important aspect of the project, enabling the technical teams to think more carefully about the wider system of interest for user and technical requirements.”

Research Partner, EUNOMIA project, May 2019

Next steps in the PIA+ process

The preliminary PIA+ analysis of the EUNOMIA solution and the resulting findings will be discussed with all project partners during the next consortium meeting. To engage all partners in the PIA+ process and to raise awareness of the ethical, social and legal concerns, a hands-on session will be designed. Additionally, as the PIA+ is a continuous process, one-to-one interviews and/or focus groups with different partners and stakeholders will be scheduled as necessary, resulting in further reports and consultation at key stages of the project.

Disclaimer: This post was first published on Trilateral Research website