Interested in finding out how good you are at spotting misinformation in areas you may or may not know much about? Take part in EUNOMIA’s first pilot!

October 6th, 2020 by

Are you confident that you can always determine the trustworthiness of what you read on social media? What if you don’t know much about a topic? Can you still do it? Sign up to the decentralised EUNOMIA social media platform (powered by Mastodon) and try to identify the 10 trustworthy and 10 untrustworthy posts that our mischievous researchers will inject (based on their scientific expertise) into a discussion of decentralised technologies between 5th and 14th October. For this specific period only, your selections will be recorded centrally by us so that we can determine who has got the most of these 20 correct by the end.

How to join our competition in four simple steps:

  1. Make an account here
  2. Log in
  3. Click on “Local” and wait a moment for the “I trust this” and “I don’t trust this” buttons to appear (this might take a few moments because it is still an early version)
  4. Decide whether you “trust” or “don’t trust” each post by clicking the respective icon. 

You win points for correctly marking posts as trustworthy or untrustworthy, and lose points if you get them wrong: +1 point for every correct selection and -1 point for every incorrect one. For the top scorers, who will be announced the week after, the University of Nicosia has prepared a range of very attractive prizes.

In case of a tie, the winner will be the participant with the lowest average response time across their correct answers. The response time is the difference between the time one of the 20 posts was made by the researchers and the time the participant correctly selected “I trust this” or “I don’t trust this”.
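For readers who like to see the rules made concrete, below is a minimal, purely illustrative Python sketch of the scoring and tie-break logic described above; the data structure and function names are hypothetical and are not part of the EUNOMIA platform.

```python
from dataclasses import dataclass

@dataclass
class Selection:
    participant: str
    post_id: str
    correct: bool             # was the post marked correctly?
    response_seconds: float   # time from the researchers' post to the participant's selection

def rank_participants(selections: list[Selection]) -> list[tuple[str, int, float]]:
    """Rank participants by the competition rules: +1 per correct selection,
    -1 per incorrect one; ties are broken by the lowest average response
    time over the correct answers."""
    scores: dict[str, int] = {}
    correct_times: dict[str, list[float]] = {}
    for s in selections:
        scores[s.participant] = scores.get(s.participant, 0) + (1 if s.correct else -1)
        if s.correct:
            correct_times.setdefault(s.participant, []).append(s.response_seconds)
    ranked = []
    for participant, score in scores.items():
        times = correct_times.get(participant)
        # Participants with no correct answers get an infinite average time,
        # so they cannot win a tie.
        avg_time = sum(times) / len(times) if times else float("inf")
        ranked.append((participant, score, avg_time))
    # Higher score first; for equal scores, the lower average response time wins.
    ranked.sort(key=lambda entry: (-entry[1], entry[2]))
    return ranked
```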

Some advice you may want to use, not only for this competition, but more widely in social media:

  • Be wary of popular posts. Misinformation travels a lot faster than reliable information
  • Be cautious of information forwarded to you through your network
  • Refrain from sharing based only on the headline
  • Be wary of resharing information solely for its high novelty. Misinformation tends to be more novel.
  • Be wary of language that makes you feel emotional. It is designed to go viral, not to inform.
  • Be mindful of your emotions when reading a post. Anger makes you susceptible to partisanship.

Note that this is an early experimental version of EUNOMIA. It will be slow, and the “I trust this” buttons and other EUNOMIA functionality may not appear immediately. Please bear with it for a few moments 🙂

You can find further info on UNIC’s programs here 

The Decentralized site can be found here  

Practicing information hygiene routine to flatten the curve of the ‘infodemic’ – EUNOMIA project’s recommendations

August 31st, 2020 by

The Covid-19 outbreak has raised afresh the debate about the dangers of misinformation on social media. During the pandemic, myths about coronavirus cures and treatments, its origins and the reasons behind it spread widely on social network platforms, in some cases leading to dangerous and even fatal actions such as bleach consumption. To this end, António Guterres, the Secretary-General of the United Nations, urged action to address the ‘infodemic’ of misinformation.

Gaps in information hygiene guidelines

Framing misinformation within such a context places social media users at the centre of this multi-layered social phenomenon and demands a new, appropriate approach to address it. To this end, social media users need to adopt what we call ‘information hygiene routines’ to protect themselves and their network against the ‘infodemic’ of rapidly spreading misinformation. We define an information hygiene routine as the practice of evaluating online information so as to minimise the risk of consuming misinformation and spreading it to one’s network. This practice differs significantly from fact-checking and fake news detection, which focus on actively detecting and identifying ‘pathogens’ rather than on a daily routine aimed at avoiding “infection”.

Information hygiene guidelines such as “check the source of information”, “check whether the account is a bot”, and “flag untrustworthy information for the benefit of others” are regularly recommended by fact checkers, journalists and media literacy experts to help limit the spread of misinformation. Such recommendations are undoubtedly important, but they are often too time-consuming or too complicated for users to adopt as part of their everyday routine.

Illusory Truth Effect

The European H2020-funded project EUNOMIA addresses this gap by developing tools to assist social media users in practising information hygiene routines so as to flatten the curve of the ‘infodemic’. The EUNOMIA toolkit cultivates media literacy skills, empowering social media users to evaluate the trustworthiness of online information themselves. While trustworthiness is related to truthfulness, the two concepts differ significantly. People do not always seek to verify whether online information is true, and in some cases the verification process can be very complex and difficult. Trustworthiness, in this sense, can be considered more important when consuming and spreading information. In fact, a person is inclined to perceive information as trustworthy and credible simply because they are very familiar with it. This is what social psychology calls the “illusory truth effect”. Trustworthiness is therefore a subjective quality and, as such, is in the eye of the beholder. To this end, tools supporting the individual evaluation of trustworthiness are key to slowing down the spread of misinformation and minimising its risks.

EUNOMIA project’s approach

EUNOMIA is adopting a positive-first approach to the information trustworthiness challenge in social media, empowering users to critically assess the information they consume and to protect their network against the spread of misinformation. EUNOMIA provides a toolkit in the form of a social media companion that can currently be deployed on decentralised, open social media platforms such as Mastodon and Diaspora*. The companion offers multiple trustworthiness indicators, from which users can select and display their preferred ones to support their assessment. These may include indicators of bot activity, such as the ratio of followers to following, and other indicators co-developed with social media users themselves or identified in the scientific literature, such as the objectivity of a post. EUNOMIA also visualises the modifications of online information between different users’ posts in an information cascade. This means that EUNOMIA users can see how a piece of information may have changed when shared or re-shared by different users and/or at different points in time, so the user can see all the different versions of the same piece of information and the ‘journey’ of modifications it has undergone.
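To make the idea of a bot-activity indicator concrete, here is a minimal, purely illustrative Python sketch of a followers-to-following ratio check; the function name, parameters and threshold are hypothetical and do not reflect EUNOMIA’s actual implementation.

```python
def follower_ratio_indicator(followers_count: int, following_count: int,
                             threshold: float = 0.1) -> dict:
    """Toy bot-activity indicator based on the followers-to-following ratio.

    Accounts that follow many others but have very few followers often
    correlate with automated behaviour. The threshold is purely illustrative.
    """
    ratio = followers_count / following_count if following_count else float("inf")
    return {"followers_to_following_ratio": ratio, "possible_bot": ratio < threshold}

# Example: an account followed by 40 users but following 5,000.
print(follower_ratio_indicator(followers_count=40, following_count=5000))
# {'followers_to_following_ratio': 0.008, 'possible_bot': True}
```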

EUNOMIA encourages the active and collective participation of social media users to stop the spread of misinformation. Adopting user contribution guidelines, such as the recommendation to ‘flag untrustworthy information for the benefit of others’, EUNOMIA enables users to vote on content trustworthiness and act as a trust reference in their network. The number of votes constitutes one of several trustworthiness indicators that other users may use to evaluate the trustworthiness of information.
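As a rough illustration of what a vote-based indicator could look like, the sketch below simply counts trust and distrust votes on a post; the data structure, the “latest vote wins” rule and the function name are assumptions made for this example, not EUNOMIA’s actual mechanism.

```python
from collections import Counter

def vote_indicator(votes: list[tuple[str, bool]]) -> dict:
    """Aggregate 'I trust this' / 'I don't trust this' votes on a single post.

    Each vote is (user_id, trusts). Only a user's latest vote is counted --
    an assumption made here so a user who changes their mind is not counted twice.
    """
    latest = {user: trusts for user, trusts in votes}  # later votes overwrite earlier ones
    counts = Counter(latest.values())
    return {"trust": counts[True], "distrust": counts[False], "total_voters": len(latest)}

print(vote_indicator([("alice", True), ("bob", False), ("alice", False)]))
# {'trust': 0, 'distrust': 2, 'total_voters': 2}
```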

EUNOMIA project’s recommendations

EUNOMIA has developed the first systematic set of information hygiene recommendations, which fall into four categories:

a) source of information

b) content

c) language

d) action to mitigate risk.

This set emerged from thorough desk-based research, leading to the identification and analysis of a large number of guidelines available online. These guidelines were then evaluated based on their practicality and the evidence for their effectiveness. The identification and evaluation of the guidelines was conducted by an interdisciplinary team of EUNOMIA researchers, who assessed their practicality in terms of the expertise and time required for users to adopt them. Similarly, the effectiveness of the guidelines was assessed against scientific evidence. The resulting set of recommendations – such as “Check whether the author is anonymous” and “Check whether the language is used to make you emotional” – will be tested with end users and will inform the further development of the EUNOMIA toolkit.

Disclaimer: This post was first published on the Trilateral Research website

EUNOMIA’s PIA+ & user-engagement workshop, Vienna-February 2020

June 30th, 2020 by

EUNOMIA held a Privacy Impact Assessment + (PIA+) and user-engagement workshop in Vienna (12th February 2020) as part of our co-design activities, which put the user at the centre of the toolkit’s development. In the workshop, participants had the chance to use and experience the first EUNOMIA prototype. Through hands-on sessions, the aim was to explore users’ insights on the toolkit and to understand their needs and concerns, feeding into its further development.

The end-users’ panel included three average social media users and four traditional media journalists. External experts were also invited to deepen the discussions, including a senior researcher in applied ethics, a senior academic in surveillance, and an expert software developer and product manager.

EUNOMIA aims to develop a decentralised toolkit that assists social media users in practising an information hygiene routine and protecting their network against misinformation.

The workshop ran for a full day and was designed following the principles of the co-design method, ensuring EUNOMIA’s user-centric approach. The first session included activities such as quizzes about misinformation and the challenges of recognising false news on social media. There were lively discussions around the indicators of trustworthiness: participants ranked which of them they consider most important and why. These discussions confirmed prior results stemming from desk-based research, interviews and surveys, and will feed into the development of further indicators in the toolkit.

In the second session, the first prototype of the EUNOMIA toolkit was introduced, and participants could sign up and use it during the workshop. The participants discussed the EUNOMIA toolkit and the different features it includes. They were asked how, and whether, they would make use of it and what extra features they would like to see. The participants welcomed the EUNOMIA tools, underlining the potential of the content trustworthiness vote and providing valuable insights for the development of a more user-friendly interface. The direct engagement of consortium partners with the participants was very fruitful, as they could directly discuss the needs of social media users and how the toolkit can be improved.

EUNOMIA adopts a privacy-first approach, and for this reason the workshop dedicated a long session to identifying potential ethical and societal concerns. Participants were first introduced to the ethical, privacy, social and legal impact assessment method (PIA+) that runs throughout the project’s lifecycle. Then, through vignettes (written scenarios) that stemmed from the analysis of user needs and requirements, participants discussed the potential risks of EUNOMIA’s implementation along with its societal benefits.

The workshop proved successful, with the interaction between participants, experts and consortium partners generating important recommendations to ensure privacy-by-design and sustainable tools valuable to social media users.

Pinelopi Troullinou, Research Analyst at Trilateral Research

Special Issue on Misinformation; Call for Papers

June 29th, 2020 by

EUNOMIA’s Ioannis Katakis, along with Karl Aberer (EPFL), Quoc Viet Hung Nguyen (Griffith University) and Hongzhi Yin (The University of Queensland), is editing a special issue on misinformation on the web that will be published in the journal Information Systems, one of the top-tier journals in databases and data-driven applications. This special issue seeks high-quality and original papers that advance the concepts, methods and theories of misinformation detection, as well as address the mechanisms, strategies and techniques for misinformation interventions. Topics include:

  • Fake news, social bots, misinformation, and disinformation on social data
  • Misinformation, opinion dynamics and polarization in social data
  • Online misbehavior (scams, deception, and click-bait) and its relation to misinformation
  • Information/Misinformation diffusion
  • Credibility and reputation of news sources, social data, and crowdsourced data

and many more.


The timeline of the special issue is the following:

Submission: 1st August 2020

First Round Notification: 1st October 2020

First Round Revisions: 1st December 2020

Second Round Notification: 1st February 2021

Final Submission: 1st March 2021

Publication: second quarter, 2021

Please find more information or submit your paper here:
https://www.journals.elsevier.com/information-systems/call-for-papers/special-issue-on-misinformation-on-the-web
GL

Assessing the Ethical, Legal and Social Impacts of new technologies: the EUNOMIA project case study

June 12th, 2020 by


There is increasing hype about Artificial Intelligence (AI) and machine learning being progressively integrated into more domains of our personal, social and professional lives: dating applications “choose” the “right” match, and computational tools are employed to increase productivity.

However, at the same time, recent scandals such as Cambridge Analytica’s collection and processing of personal data of millions of Facebook users for political purposes and Amazon’s retraction of an AI recruitment tool showing bias against women have given rise to public debates over technology’s flaws.

The employment of Privacy Impact Assessments (PIA) to ensure a privacy-by-design approach is an essential process for preventing, and effectively responding to, these flaws and concerns.

A PIA not only helps guarantee compliance with the EU General Data Protection Regulation (GDPR) but also allows privacy-preserving measures to be built into the system, so that privacy is not limited to data protection but also covers further potential harms at a personal and societal level.

It is considered very important, if not mandatory, for research projects and services that deal with personal data. The introduction of the GDPR in May 2018 continues to be a strong incentive for companies, agencies and institutions to consider their obligations regarding the protection of personal data and privacy.

At Trilateral, we have carried out pioneering work in safeguarding privacy within the private and public sectors; one of our current focuses is the European Union-funded EUNOMIA (user-oriented, secure, trustful & decentralised social media) project. This three-year project brings together ten partners who will develop a decentralised, open-source solution to assist social media users (traditional media journalists, social journalists and citizen users) in determining the trustworthiness of information.

As the collection and processing of personal data are necessary for the development and operation of the EUNOMIA solution, we are leading a task to undertake a PIA+. The PIA+ is being undertaken from the very early stages of the project to safeguard privacy and data protection and to minimise potential risks, considering societal and legal issues as well as ethical ones.

What is a PIA+?

A PIA+ is not another “tick box” exercise to prove compliance with relevant laws and regulations. It is a collaborative, continuous process that spans the lifecycle of a project, from the early design stage to the deployment of the product or service. The PIA+ process is not one-size-fits-all; it is adjusted to the specific needs of each project as it evolves.

It analyses the system architecture and intended information flows to identify potential privacy, social and ethical risks. Based on this analysis, fictional scenarios anticipating the identified risks are developed and used as stimuli for consultation with relevant stakeholders (e.g., technical experts, citizens, lawyers).

The PIA+ results in specific organisational and technical recommendations and measures, anticipating future consequences on an individual, organisational and societal level.

A preliminary PIA+ on EUNOMIA

The PIA+ that we are developing for EUNOMIA has run from the beginning of the project, consulting the technical partners on the development of the tools. The preliminary analysis examined the envisioned system and tools as described in the proposal, exploring emerging ethical, social and legal issues. The considerations that emerged from this analysis include:

Ethical Considerations

  • Autonomy (e.g., Do EUNOMIA users freely decide upon their participation – do they have the choice of withdrawing?)
  • Dignity (e.g., How could users’ trustworthiness scoring impact the users’ reputation and thus their associations – with society, their organisation, future employers, etc.?)
  • Privacy (e.g., How could users’ trustworthiness scoring impact on the disclosure of information that users do not wish to share and that is linked to their identity?)

Social Considerations

  • Discrimination (e.g., Can the system be misused, resulting in EUNOMIA’s users being discriminated against?)
  • Bias (e.g., Do the input data and/or data processing carry/create any form of bias?)

Legal Considerations

  • Compliance with the GDPR and relevant ISO standards (e.g., Are data collected, processed and stored lawfully? Can data subjects exercise their rights?)

Recommendations for EUNOMIA

The findings of the preliminary analysis resulted in detailed recommendations for the technical partners to minimise the identified risks. The recommendations suggest measures regarding the data collection, processing and storage including informed consent and pseudonymisation amongst others. We have also proposed design measures for the EUNOMIA solution to address the identified ethical, social and legal concerns.

“Thanks to Trilateral’s Privacy Impact Assessment (PIA+) workshop, technical project partners were able to take a measured view on how data collection, workflows, analysis and security would be managed within the project. The workshop improved our thinking on a very important aspect of the project, enabling the technical teams to think more carefully about the wider system of interest for user and technical requirements.”

Research Partner, EUNOMIA project, May 2019

Next steps in the PIA+ process

The preliminary PIA+ analysis of the EUNOMIA solution and the resulting findings will be discussed with all the project partners during the next consortium meeting. To engage all partners in the PIA+ process and to raise awareness of the ethical, social and legal concerns, a hands-on session will be designed. Additionally, as the PIA+ is a continuous process, one-to-one interviews and/or focus groups with different partners and stakeholders will be scheduled as necessary, resulting in further reports and consultation at key stages of the project.

Disclaimer: This post was first published on the Trilateral Research website