EUNOMIA’s PIA+ & user-engagement workshop, Vienna, February 2020

June 30th, 2020

EUNOMIA held a Privacy Impact Assessment+ (PIA+) and user-engagement workshop in Vienna (12th February 2020) as part of our co-design activities, which place the user at the centre of the toolkit’s development. In the workshop, participants had the chance to use and experience the first EUNOMIA prototype. Through hands-on sessions, the aim was to explore users’ insights on the toolkit and to understand the needs and concerns that will feed into its further development.

The end-users’ panel included three everyday social media users and four traditional media journalists. External experts were also invited to deepen the discussions, including a senior researcher in applied ethics, a senior academic in surveillance studies, and an expert software developer and product manager.

EUNOMIA aims to develop a decentralised toolkit that assists social media users in practising an information hygiene routine and protecting their networks against misinformation.

The workshop ran for a full day and was designed following the principles of the co-design method, ensuring EUNOMIA’s user-centric approach. The first session included activities such as quizzes about misinformation and the challenges of recognising false news on social media. There were lively discussions around indicators of trustworthiness: participants ranked which of them they consider the most important and why. These discussions confirmed prior results stemming from desk-based research, interviews and surveys, and will feed into the development of further indicators in the toolkit.

In the second session, the first prototype of the EUNOMIA toolkit was introduced, and participants could already sign up and use it during the workshop. The participants discussed the EUNOMIA toolkit and the different features it includes. They were asked how, and whether, they would make use of it and what extra features they would like to see. The participants welcomed the EUNOMIA tools, underlining the potential value of the content trustworthiness vote and providing valuable insights for the development of a more user-friendly interface. The direct engagement of consortium partners with the participants was very fruitful, as they could directly discuss the needs of social media users and how the toolkit can be improved.

EUNOMIA adopts a privacy-first approach, and for this reason the workshop dedicated a long session to identifying potential ethical and societal concerns. The workshop participants were first introduced to the ethical, privacy, social and legal impact assessment method (PIA+) that runs throughout the project’s lifecycle. Then, through vignettes (written scenarios) that stemmed from the analysis of user needs and requirements, participants discussed the potential risks of EUNOMIA’s implementation along with its societal benefits.

The workshop proved successful, with the interaction between participants, experts and consortium partners generating important recommendations for ensuring privacy-by-design and sustainable tools that are valuable for social media users.

Pinelopi Troullinou, Research Analyst at Trilateral Research

Understanding the behaviour of social media users

June 29th, 2020

Fill in the EUNOMIA survey on social media user behaviour and help us further develop the EUNOMIA toolkit, which assists users in practising information hygiene.

Social media users adopt different norms of behaviour, spoken and unspoken rules, and patterns and forms of communication on different social media platforms. How misinformation spreads via different channels is, at least partially, connected to such platform-internal norms and logics: different types of users ascribe trustworthiness in different ways and differ in how they seek and share information.

EUNOMIA is developing a set of tools that aim to support users in assessing the trustworthiness of information, offering them a variety of trustworthiness indicators. Understanding (norms of) user behaviour is crucial: EUNOMIA follows a co-design approach that puts the user at the centre. Therefore, we are inviting every social media user, as well as traditional media journalists and social journalists, i.e. professional and citizen journalists who carry out journalism on online media, to fill out our survey and support this task.

It will only take 15 minutes of your time!

Link to the survey: https://www.surveyly.com/p/index.php/687178?lang=en

Special Issue on Misinformation: Call for Papers

June 29th, 2020

EUNOMIA’s Ioannis Katakis, along with Karl Aberer (EPFL), Quoc Viet Hung Nguyen (Griffith University) and Hongzhi Yin (The University of Queensland), is editing a special issue on misinformation on the web that will be published in Information Systems, one of the top-tier journals in databases and data-driven applications. This special issue seeks high-quality, original papers that advance the concepts, methods and theories of misinformation detection, as well as address the mechanisms, strategies and techniques for misinformation interventions. Topics include:

  • Fake news, social bots, misinformation, and disinformation on social data
  • Misinformation, opinion dynamics and polarization in social data
  • Online misbehavior (scams, deception, and click-bait) and its relation to misinformation
  • Information/Misinformation diffusion
  • Credibility and reputation of news sources, social data, and crowdsourced data

and many more.


The timeline of the special issue is the following:

Submission: 1st August 2020

First Round Notification: 1st October 2020

First Round Revisions: 1st December 2020

Second Round Notification: 1st February 2021

Final Submission: 1st March 2021

Publication: second quarter, 2021

Please find more information or submit your paper here:
https://www.journals.elsevier.com/information-systems/call-for-papers/special-issue-on-misinformation-on-the-web

Challenging Misinformation: Exploring Limits and Approaches

June 23rd, 2020

We invite researchers and practitioners interested in aligning social, computational and technological approaches to mitigating the challenge of misinformation in social media to join us at the SocInfo 2020 Workshop!

The last weeks and months of the COVID-19 pandemic have shown quite drastically how crucial access to accurate information is. So many rumours, hoaxes, fake news stories and other forms of dis- and misinformation have been spread that the term ‘infodemic’ was coined. To counteract this wicked problem, tools and online services are continuously being developed to support different stakeholders. However, limiting the harm caused by misinformation requires merging multiple perspectives, including an understanding of human behaviour in judging and promoting false information as well as technical solutions to detect and stop its propagation. To this point, all solutions are notably limited: misinformation is a multi-layered problem, and there is no single, comprehensive solution capable of stopping it. Existing approaches are limited for different reasons: the way end-users are (or are not) engaged, limited data sources, the subjectivity associated with judgments, and so on. Furthermore, purely technical solutions that disregard the social structures behind the spread of misinformation, or the different ways societal groups are affected, are not only limited; they can also be harmful by obfuscating the problem. Therefore, a comprehensive and interdisciplinary approach is needed.

So, how can we battle misinformation and flatten the curve of the infodemic? Members of the EU-funded projects Co-inform and EUNOMIA have joined forces to organise a hands-on workshop at the SocInfo conference on 6 October 2020. The aim of the workshop is to unpack the state of the art in social science and technical solutions, challenging participants to critically reflect on existing limitations and then co-create a future that integrates both perspectives. The workshop intends to bring together data scientists, media specialists, computer scientists, social scientists, designers and journalists. Participants are invited to discuss challenges and obstacles related to misinformation online from human and technical perspectives; to challenge existing approaches and identify their limitations in terms of technology and targeted users; and to co-create future scenarios building on existing solutions.

Find out more

About the workshop, topics of interest, the submission process, and the organisers: http://events.kmi.open.ac.uk/misinformation/

About SocInfo 2020: https://kdd.isti.cnr.it/socinfo2020/

About Co-inform: https://coinform.eu/

Eat, Sleep, Trust, Repeat or The Illusory Truth Effect

June 23rd, 2020

Have you ever noticed how a vaguely familiar statement, something you remember hearing, makes you think “there has to be something to it” or “this has to be true, I have heard it before”? This is what social psychology calls the “illusory truth effect”, which means that a person attributes higher credibility and trustworthiness to information to which they have been exposed before. In a nutshell, repetition makes information more credible and trustworthy. Frequency, it seems, serves as a “criterion of certitude”.

What is really interesting is that this effect has been tested not only for plausible information but even for statements initially identified as false. And the effect is immediate: reading a false statement just once is enough to increase later perceptions of its accuracy. An experiment carried out by researchers in the US showed the truth effect even for highly implausible, entirely fabricated news stories. The researchers were further able to show that the effect holds even if participants forget having seen the information previously. Even when participants disagreed with the information, repetition made it more plausible.

But why does this effect exist? Research gives us several answers. First, familiarity with information leads to faster processing of that information. Second, recognition of information, and the coherence of statements with prior information in a person’s memory, affect our judgement. Third, there is a learned association between faster processing of information and its truthfulness.

When it comes to misinformation and fake news, the importance of this effect cannot be stressed enough. Even warnings by fact-checkers or experts cannot counter it; the only way of protecting yourself and others from misinformation is to avoid sharing any piece of dubious information. So, one of the most important rules we identified as part of our guidelines for information hygiene is simple: if in doubt, don’t share.

References

Bacon, F. (1979). Credibility of repeated statements: Memory for trivia. Journal of Experimental Psychology: Human Learning and Memory, 5(3), 241–252.

Hasher, L., Goldstein, D., & Toppino, T. (1977). Frequency and the conference of referential validity. Journal of Verbal Learning and Verbal Behavior, 16(1), 107–112.

Pennycook, G., & Rand, D. (2018). Who falls for fake news? The roles of bullshit receptivity, overclaiming, familiarity, and analytic thinking. Journal of Personality.

Pennycook, G., Cannon, T. D., & Rand, D. G. (2018). Prior exposure increases perceived accuracy of fake news. Journal of Experimental Psychology: General, 147(12), 1865–1880. https://doi.org/10.1037/xge0000465

Unkelbach, C. (2007). Reversing the truth effect: Learning the interpretation of processing fluency in judgments of truth. Journal of Experimental Psychology: Learning, Memory, and Cognition, 33(1), 219–230.

Unkelbach, C., Koch, A., Silva, R. R., & Garcia-Marques, T. (2019). Truth by repetition: Explanations and implications. Current Directions in Psychological Science, 28(3), 247–253.

#Infohygiene: Be wary of language that makes you feel emotional. It is designed to go viral, not to inform

June 20th, 2020

For misinformation to have an impact, it needs to go viral. So, it is not surprising that misinformation shares a lot with clickbait and often aims at nothing more than just that: baiting you to click.

Several studies have shown that emotionally arousing stories tend to attract audience selection and exposure. There is no doubt that emotionally evocative content is more ‘viral’ than neutral content: the more anger or anxiety it evokes, the faster and more broadly it spreads. There is also a lot of evidence that emotions affect memories. Emotional memories are vivid and lasting but not necessarily accurate, and under some conditions emotion even increases people’s susceptibility to false memories.

More directly on misinformation, a study by Brian Weeks of the University of Michigan demonstrated that anger encourages partisan, motivated evaluation of uncorrected misinformation, resulting in beliefs consistent with the supported political party, while anxiety at times promotes initial beliefs based less on partisanship and more on the information environment. Another study, by Northeastern University researchers, analysed 5,303 posts with 2,614,374 user comments from popular social media platforms and found more misinformation-awareness signals and more extensive emoji and swear-word usage in comments on false posts. Misinformation often uses inflammatory and sensational language to alter people’s emotions.

So, what can one do with this knowledge? A good approach is to take a moment to think whenever you encounter highly emotive language. An even better way may be to use EUNOMIA’s information cascade functionality, which visualises the sentiment expressed by all the posts that contain the same information. Highly negative language is not by itself a sign of mal-intent, but it can be one indicator when combined with others.
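To make the idea concrete, here is a minimal sketch of aggregating sentiment across the posts in a cascade. The example posts and the tiny word-list scorer are hypothetical stand-ins, not EUNOMIA’s actual sentiment model or data structures:

```python
# Minimal sketch: aggregating sentiment across posts that share the same
# information (an "information cascade"). The word lists, posts and scoring
# rule are illustrative stand-ins, not EUNOMIA's actual sentiment model.
from collections import Counter

NEGATIVE = {"outrage", "disaster", "shocking", "scandal", "fury"}
POSITIVE = {"hopeful", "reassuring", "calm", "encouraging"}

def sentiment(text: str) -> str:
    """Crude lexicon-based polarity: 'negative', 'positive' or 'neutral'."""
    words = set(text.lower().split())
    neg, pos = len(words & NEGATIVE), len(words & POSITIVE)
    if neg > pos:
        return "negative"
    if pos > neg:
        return "positive"
    return "neutral"

cascade = [
    "shocking scandal erupts over the leaked report",
    "experts offer a calm reassuring analysis of the same report",
    "fury and outrage spread as the claim is reshared",
]

# Distribution of sentiment across the cascade, as a visualisation might show
print(Counter(sentiment(post) for post in cascade))
# -> Counter({'negative': 2, 'positive': 1})
```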

References

Weeks, B. E. (2015). Emotions, partisanship, and misperceptions: How anger and anxiety moderate the effect of partisan bias on susceptibility to political misinformation. Journal of Communication, 65(4), 699–719.

Jiang, S., & Wilson, C. (2018). Linguistic signals under misinformation and fact-checking: Evidence from user comments on social media. Proceedings of the ACM on Human-Computer Interaction, 2(CSCW), 1–23.

How to trust a stranger: the EUNOMIA team’s latest publication

June 20th, 2020

The EUNOMIA team’s latest publication, in the Cutter Business Technology Journal, tries to answer how technology can be used to support our search for trustworthy information. Can we trust a stranger? With so many posts, who can we trust?

As the online world of information grows, so does the demand for trustworthiness of user-generated, online-shared content. While a significant portion of social media posts has proven to be a great source of knowledge and news, other posts have purposefully spread false information.

The article presents contemporary decentralised environments that aim to exploit the potential of end users as contributors to the credibility-checking process. Which technologies would successfully support the implementation of a human-centric solution that assists social media users in gauging the trustworthiness of information, complementing the usual practice of “information hygiene” guidelines?

In contrast to centralised platforms, where the data of vast numbers of users is held by, and in some cases subject to the censorship of, a very small number of social network providers, decentralised online social networks are based on distributed information management schemes, empowered by trusted servers or peer-to-peer (P2P) systems.

A new approach, however, involves the user in the process, exploiting the potential of crowdsourcing. In this direction, the research project EUNOMIA addresses the challenges of misinformation in social media by actively encouraging citizen participation in content verification through voting on content trustworthiness. The focus is on users taking ownership of the problem of disinformation, in contrast to existing approaches such as third-party fact-checkers or automated software.
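As a rough sketch of the voting idea, assuming a simple one-vote-per-user rule and a plain fraction-of-positive-votes aggregate (both hypothetical illustrations, not EUNOMIA’s actual protocol):

```python
# Sketch of crowdsourced trustworthiness voting: users cast votes on a post
# and an aggregate score is derived. The data structures and the aggregation
# rule are hypothetical illustrations, not EUNOMIA's actual protocol.
from dataclasses import dataclass, field

@dataclass
class Post:
    """A social media post collecting trustworthiness votes (illustrative)."""
    post_id: str
    votes: dict[str, bool] = field(default_factory=dict)  # user_id -> vote

    def cast_vote(self, user_id: str, trustworthy: bool) -> None:
        # One vote per user: voting again simply updates the earlier vote.
        self.votes[user_id] = trustworthy

    def trust_score(self) -> float | None:
        # Fraction of voters who judged the post trustworthy; None if no votes.
        if not self.votes:
            return None
        return sum(self.votes.values()) / len(self.votes)

post = Post("p1")
post.cast_vote("alice", True)
post.cast_vote("bob", False)
post.cast_vote("carol", True)
print(post.trust_score())  # -> 0.666...
```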

The full article can be found here: https://www.cutter.com/article/how-trust-stranger

#Infohygiene: Be wary of popular posts

June 18th, 2020

Misinformation travels faster than reliable information. This has been shown time and time again. For example, in their work published in Science two years ago, M.I.T. researchers investigated around 126,000 stories tweeted by around 3 million people and classified news as true or false using information from six independent fact-checking organisations. They showed that “falsehood diffused significantly faster than the truth in all categories of information”. Specifically, “it took the truth about six times as long as falsehood to reach 1500 people” and “20 times as long as falsehood to reach a cascade depth of 10”.

In contrast, posts from individuals or organisations with expertise in a topic are not necessarily popular on social media. For example, when researchers analysed the content and sources of the most popular tweets about a case of diphtheria in Spain, none of the popular tweets had been posted by healthcare organisations. They were mainly opinions from non-experts.

This by no means indicates that a post is misinformation just because it is viral, but it certainly is a reason to think twice before reposting.

References

Vosoughi, S., Roy, D., & Aral, S. (2018). The spread of true and false news online. Science, 359(6380), 1146–1151.

Porat, T., Garaizar, P., Ferrero, M., Jones, H., Ashworth, M., & Vadillo, M. A. (2019). Content and source analysis of popular tweets following a recent case of diphtheria in Spain. European Journal of Public Health, 29(1), 117–122.

#Infohygiene: “Be cautious of information forwarded to you through your network”

June 13th, 2020

“Was the information forwarded to you?” is a common question to ask when protecting oneself against misinformation. The rationale is that one needs to question one’s trust network; more specifically, to refrain from letting one’s guard down just because a piece of news came from a friend. While friends who forward news may generally be trusted and have no ill intent, this does not mean that they have not themselves been deceived by information that is mal-intentioned or biased.

Kang and Sundar have explained that when reading online news, the closest source is often one of our friends. Because we tend to trust our friends, our cognitive filters weaken, making a social media feed fertile ground for fake news to sneak into our consciousness. Their experiment with 146 participants showed that people are less sceptical of information they encounter on platforms they have personalised through friend requests and “liked” pages, and do not question the credibility of a news source when they think of their friends as the source.

To this, add the study by Del Vicario et al. comparing the spreading dynamics of unverified rumours and scientific news on Facebook: they observed that, most of the time, unverified rumours were taken up by friends with a similar profile, i.e., belonging to the same “echo chamber”.

References

Kang, H., & Sundar, S. S. (2016). When Self Is the Source: Effects of Media Customization on Message Processing. Media Psychology, 19(4), 561–588.

Del Vicario, M., Bessi, A., Zollo, F., Petroni, F., Scala, A., Caldarelli, G., Stanley, H. E., & Quattrociocchi, W. (2016). The spreading of misinformation online. Proceedings of the National Academy of Sciences of the United States of America (PNAS), 113(3), 554–559.

A data science approach to social science problems: examining political bias in false information on social media

June 12th, 2020

The 2016 United States (U.S.) presidential election highlighted the powerful influence that social media can have on politics. Fake news stories shared on social media are argued to have swayed the outcome of that election, and a recent article in the Guardian asked: “Will fake news wreck the coming [UK] general election?”

With the Conservatives recently spending approximately £100,000 and the Brexit Party spending £107,000 on Facebook advertising in the UK, it is becoming increasingly important to understand how false information on social media is politically biased.

As part of their role in the EU funded EUNOMIA (user-oriented, secure, trustful & decentralised social media) project, Trilateral’s research team sought to address this question.

EUNOMIA is a three-year (2018-2021) project that brings together 10 partners who will develop a decentralised, open-source solution to assist social media users (traditional media journalists, social journalists and citizen users) in determining the trustworthiness of information.

An interdisciplinary approach

Trilateral leads the work in EUNOMIA to understand the social and political considerations in the verification of social media information. The research team’s interdisciplinary approach drew on the expertise of social scientists from the Applied Research & Innovation team and data scientists from the technical team.

The social science research comprised desk-based research and 19 semi-structured interviews with citizen social media users, traditional media journalists, social journalists and other relevant stakeholders.

Participant observation was undertaken through a data science approach designed to examine political bias in relation to engagement with false information. To do this, the study focused on the political leanings of false-information accounts in relation to UK political parties. The data science methodology included:

  • Web scraping to identify the Twitter handles of the 579 UK MPs and 49 false-information organisations, which included organisations labelled as “conspiracy-pseudoscience” or “questionable” sources on the Mediabiasfactcheck.com database and climate change denial organisations listed on the DeSmog Climate Disinformation Research Database
  • Data mining on Twitter, processing and utilising big data exclusively using the publicly available Twitter APIs
  • Social network analysis to track the follow relationships between the examined accounts and extract insights to measure political bias (see the sketch after this list)
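As a minimal sketch of the social network analysis step, the following builds a directed “follows” graph between accounts. The handles and edges below are made up for illustration; in the study, the handles came from web scraping and the follow data from the public Twitter APIs:

```python
# Minimal sketch: a directed "follows" graph between false-information
# accounts and MPs. All handles and edges here are hypothetical examples.
import networkx as nx

# (follower, followed) pairs, as would be collected via the Twitter APIs
follow_edges = [
    ("@misinfo_org_1", "@conservative_mp_a"),
    ("@misinfo_org_1", "@conservative_mp_b"),
    ("@misinfo_org_2", "@labour_mp_a"),
]

G = nx.DiGraph()
G.add_edges_from(follow_edges)

# Out-degree of a false-information account = number of MPs it follows here
print(G.out_degree("@misinfo_org_1"))  # -> 2
```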

To examine political bias in relation to false information, the technical team analysed three metrics (a toy illustration of metrics 1 and 3 follows the list):

  1. The intersection of the followers of each false information account and the followers of the Conservative and Labour MPs
  2. A social network graph highlighting the political bias of the followers of each false information account
  3. Whether false information accounts follow more Conservative or Labour MPs
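Metrics 1 and 3 reduce to simple set operations once the follower and follow lists are collected. The sketch below uses entirely made-up follower sets and follow lists, not the study’s real Twitter data:

```python
# Toy illustration of metrics 1 and 3 with hypothetical data.

# Metric 1: intersection of a false-information account's followers with the
# followers of Conservative vs Labour MPs.
false_account_followers = {"u1", "u2", "u3", "u4"}
conservative_mp_followers = {"u1", "u2", "u3", "u9"}
labour_mp_followers = {"u4", "u8"}

con_overlap = len(false_account_followers & conservative_mp_followers)
lab_overlap = len(false_account_followers & labour_mp_followers)
print(con_overlap, lab_overlap)  # -> 3 1: leans Conservative on this metric

# Metric 3: does the false-information account follow more Conservative or
# Labour MPs?
account_follows = {"@conservative_mp_a", "@conservative_mp_b", "@labour_mp_a"}
conservative_mps = {"@conservative_mp_a", "@conservative_mp_b"}
labour_mps = {"@labour_mp_a", "@labour_mp_b"}
print(len(account_follows & conservative_mps),
      len(account_follows & labour_mps))  # -> 2 1
```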

The findings of all three metrics examined indicate that the majority of false information accounts in this study are Conservative-leaning.

Whilst these findings may result from calls by Conservative-leaning politicians to distrust mainstream media, or from how accounts are labelled as false information by fact-checkers, the presence of political bias in social media content can result in it being distrusted.

The interview findings highlighted that information and sources that are politically biased or radicalised are not perceived to be trustworthy. As interviewees noted that the language used can provide insights into political bias, our further research will draw on natural language processing techniques to explore the language used when engaging with false-information accounts.

The findings of this study have been submitted for publication as a journal article.

Disclaimer: This post was first published on the Trilateral Research website.