Implicit trustworthiness assessment based on users’ reactions to claims

September 15th, 2020

Online textual information has increased tremendously over the years, leading to a growing demand for information verification. As a result, Natural Language Processing (NLP) research on tasks such as stance detection (Derczynski et al., 2017) and fact verification (Thorne et al., 2018) is gaining momentum as an attempt to automatically identify misinformation on social networks (e.g., Mastodon and Twitter).

To that end, a stance classification model was trained within the scope of EUNOMIA, which involves identifying the attitude of consenting EUNOMIA Mastodon users towards the truthfulness of the rumour they are discussing. In particular, transfer learning was applied to fine-tune the RoBERTa (Robustly optimized BERT) model (Liu et al., 2019) using the publicly available SemEval 2019 Subtask 7A dataset (Gorrell et al., 2019). This dataset contains Twitter threads, and each tweet (e.g., Hostage-taker in supermarket siege killed, reports say. #ParisAttacks –LINK) in the tree-structured thread is categorised into one of the following four categories:

  • Support: the author of the response supports the veracity of the rumour they are responding to (e.g., I’ve heard that also).
  • Deny: the author of the response denies the veracity of the rumour they are responding to (e.g., That’s a lie).
  • Query: the author of the response asks for additional evidence in relation to the veracity of the rumour they are responding to (e.g., Really?).
  • Comment: the author of the response makes their own comment without a clear contribution to assessing the veracity of the rumour they are responding to (e.g., True tragedy).

Our model achieved 85.1% accuracy and a 62.75% macro F1-score. Because this dataset includes posts written in informal, non-standard language (e.g., OMG that aint right), the obtained scores are not spectacular; even so, our approach surpasses the state-of-the-art results for this dataset, i.e., 81.79% accuracy and 61.87% F1-score (Yang et al., 2019).
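The gap between the accuracy and the macro F1-score is largely an artefact of class imbalance: ‘comment’ posts dominate the RumourEval data while ‘deny’ posts are rare, and macro F1 weights all four classes equally. A minimal, self-contained sketch (with invented toy labels, not our model’s actual predictions) illustrates how the two metrics diverge:

```python
# Toy illustration of why accuracy and macro F1 diverge on the
# RumourEval SDQC task (labels invented, not real model output).

LABELS = ["support", "deny", "query", "comment"]

def accuracy(gold, pred):
    return sum(g == p for g, p in zip(gold, pred)) / len(gold)

def macro_f1(gold, pred):
    """Unweighted mean of per-class F1 scores over all four labels."""
    scores = []
    for label in LABELS:
        tp = sum(1 for g, p in zip(gold, pred) if g == label and p == label)
        fp = sum(1 for g, p in zip(gold, pred) if g != label and p == label)
        fn = sum(1 for g, p in zip(gold, pred) if g == label and p != label)
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        scores.append(2 * precision * recall / (precision + recall)
                      if precision + recall else 0.0)
    return sum(scores) / len(LABELS)

# 'comment' dominates; the classifier gets every 'comment' right but
# misses the single rare 'deny', so accuracy stays high while macro F1 drops.
gold = ["comment"] * 7 + ["support", "query", "deny"]
pred = ["comment"] * 7 + ["support", "query", "comment"]
print(accuracy(gold, pred))            # 0.9
print(round(macro_f1(gold, pred), 3))  # 0.733
```

In practice the same quantity can be computed with scikit-learn’s f1_score(..., average="macro").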

The service has been containerized and will soon be integrated with the rest of the EUNOMIA platform as another useful trustworthiness indicator for users.

References

Derczynski, L., Bontcheva, K., Liakata, M., Procter, R., Hoi, G.W., & Zubiaga, A. (2017). SemEval-2017 Task 8: RumourEval: Determining rumour veracity and support for rumours. SemEval@ACL.

Gorrell, G., Bontcheva, K., Derczynski, L., Kochkina, E., Liakata, M., & Zubiaga, A. (2019). SemEval-2019 Task 7: RumourEval: Determining rumour veracity and support for rumours. In Proceedings of SemEval. ACL.

Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., & Stoyanov, V. (2019). RoBERTa: A Robustly Optimized BERT Pretraining Approach. ArXiv, abs/1907.11692.

Thorne, J., Vlachos, A., Christodoulopoulos, C., & Mittal, A. (2018). FEVER: a large-scale dataset for Fact Extraction and VERification. ArXiv, abs/1803.05355.

Yang, R., Xie, W., Liu, C., & Yu, D. (2019). BLCU_NLP at SemEval-2019 Task 7: An Inference Chain-based GPT Model for Rumour Evaluation. SemEval@NAACL-HLT.

Me, myself and I: Filter Bubbles & Echo Chambers

September 11th, 2020

‘Filter bubbles’ and ‘echo chambers’ are popular terms to describe the phenomenon social scientists call ‘selective exposure’. The theory of selective exposure (Klapper, 1957) states, in brief, that people tend to select information that accords with their existing attitudes, and consequently avoid information that contradicts their beliefs and values.

Different digital tools, algorithms and behaviours rely on the collection of personal data to filter and/or rank items in the daily information stream, creating ‘filter bubbles’ and ‘echo chambers’. As a consequence, they produce greater personalisation but also decreasing diversity of information. Diversity of information may refer to either source or content. Source diversity means the inclusion of a multitude of information sources by a news outlet, as well as the variety of news outlets consumed by a recipient. Content diversity means the range of topics and perspectives on a given topic (Haim, Graefe, & Brosius, 2018).

Despite describing similar phenomena, ‘filter bubbles’ and ‘echo chambers’ are not the same concept. ‘Echo chambers’, on the one hand, describe the phenomenon of being surrounded by like-minded contacts. This might lead to an amplification or reinforcement of pre-existing beliefs. ‘Filter bubbles’, on the other hand, refer to the algorithmic filtering of information to match a user’s needs (Haim, Graefe, & Brosius, 2018). However, there is no consistency in the use of the two terms; for example, Lewandowsky et al (2017) describe ‘echo chambers’ as the space where “most available information conforms to pre-existing attitudes and biases” (p. 359).

Studies have shown that people are more likely to share articles with which they agree (An, Quercia, Cha, Gummadi, & Crowcroft, 2014) and that social media expose the community to a narrower range of information sources, compared to a baseline of information seeking activities. Research has also shown that the diversity of social media communication is significantly lower than the one of interpersonal communication, both on an individual and collective level (Nikolov, Oliveira, Flammini, & Menczer, 2015).

But why do people surround themselves with like-minded contacts, and why do they choose information that confirms what they already believe? There are different answers to this question. The theory of cognitive dissonance (Festinger, 1957) explains this phenomenon by arguing that individuals strive for consistency (or consonance) among their beliefs, attitudes, knowledge, etc. Inconsistencies cause psychological discomfort, which Festinger calls dissonance. Another answer is that surrounding oneself with familiar information helps to cope with or even overcome information overload (Pariser, 2011). A third answer refers to the social aspect of social media: because of its sharing mechanisms, discovering information becomes a social endeavour rather than an individual process.

In the context of social media, ‘filter bubbles’ and ‘echo chambers’ therefore allow users to avoid psychological discomfort and information overload and to engage in a social endeavour in the process of information seeking. However, they pose great risks leading to self-reinforcement and reduced information diversity (Haim, Graefe, & Brosius, 2018). The tendency to surround oneself with like-minded opinions might also prevent an engagement with other ideas. This can facilitate confirmation bias and polarisation (Nikolov, Oliveira, Flammini, & Menczer, 2015; Haim, Graefe, & Brosius, 2018).

But it’s not all bad news: ‘echo chambers’ seem to be focused mainly on political discourse (Nikolov, Oliveira, Flammini, & Menczer, 2015) whereas other topic areas are less affected. Furthermore, there are tools that encourage and enable users to seek information beyond their ‘bubble’. The EUNOMIA feature illustrating how information has been spread and changed by different users aims exactly to show how similar information is discussed in different ‘bubbles’.

References

An, J., Quercia, D., & Crowcroft, J. (2013). Fragmented Social Media: A Look into Selective Exposure to Political News. WWW 2013 Companion, May 13–17, 2013, Rio de Janeiro, Brazil. ACM 978-1-4503-2038-2/13/05.

Festinger, L. (1957). A Theory of Cognitive Dissonance. Stanford, CA: Stanford University Press.

Haim, M., Graefe, A., & Brosius, H.-B. (2018). Burst of the Filter Bubble? Effects of personalization on the diversity of Google News. Digital Journalism, 6(3), 330-343. https://doi.org/10.1080/21670811.2017.1338145

Klapper, J. T. (1957). What We Know About the Effects of Mass Communication: The Brink of Hope. The Public Opinion Quarterly, 21(4), 453-474. https://doi.org/10.1086/266744

Lewandowsky, S., Ecker, U. K., & Cook, J. (2017). Beyond Misinformation: Understanding and Coping with the “Post-Truth” Era. Journal of Applied Research in Memory and Cognition, 6(4), 353–369. https://doi.org/10.1016/j.jarmac.2017.07.008

Nikolov, D., Oliveira, D. F., Flammini, A., & Menczer, F. (2015). Measuring online social bubbles. PeerJ Computer Science, 1(38), 1-14. https://doi.org/10.7717/peerj-cs.38

Pariser, E. (2011). The Filter Bubble: How the New Personalized Web is Changing What We Read and How We Think. New York: Penguin.

Journalists & Social Media – An impossible Love-Affair? The traditional media journalist’s view

September 11th, 2020

Journalists are often accused of producing and spreading ‘fake news’; usually by those in charge when the journalists address “disagreeable” topics concerning them.

In fact, all journalists – and ‘traditional journalists’ in particular – are trying to fight misinformation, especially those working for ‘quality media’ and public-service media institutions. Traditional journalists are obliged by their professional code of ethics and their institutions to do proper research. They have high ethical standards and rules to avoid introducing and spreading misinformation, as well as information that can be misunderstood and misinterpreted.


Proper research takes a lot of time: checking facts and figures, and re-checking every bit of information, is a must in journalism. However, doing research often impedes the fast publication of breaking news, which can be a big competitive disadvantage. Digital sources can therefore often be seen as ideal for quick and successful journalistic research. Can social media be part of this “digital toolkit” for traditional journalists?


Although social media are seen as a medium that surfaces new trends and topics very quickly, the quality of the data and the level of biased and misguided information can run against the requirements of proper journalism. Therefore, traditional journalists might use social media platforms to pick up certain trends, but cannot rely on them as information sources. Information on social media is not always verified, and its source may be unknown. Furthermore, social media promote the creation of “filter bubbles”, meaning that users tend to surround themselves with similar information. At the same time, social media platforms are an important ‘market-place’ to promote traditional journalism ‘products’, as they reach a new generation of readers and extend beyond the usual ‘circles’, e.g., beyond country borders. All in all, the increasing consumption of news on social media proves problematic for the traditional journalist.


EUNOMIA is the silver lining, promoting the critical media literacy skills of social media users and readers, while for traditional journalism this solution has the potential to open a new area of trustworthy information exchange. For the very first time, users themselves are enabled to assess and vote on the trustworthiness of social media information based on user-driven indicators. For assessing trustworthiness, the source of information is considered a key indicator. Therefore, EUNOMIA’s toolkit will encourage users to provide sources, thus promoting the work of traditional media institutions, as they are still seen as the most reliable source of important and proper information (Eurobarometer, 2017). Furthermore, through EUNOMIA, traditional journalists will now be able to track each bit of information found in social media down to its very source, and they will be supported by the community in evaluating the quality and trustworthiness of a given piece of information.


EUNOMIA and the University of Nicosia Decentralized Chapters Community

September 8th, 2020

The University of Nicosia Decentralized Chapters is a global community of like-minded individuals who share a common passion for blockchain technology and cryptocurrencies. The main objective of the community is the dissemination of blockchain awareness and knowledge in several regions around the world. The aim is to address the general lack of knowledge of blockchain’s potential benefits in terms of growth and innovation, as well as the lack of skills in the area, through various activities offering a holistic perspective on the technology.

Decentralized Chapter activities include, but are not limited to:

  • Education/Training
  • Speaker series/free lectures
  • Workshops and seminars
  • Support for Startups
  • Volunteer Activities

To give the Decentralized community hands-on experience with real decentralized applications, its members were invited to join and test the Decentralized Community Social Network platform, built as part of the H2020 project EUNOMIA, for online discussion on blockchain technology and cryptocurrencies. Whether members want to ask questions and learn, post news, or even share rumours on which they want the community’s opinion, this is the right place. The intuitive Twitter-like interface of the federated social network Mastodon and the additional features of EUNOMIA’s toolkit, such as the trust/don’t-trust buttons, assist members of the Decentralized community in discussing and expressing their trust or mistrust of the latest information on blockchain and cryptocurrencies. EUNOMIA features such as information provenance support their evaluation of online information, aiming to sustain the quality of the community’s interactions.

Members that have already joined the Decentralized Social Network Platform have access to:

  • The latest news on Decentralized Chapter’s activities
  • Early access to Decentralized Chapters’ videos and the chance to discuss with presenters
  • Videos and Links related to Blockchain and decentralized technology developments shared and posted by community members
  • The ability to ask technical questions on Blockchain and decentralized ledger technologies and get answers from community experts
  • Access to information on upcoming events, conferences, webinars, learning and educational material in a single place
  • A growing community space for Decentralized professionals and enthusiasts to share and learn about Blockchain technology
  • And last but not least – interaction with the technology itself!

Join the Decentralized community or lead your own Chapter at https://www.decentralized.com/chapters/

Join the EUNOMIA Decentralised community instance on Mastodon at https://decentralized.eunomia.social/

For further information contact dchapters@unic.ac.cy

Empowering the social media user to assess information trustworthiness: Image similarity detection

September 7th, 2020

With the overarching objective to assist users in determining the trustworthiness of information in social media using an intermediary-free approach, EUNOMIA employs a decentralised-architecture Mastodon instance and uses AI technology to generate an information cascade of posts. The cascade facilitates the discovery and visualisation of the source of information and of how information is shared and changed over time, providing users with provenance information when they are determining a post’s trustworthiness. The information cascade is generated based not only on the text content of a post, via paraphrase identification using natural language processing (NLP) techniques, but also on the image content of the post, via image verification using computer vision techniques.
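As a rough illustration of the cascade-building idea, the sketch below links each post to the earliest earlier post whose text is sufficiently similar. It uses a crude character-overlap ratio from Python’s standard library as a stand-in for the actual paraphrase identification model, so the threshold and results are purely illustrative:

```python
import difflib

def similarity(a: str, b: str) -> float:
    # Crude lexical stand-in for the paraphrase identification model:
    # ratio of matching characters between the lowercased texts.
    return difflib.SequenceMatcher(None, a.lower(), b.lower()).ratio()

def build_cascade(posts, threshold=0.5):
    """Link each post to the earliest sufficiently similar earlier post.

    `posts` is a list of (post_id, text) tuples in chronological order;
    returns {child_id: parent_id} edges forming the information cascade.
    """
    edges = {}
    for i, (pid, text) in enumerate(posts):
        for parent_id, parent_text in posts[:i]:
            if similarity(text, parent_text) >= threshold:
                edges[pid] = parent_id
                break
    return edges

posts = [
    (1, "Hostage-taker in supermarket siege killed, reports say"),
    (2, "Reports say the hostage-taker in the supermarket siege was killed"),
    (3, "Lovely weather in Paris today"),
]
print(build_cascade(posts))  # post 2 paraphrases post 1; post 3 is unrelated
```

A real paraphrase model would replace `similarity` with an embedding-based score, but the cascade-building loop stays the same.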

The image verification algorithm aims to determine whether a given pair of images are similar. Advances in the image verification field fall into two broad areas: image embedding and metric learning. In image embedding, a robust and discriminative descriptor is learnt to represent each image as a compact feature vector (embedding). EUNOMIA employs current state-of-the-art feature descriptors generated by an existing convolutional neural network (CNN), which learns features on its own. In metric learning, a distance metric over the CNN embeddings in an embedding space is used to effectively measure the similarity of images. Identical images obtain a 100% similarity score, similar images gain a high similarity score, and different images, as well as some adversarial images, receive a lower similarity score.
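The metric step can be illustrated with a small sketch that scores a pair of pre-computed embeddings by cosine similarity and maps the result onto a percentage. The four-dimensional vectors and the 0–100% mapping are invented for illustration; the real system compares high-dimensional CNN features:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors, in [-1, 1]."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def similarity_percent(a, b):
    # One possible mapping of cosine similarity onto a 0-100% score:
    # identical embeddings -> 100%, orthogonal -> 50%, opposite -> 0%.
    return 50.0 * (1.0 + cosine_similarity(a, b))

# Toy 4-dimensional "embeddings"; a real deployment would compare
# high-dimensional CNN features (e.g. from a ResNet backbone).
original  = [0.9, 0.1, 0.4, 0.2]
identical = [0.9, 0.1, 0.4, 0.2]
perturbed = [0.8, 0.2, 0.5, 0.1]   # e.g. a re-encoded or cropped copy
unrelated = [0.0, 1.0, 0.0, 0.0]

print(round(similarity_percent(original, identical), 6))   # 100.0
print(similarity_percent(original, perturbed)
      > similarity_percent(original, unrelated))           # True
```

The ranking (identical > perturbed > unrelated) mirrors the behaviour described above, regardless of the exact percentage mapping chosen.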

With the implementation of the image similarity functionality, EUNOMIA generates the information cascade by considering both the text and the image content of a social media post. The EUNOMIA platform also has the potential to retrieve visually similar images given a reference image supplied by a EUNOMIA user.

Practicing information hygiene routine to flatten the curve of the ‘infodemic’ – EUNOMIA project’s recommendations

August 31st, 2020

The Covid-19 outbreak has raised afresh the debate about the dangers of misinformation on social media. During the pandemic, myths about coronavirus cures and treatments, its origins and the reasons behind it were widely spread on social network platforms, leading in some cases to dangerous and even fatal actions such as bleach consumption. To this end, António Guterres, the Secretary-General of the United Nations, urged for the need to address the ‘infodemic’ of misinformation.

Gaps in information hygiene guidelines

Framing misinformation within such a context places social media users at the centre of this multi-layered social phenomenon and demands a new, appropriate approach to address it. To this end, social media users need to adopt what we call ‘information hygiene routines’ to protect themselves and their network against the ‘infodemic’ of rapidly spreading misinformation. We define an information hygiene routine as the practice of evaluating online information so as to minimise the risk of consuming and spreading misinformation to one’s network. This practice differs significantly from fact-checking and fake-news detection, which focus on actively detecting and identifying ‘pathogens’ rather than on a daily routine aimed at avoiding ‘infection’.

Information hygiene guidelines such as “check the source of information”, “check whether the account is a bot”, and “flag untrustworthy information for the benefit of others” are regularly recommended by fact-checkers, journalists and media literacy experts to help limit the spread of misinformation. No doubt such recommendations are very important, but they are often too time-consuming, difficult or complicated for users to adopt as part of their everyday routine.

Illusory Truth Effect

The European H2020-funded project EUNOMIA addresses this gap by developing tools to assist social media users in practicing information hygiene routines so as to flatten the curve of the ‘infodemic’. The EUNOMIA toolkit cultivates media literacy skills, empowering social media users to evaluate the trustworthiness of online information themselves. While trustworthiness is related to truthfulness, these concepts differ significantly. People do not always seek to verify whether online information is true; in some cases, the verification process can be very complex and difficult. Trustworthiness in this sense can be considered more important when consuming and spreading information. In fact, a person is inclined to perceive information as trustworthy and credible simply because they are very familiar with it. This is what social psychology calls the “illusory truth effect”. Trustworthiness, then, is a subjective quality and is therefore in the eye of the beholder. To this end, tools supporting the individual evaluation of trustworthiness are key to slowing down the spread of misinformation and minimising its risks.

EUNOMIA project’s approach

EUNOMIA is adopting a positive-first approach to the information trustworthiness challenge in social media, which empowers users to critically assess the information they consume and to protect their network against the spread of misinformation. EUNOMIA provides a toolkit in the form of a social media companion that can currently be implemented in decentralised and open-access social media platforms such as Mastodon and Diaspora*. The social media companion offers multiple trustworthiness indicators, from which users can select and display their preferred ones to support their assessment. These may include indicators of bot activity, such as the ratio of followers to following, and other indicators co-developed with social media users themselves or identified in the scientific literature, such as the objectivity of a post. EUNOMIA also visualises the modifications of online information between different users’ posts in an information cascade. This means that EUNOMIA users can see how a piece of information might have changed when shared or re-shared by different users and/or at different points in time. So the user can see all the different versions of the same piece of information and the ‘journey’ of modifications it has undergone.
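As an illustration of the follower-to-following indicator mentioned above, a hypothetical helper (not EUNOMIA’s actual scoring code) might look like this:

```python
def follower_ratio_indicator(followers: int, following: int) -> float:
    """Ratio of followers to accounts followed, a crude bot-activity signal.

    Accounts that follow huge numbers of users while attracting few
    followers score low; a value around 1.0 suggests a balanced account.
    """
    if following == 0:
        # Follows nobody: report the raw follower count as the ratio.
        return float(followers)
    return followers / following

# A likely-organic account versus a follow-spam pattern:
print(follower_ratio_indicator(250, 300))   # ~0.83
print(follower_ratio_indicator(12, 4800))   # 0.0025
```

On its own such a ratio proves nothing, which is exactly why EUNOMIA presents it alongside other indicators rather than as a verdict.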

EUNOMIA encourages the active and collective participation of social media users to stop the spread of misinformation. Adopting user contribution guidelines, such as the recommendation to ‘flag untrustworthy information for the benefit of others’, EUNOMIA enables users to vote on content trustworthiness and act as a trust reference in their network. The number of votes constitutes one of several trustworthiness indicators that other users might use to evaluate information trustworthiness.

EUNOMIA project’s recommendations

EUNOMIA has developed the first systematic set of information hygiene recommendations that fall in four categories:

a) source of information

b) content

c) language

d) action to mitigate risk.

This set emerged from thorough desk-based research leading to the identification and analysis of a large number of guidelines available online. These guidelines were then evaluated based on their practicality and the evidence of their effectiveness. The identification and evaluation of the guidelines was conducted by an interdisciplinary team of EUNOMIA researchers, who assessed their practicality in terms of the expertise and time required of users to adopt them. Similarly, the effectiveness of the guidelines was assessed based on scientific evidence. The resulting set of recommendations – such as “Check whether the author is anonymous” and “Check whether the language is used to make you emotional” – will be tested with end-users and will inform the further development of the EUNOMIA toolkit.

Disclaimer: This post was first published on the Trilateral Research website

Why EUNOMIA builds on Mastodon; what is a decentralised social network

July 31st, 2020

EUNOMIA is being built on top of Mastodon. Mastodon has some unique qualities that make it a great foundation for research projects and businesses. It is a social network where people can post text messages, images, videos, polls, subscribe to each other to receive those posts in their chronological home feeds, and otherwise interact with each other through replies, favourites, and re-shares of each others’ posts.

But unlike traditional social networks, Mastodon is decentralised. It’s a collection of websites that almost seamlessly integrate with each other. A person who signed up on one of those websites is able to subscribe to and interact with someone who signed up on a totally different website, and there is no one who controls all of them: Each is controlled by a different entity, be it an individual or an organization (let’s call them servers instead of websites from here on). There is no central authority that owns all data or tells people how to use the network or how it should be financed; every participant has full agency over their own participation.

What’s more, Mastodon is free, open-source software. That means that anyone can download its code, inspect it, or modify it to their needs, and more importantly, anyone can run it to create their own place in the network. Because the integration between Mastodon servers is covered by a standard protocol approved by the World Wide Web Consortium, nobody is locked down to always using Mastodon code — anything that implements that same protocol will integrate just as easily. Furthermore, Mastodon puts a lot of weight behind its API (application programming interface). Everything that Mastodon’s default user interface can do, is done through its API, an API fully available to app developers. Developing alternative user interfaces is not only possible, but encouraged.
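For example, Mastodon’s documented endpoints GET /api/v1/statuses/:id and GET /api/v1/timelines/public serve single posts and the public feed as JSON, and an alternative client calls the same URLs the default web interface uses. A small sketch of building these URLs for a hypothetical instance:

```python
from urllib.parse import urljoin

def status_endpoint(instance: str, status_id: str) -> str:
    """URL for GET /api/v1/statuses/:id, which returns one post as JSON."""
    return urljoin(instance, f"/api/v1/statuses/{status_id}")

def public_timeline_endpoint(instance: str) -> str:
    """URL for GET /api/v1/timelines/public, the unauthenticated public feed."""
    return urljoin(instance, "/api/v1/timelines/public")

# Hypothetical instance name, for illustration only:
print(status_endpoint("https://mastodon.example", "108254"))
# -> https://mastodon.example/api/v1/statuses/108254
print(public_timeline_endpoint("https://mastodon.example"))
```

Because the same endpoints are available to every app developer, a companion like EUNOMIA’s needs no special access to read what the official interface reads.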

So, why is Mastodon great for EUNOMIA? A seamless integration of EUNOMIA’s user interface within Mastodon demonstrates these benefits: EUNOMIA has first-class access to a well-documented API that does not lock away any features and will never be paywalled. What’s more, with our own Mastodon server, we have the perfect testing environment of a fully-featured modern social network entirely under our own control.

It’s not me, it’s you: The Third Person Effect

July 16th, 2020

A well-documented phenomenon in communication science is the fact that individuals seem to think that others are more susceptible to media effects than themselves. We tend to assume that commercials, misinformation, or other manipulation by and via the media affect ‘the others’ more than ourselves. This is called the Third Person Effect (Davison, 1983).

Because of this, individuals also tend to overestimate the influence of the media on the attitudes and behaviour of others, and underestimate the influence on themselves. Both of these facts can be harmful: when individuals underestimate the influence of misinformation on their own attitudes and behaviour, they may become more susceptible to it. And when they overestimate the effects on others, this might lead them to take action based on their expectation of other people’s reaction to it.

Expertise plays also a relevant role in this process, especially an individual’s own (perceived) expertise as compared to others’:

“In a sense, we are all experts on those subjects that matter to us, in that we have information not available to other people. This information may not be of factual or technical nature; it may have to do with our own experiences, likes, and dislikes. Other people, we reason, do not know what we know. Therefore, they are more likely to be influenced by the media.” (Davison, 1983, p. 9)

Other relevant factors are in-group and out-group effects. Researchers have observed differences between in-group and out-group members (Jang & Kim, 2018), depending on the idea of ‘the others’, e.g. as an individual, an average person, ‘people just like me’, etc. (Conners, 2005). Jang & Kim (2018) also reported a positive relationship between media literacy and third-person perception, stating that media literacy education could minimize the potential harm of false information. The third-person effect is also related to self-enhancing tendencies, such as the belief that one has a greater ability to learn from an experience than others, which seem to be cross-cultural (Johansson, 2005).

In the context of misinformation, this is an important issue: if people do not think they are influenced by false information (as opposed to others), “they may develop the false impression that information shared among them, regardless of their actual accuracy, is perceived true” (Jang & Kim, 2018, p. 299). Previous results from EUNOMIA co-design research showed that misinformation was claimed to be dangerous because it might have great power over the population and influence their acts and/or decisions.

Sources

Conners, J. L. (2005). Understanding the Third-Person Effect. Centre for the Study of Communication and Culture (CSCC).

Davison, W. P. (1983). The Third-Person Effect in Communication. The Public Opinion Quarterly, 47(1), 1-15.

Jang, S., & Kim, J. (2018). Third person effects of fake news: Fake news regulation and media literacy interventions. Computers in Human Behavior, 80, 295-302.

Johansson, B. (2005). The Third-Person Effect. Only a Media Perception? Nordicom Review, 26(1), 81-94.

EUNOMIA’s PIA+ & user-engagement workshop, Vienna-February 2020

June 30th, 2020

EUNOMIA held a Privacy Impact Assessment+ (PIA+) and user-engagement workshop in Vienna (12th February 2020) as part of our co-design activities, which means always putting the user at the centre of the toolkit’s development. In the workshop, participants had the chance to use and experience the first EUNOMIA prototype. Through hands-on sessions, the aim was to explore users’ insights on the toolkit and to understand their needs and concerns, feeding into its further development.

The end-users’ panel included three average social media users and four traditional media journalists. External experts were also invited to deepen the discussions, including a senior researcher in applied ethics, a senior academic in surveillance, and a software developer and product management expert.

EUNOMIA aims to develop a decentralised toolkit for assisting social media users to practice information hygiene routine and protect their network against misinformation.  

The workshop ran for a full day and was designed following the principles of the co-design method, ensuring EUNOMIA’s user-centric approach. The first session included activities such as quizzes around misinformation and the challenges of recognising false news on social media. There were vivid discussions around the indicators of trustworthiness: participants ranked which of them they consider the most important and why. These discussions confirmed prior results stemming from desk-based research, interviews and surveys, and will feed into the development of further indicators in the toolkit.

In the second session, the first prototype of the EUNOMIA toolkit was introduced, and participants could already sign up and use it during the workshop. The participants discussed the EUNOMIA toolkit and the different features it includes. They were asked how, and whether, they would make use of it and what extra features they would like to see. The participants welcomed the EUNOMIA tools, underlining the potential good use of the content trustworthiness vote and providing valuable insights for the further development of a more user-friendly interface. The direct engagement of consortium partners with the participants was very fruitful, as they could directly discuss the needs of social media users and how the toolkit can be improved.

EUNOMIA adopts a privacy-first approach, and for this reason the workshop dedicated a long session to identifying potential ethical and societal concerns. The workshop participants were first introduced to the ethical, privacy, social and legal impact assessment method (PIA+) that runs throughout the project’s lifecycle. Then, through vignettes (written scenarios) that stemmed from the analysis of user needs and requirements, participants discussed the potential risks of EUNOMIA’s implementation along with its societal benefits.

The workshop proved successful, with the interaction between participants, experts and consortium partners generating important recommendations to ensure privacy-by-design and sustainable tools valuable for social media users.

Pinelopi Troullinou, Research Analyst at Trilateral Research

Understanding the behaviour of social media users

June 29th, 2020

Fill in the EUNOMIA survey on social media user behaviour and help us further develop the EUNOMIA toolkit, which assists users in practicing information hygiene.

Social media users adopt different norms of behaviour, spoken and unspoken rules, patterns and forms of communication in different social media platforms. How misinformation spreads via different channels is, at least partially, connected to such platform-internal norms and logics: different types of users ascribe trustworthiness in a different way, they differ in the way they seek and share information.

EUNOMIA is developing a set of tools that aims to support users in assessing the trustworthiness of information, offering them a variety of indicators. Understanding (norms of) user behaviour is crucial: EUNOMIA follows a co-design approach that puts the user at the centre. Therefore, we are inviting every social media user, as well as traditional media journalists and social journalists, i.e. professional and citizen journalists who carry out journalism on online media, to fill out our survey and support this relevant task.

It will only take 15 minutes of your time!

Link to the survey: https://www.surveyly.com/p/index.php/687178?lang=en