EUNOMIA’s blockchain engine acts as a trust machine

April 14th, 2021

Data integrity refers to the reliability and validity of data. One of the main challenges for social media is how to provide assurances of data validity while safeguarding the data against tampering.

In EUNOMIA, all posts are annotated with a “magic” icon (see the image below). When this special icon appears on a post, it means that EUNOMIA’s blockchain technology guarantees that the data related to the post, e.g., the number of trusts, the number of followers and other associated information, have not been tampered with by any administrator or cyber attacker. However, it does not validate whether the post itself is trustworthy. That is for you to decide!

But why is data integrity so important? In cyberspace, online data are exposed to manipulation by malicious actors who can use this to their advantage. On social media in particular, such actors distribute compromised pieces of data either to make others act on them by spreading misinformation (e.g., propagating rumours) or to leverage the credibility of other people by claiming authorship of certain pieces of digital work (in plain text or in some visual form).

On the other hand, the actual creators of such digital artifacts would like to guarantee data integrity to their online communities (i.e., the consumers of such data). Most importantly, they would like assurance that safeguards are in place to prevent such manipulation by other online users.

But how can one certify and prove that the data being presented are free from any manipulation? EUNOMIA uses a trust engine that builds on a blockchain backbone to provide integrity assurance for the information stored. In addition, EUNOMIA’s blockchain layer creates audit trails that log all state changes and revisions made to the data.

But isn’t this the same as logging data in some database? Audit trails stored in the same way as the application data are equally vulnerable to tampering and manipulation attacks, and there is no mechanism to make hard assertions about data integrity or to validate it. In EUNOMIA, we use a blockchain backbone to make assertions that validate the integrity of arbitrary data and provide traceability.

But how is this ensured? Blockchain technology builds on a peer-to-peer network in which clients (aka nodes) maintain replicas of a distributed data structure (i.e., the blockchain). This data structure organises pieces of information into so-called blocks that are linked together using a cryptographic hash function: the hash of each block encapsulates the hash of the previous block as a pointer, forming a reverse-linked list (aka the chain).

This unique characteristic of the data structure is critical for ensuring data integrity. If an attacker attempts to change any piece of information within a past block, the hash of that block changes, resulting in an unresolved pointer that breaks the chain. The unreferenced block will not be accepted by the rest of the network.
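The hash-chaining and tamper-detection idea above can be sketched in a few lines of Python. This is a minimal, illustrative toy, not EUNOMIA's actual data structures; the field names (`data`, `prev_hash`, `hash`) are assumptions for the example.

```python
import hashlib
import json

def block_hash(block: dict) -> str:
    """Hash a block's contents, including the previous block's hash."""
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def append_block(chain: list, data: dict) -> None:
    """Link a new block to the chain via the previous block's hash."""
    prev = chain[-1]["hash"] if chain else "0" * 64
    block = {"data": data, "prev_hash": prev}
    block["hash"] = block_hash({"data": data, "prev_hash": prev})
    chain.append(block)

def verify(chain: list) -> bool:
    """Recompute every hash and link; any tampering breaks the chain."""
    for i, block in enumerate(chain):
        expected_prev = chain[i - 1]["hash"] if i else "0" * 64
        if block["prev_hash"] != expected_prev:
            return False
        if block["hash"] != block_hash({"data": block["data"],
                                        "prev_hash": block["prev_hash"]}):
            return False
    return True

chain = []
append_block(chain, {"post": "p1", "trusts": 12})
append_block(chain, {"post": "p2", "trusts": 3})
assert verify(chain)

# Tampering with a past block invalidates its hash and breaks the chain.
chain[0]["data"]["trusts"] = 9999
assert not verify(chain)
```

Because each block's hash covers the previous block's hash, changing even one field in an old block cascades forward and is detected by every honest node that re-verifies the chain.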

In addition to the cryptographic hashes that build this unique data structure, a blockchain network employs a safeguard mechanism (the so-called consensus) that allows independent nodes to coordinate, approve, and agree on which data should be appended to the data structure. Since the consensus algorithm operates in a distributed environment, there is no single point of failure and no central control of the information that could be compromised.
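As a toy illustration of the consensus idea, the sketch below has each node independently recompute the hash link against its own replica and vote; a block is appended only if more than two thirds of the nodes approve it. The quorum rule and helper names are assumptions for the example, not EUNOMIA's actual consensus algorithm.

```python
import hashlib
import json

def block_hash(data: dict, prev_hash: str) -> str:
    """Hash a block's data together with the previous block's hash."""
    payload = json.dumps({"data": data, "prev": prev_hash}, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

def node_vote(replica: list, data: dict, claimed_hash: str) -> bool:
    """Each node independently recomputes the hash against its own replica."""
    prev = replica[-1]["hash"] if replica else "0" * 64
    return claimed_hash == block_hash(data, prev)

def propose(replicas: list, data: dict, claimed_hash: str) -> bool:
    """Append the block everywhere only if more than 2/3 of nodes approve."""
    votes = [node_vote(r, data, claimed_hash) for r in replicas]
    if sum(votes) * 3 <= len(votes) * 2:          # need strictly > 2/3
        return False
    for r in replicas:
        prev = r[-1]["hash"] if r else "0" * 64
        r.append({"data": data, "hash": block_hash(data, prev)})
    return True

replicas = [[], [], []]                 # three honest nodes, identical replicas
data = {"post": "p1", "trusts": 5}
good = block_hash(data, "0" * 64)
assert propose(replicas, data, good)    # valid block: accepted by all nodes
assert not propose(replicas, data, "bad")  # forged hash: unanimously rejected
```

No single node decides what gets appended, so compromising one node (or one administrator) is not enough to rewrite the shared history.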

EUNOMIA’s blockchain backbone leverages these unique characteristics of the technology, blockchain-based hash validation and consensus, to safeguard data attestations and guarantee the integrity of the data.

Exploring political bias in false information spread on social media

February 24th, 2021

Social media has become a key source of online news consumption. At the same time, social media users are not passive news consumers: they can further distribute online information to their networks and beyond. It is easy, then, to understand how information that is not factual, or that promotes hate and violence, can be amplified on social media.

This use of social media has an unimaginable impact on the real world, as many recent events have shown. One such example is the deadly attack on the Capitol in January 2021, in which social media played a major role. Misinformation regarding the transparency of the election process had been spread through social media, generating distrust and anger against the newly elected president. The riots are also said to have been organised through groups on social media.

It is evident that social media misinformation can be harmful and, from a political perspective, can threaten democracy. Through our work in the EUNOMIA project, we adopted an interdisciplinary approach to examine political bias in engagement with false information.

EUNOMIA, a 3-year EU-funded innovation project, aims to shift the culture in which we use social media, focusing on trust and nudging social media users to prioritise critical engagement with online information before they react to it. To this end, it provides a toolkit that supports social media users in assessing information trustworthiness. To develop effective solutions, it is necessary to understand the human and societal factors of misinformation.

Our interdisciplinary approach

Within the project, our interdisciplinary team at Trilateral Research leads the work of understanding the social and political considerations in the verification of social media misinformation, and the findings directly feed into further development of the tools. Our approach involved three key stages:

  • Stage 1 – TRI’s social scientists undertook desk-based research to understand the political challenges associated with verifying social media information. This provided insights into how political affinity can influence engagement with misinformation.
  • Stage 2 – Social scientists conducted 19 interviews with citizens, traditional media journalists, and social media journalists. The interviews highlighted how the language used on social media can indicate political bias; furthermore, information and sources that are politically biased or radicalised are not perceived to be trustworthy.
  • Stage 3 – Building on the findings from stages 1 and 2, Trilateral’s technical team undertook a social network analysis to gain insights on the role of political bias in the engagement with misinformation on social media.

Stage 3 involved the team examining a network of 579 influential Twitter accounts of UK Members of Parliament and a sample of 49 false information accounts. Using UK politics as a case study enabled the technical team to contribute to the existing, heavily US-focused research.

The analysis was conducted using a step-by-step approach.

Within the UK context, the findings suggest that most of the accounts engaging with false information have a Conservative leaning. This can be explained in two ways:

  • False information can be generated and spread mainly by Conservative-leaning accounts, or
  • There is bias in the way fact-checkers label the false information accounts.

The insights emerging from Trilateral’s interdisciplinary approach can inform the design and development of tools for tackling misinformation. They also invite fact-checkers and data scientists to explore potential bias when labelling accounts as sources of false information. Furthermore, they contribute to media literacy, raising social media users’ awareness of trustworthiness assessment and further engagement with online information.

The findings encourage social media users to examine the characteristics of accounts that generate and promote content, especially with regard to political bias.

Disclaimer: This post was first published on Trilateral Research website

Klitos Christodoulou, EUNOMIA’s partner from UNIC, in an interview with Blasting News

January 30th, 2021

Klitos Christodoulou, assistant professor at the University of Nicosia, illustrates in his interview with Blasting Talks how EUNOMIA plans to repurpose the idea of a social media platform. Klitos presents the differences between mainstream social media, such as Facebook or Twitter, and blockchain-based social media to explain EUNOMIA’s potential for changing the culture of social networks.

Read the full article here

EUNOMIA’s project coordinator Prof. George Loukas on Blasting Talks

January 10th, 2021

Prof. George Loukas, EUNOMIA project coordinator and Head of the Internet of Things and Security Research Group, was featured on Blasting Talks. He talked about EUNOMIA and the project’s approach to tackling misinformation, placing the user at the centre of the toolkit’s design and development.

Read the full interview here

EUNOMIA at the Industry Forum of GlobeCom 2020

December 22nd, 2020

In the era of the COVID-19 pandemic, social media have become a dominant, direct and highly effective form of news generation and sharing on a global scale. This information is not always trustworthy, as exemplified by the wide spread of misinformation that has proved dangerous for public health. Prof. Charalampos Patrikakis from the University of West Attica, a partner in the EUNOMIA project, co-organised an event focusing on “Fighting Misinformation on Social Networks” at the Industry Forum session of the Global Communications Conference 2020. GlobeCom 2020 is one of the IEEE Communications Society’s two flagship conferences, dedicated to driving innovation in nearly every aspect of communications.

The event included presentations by academics and industry representatives, followed by an open discussion. Prof. Patrikakis delivered a presentation on “EUNOMIA project: a decentralized approach to fighting fake news”. His presentation covered EUNOMIA’s concept of adopting information hygiene routines for protection against the ‘infodemic’ of rapidly spreading misinformation. Moreover, the EUNOMIA presentation included a more extensive graphic description of the project’s toolkit with its four interrelated functional components: the information cascade, the Human-as-Trust-Sensor interface, sentiment and subjectivity analysis, and trustworthiness scoring. Participants were also invited to register on EUNOMIA to see how it works in real time.

Pinelopi Troullinou, EUNOMIA’s partner from Trilateral Research, on Blasting Talks

December 20th, 2020

Pinelopi Troullinou, Research Analyst at Trilateral Research, explains in an interview with Blasting Talks the importance of end-users in the project. Through co-design methods, end-users provide their needs and preferences, which feed into the development of the EUNOMIA toolkit. Furthermore, Pinelopi explains that the project adopts a Privacy, Ethical and Social Impact Assessment (PIA+), making sure that it respects ethical and societal values. EUNOMIA aims to shift the social media culture from “like” to “trust”, triggering users to reflect when engaging with information online. In this context, EUNOMIA provides tools that support social media users in adopting an “information hygiene routine”, protecting themselves and their networks against misinformation.

Read the full interview here

The pathway to trustworthiness assessment; Sentiment Analysis identification

December 14th, 2020

As the amount of content online grows exponentially, new networks and interactions are also growing tremendously fast. EUNOMIA’s user trustworthiness indicators provide a boost towards fair and balanced social network interaction.

Sentiment analysis is one of EUNOMIA’s trustworthiness indicators, assisting users in assessing the trustworthiness of online information. It relies on automatically identifying the sentiment expressed in a user post (negative, positive, or neutral). A sentiment analysis algorithm employs principles from the scientific fields of machine learning and natural language processing. Current trends in the field include AI techniques that outperform traditional dictionary-based approaches.

Dictionary-based techniques work as follows: a list of opinion words, including adjectives, nouns, verbs and word phrases (e.g., excellent, love, supports, expensive, terrible, hate, complicated), constitutes the prior knowledge for extracting the sentiment polarity of a piece of text. For example, in “I love playing basketball”, a dictionary-based method would identify the word “love” and infer the positive polarity of the expression.
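A dictionary-based method of this kind can be sketched in a few lines of Python. The tiny lexicon and scoring rule below are illustrative assumptions, not EUNOMIA's actual word list or algorithm.

```python
# Illustrative opinion lexicon: word -> polarity score.
LEXICON = {
    "excellent": 1, "love": 1, "supports": 1,
    "expensive": -1, "terrible": -1, "hate": -1, "complicated": -1,
}

def dictionary_polarity(text: str) -> str:
    """Sum the polarity of known opinion words and map to a label."""
    score = sum(LEXICON.get(w.strip(".,!?").lower(), 0) for w in text.split())
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

assert dictionary_polarity("I love playing basketball") == "positive"
assert dictionary_polarity("The service was terrible") == "negative"
```

The weakness discussed below is visible even in this toy: the lookup scores each word in isolation, so modifiers such as "not" never influence the result.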

Figure 1. Sentiment Analysis of user opinions

Unfortunately, these methods are unable to grasp long-range sentiment dependencies, sentiment fluctuations or opinion modifiers (e.g., “not so much expensive”, “less terrible”) that exist in abundance in user-generated text.

Figure 2. Demo of how the core of the sentiment analysis component works in EUNOMIA.

We use two models that process user-generated content in parallel. The first model relies on sentiment patterns to extract polarity. For example, in “not so much expensive” the model would identify the relation between “not” and “expensive” and assign positive polarity, in contrast to a dictionary-based method that would rely only on the negative word “expensive”.

The second is an advanced machine learning model that relies on a trained neural network and can identify sentiment fluctuations of longer range. In summary, the first (pattern-based) model relies on sentiment patterns to extract the sentiment orientation, while the second relies on a neural network, trained on labelled data, that can distinguish between positive, neutral and negative text with high accuracy.

The outputs of both models are processed by an ensemble algorithm that decides on the final sentiment classification and the degree to which the models are confident in their predictions.
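One simple way to combine two such outputs can be sketched as follows. Each model is assumed to emit a (label, confidence) pair; the combination rule below (average confidences on agreement, otherwise trust the more confident model) is an illustrative assumption, not EUNOMIA's actual ensemble algorithm.

```python
def ensemble(predictions: list) -> tuple:
    """Combine (label, confidence) pairs from the two sentiment models.

    Agreeing models: keep the shared label and average the confidences.
    Disagreeing models: return the more confident prediction as-is.
    """
    labels = {label for label, _ in predictions}
    if len(labels) == 1:
        avg = sum(conf for _, conf in predictions) / len(predictions)
        return labels.pop(), avg
    return max(predictions, key=lambda p: p[1])

# Pattern-based and neural model agree: confidence is averaged.
assert ensemble([("positive", 0.5), ("positive", 0.75)]) == ("positive", 0.625)
# They disagree: the more confident model wins.
assert ensemble([("positive", 0.5), ("negative", 0.75)]) == ("negative", 0.75)
```

Returning a confidence alongside the label lets downstream indicators distinguish a confident classification from a borderline one.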

The results of the sentiment analysis process provide one of EUNOMIA’s indicators. Sentiment and emotion in language are frequently connected with subjectivity and, on many occasions, with deceitful information. EUNOMIA raises an alert, and the user, by consulting additional meta-information such as EUNOMIA’s other indicators, can investigate the content further and decide whether it is valid and can be safely consumed or shared with the community.

Pantelis Agathangelou, PhD Candidate, University of Nicosia

The featured photo is by Domingo Alvarez E on Unsplash

EUNOMIA’s partner Sorin Adam Matei from SIMAVI in an interview with Blasting Talks

November 29th, 2020

Sorin Adam Matei, EUNOMIA’s partner representing SIMAVI and professor at Purdue University, was featured on Blasting Talks. He highlighted the project’s approach of encouraging social media users to reflect on their engagement with online information. EUNOMIA does not dictate which information should be trusted. Instead, we encourage users to deliberate on online information, providing tools to assist this process.

You can read the full article here

Fighting and coping with misinformation in pandemic crises; COVINFORM Project kicks off featuring two EUNOMIA partners

November 15th, 2020

COVID-19 has been categorised as an infodemic by the WHO. It is the first pandemic in which social media has been used on such a wide scale to share both protective information and false information, including misinformation and disinformation. The groups recognised as most vulnerable to COVID-19 may also be most vulnerable to believing and engaging with misinformation (Vijaykumar, 2020). As responses to COVID-19 misinformation have resulted in injuries and fatalities, it is important to address this (Coleman, 2020).

November 2020 saw the EC-funded COVINFORM project (Grant Agreement No. 101016247) kick off. The three-year project focuses on analysing and understanding the impact of COVID-19 responses on vulnerable and marginalised groups. COVINFORM features two EUNOMIA partners, Trilateral Research and SYNYO, who will draw on their expertise and knowledge gained during the EUNOMIA project to develop guidance and recommendations for designing effective COVID-19 communication and combating misinformation.

In response to this challenge, WP7 of the COVINFORM project focuses on inclusive COVID-19 communication for behaviour change and on misinformation. It will conduct an in-depth analysis of malinformation, disinformation and misinformation to identify insights into how misinformation might affect different groups differently, and produce recommendations for fighting and coping with misinformation during COVID-19 and future pandemic crises.

For further information, please visit the project website (https://www.covinform.eu) or follow the project on Twitter (https://twitter.com/COVINFORM_EU).

COVINFORM is one of 23 new research projects funded by the European Commission with a total of €128 million to address the continuing coronavirus pandemic and its effects. The press release covering the project’s launch is available here.

EUNOMIA at SOCINFO2020: Challenging Misinformation; Exploring Limits and Approaches

October 30th, 2020

The EUNOMIA project joined forces with the H2020 project Co-Inform to deliver the workshop “Challenging Misinformation: Exploring Limits and Approaches” at the Social Informatics Conference 2020 (SocInfo2020) on 6th October 2020.

Pinelopi Troullinou (Trilateral Research) and Diotima Bertel (SYNYO) from the EUNOMIA project invited researchers and practitioners to reflect on the existing approaches and the limitations of current socio-technical solutions for tackling misinformation. The objective of the workshop was to bring together stakeholders from diverse backgrounds to develop collaborations and synergies towards the common goal of empowering social media users.

Four papers were presented at the workshop. Gautam Kishore Shahi from the University of Duisburg-Essen in Germany discussed the different conspiracy theories related to COVID-19 spreading on the web and the challenges of correcting them. He also delivered a second presentation from his team on the impact of fact-checking integrity on public trust. Markus Reiter-Haas from Graz University of Technology and Beate Klosh from the University of Graz, Austria, discussed polarisation in public opinion across different topics of misinformation. Lastly, Alicia Bargar and Amruta Deshpande explored the issue of affordances across different platforms and how these correspond to different types of vulnerability to misinformation.

The second part of the workshop included a hands-on activity allowing for deeper discussion. Participants were presented with a scenario in which citizens, journalists and policymakers needed support to distinguish fact from fiction in the context of the COVID-19 “infodemic”. They were then invited to reflect on the best existing tools and identify their limits. The discussion showed that participants generally referred to two types of tools. On the one hand, tools that assist users in assessing information trustworthiness based on specific characteristics, direct them to trustworthy sources, or provide the information cascade (mainly for images or film) were brought forward. On the other, participants discussed the benefits of tools that enable social media users to think before they share, encouraging them to engage critically with information. The limits identified for these tools centred on the automation technologies used. Furthermore, it was noted that such tools can still be complex for the average social media user and demand a level of digital literacy.

The last part of the workshop was dedicated to synergies and collaborations among the participants. Potential research project ideas were discussed. Participants also welcomed the invitation to contribute to EUNOMIA’s edited volume. The book will focus on the human and societal factors of misinformation and on the approaches and limitations of sociotechnical solutions.