EUNOMIA’s blockchain engine acts as a trust machine

April 14th, 2021 by

Data integrity refers to the reliability and assurance (validity) of data. One of the main challenges of social media is how to create assurances of data validity while safeguarding that the data have not been tampered with.

In EUNOMIA, all posts are annotated with a “magic” icon (see the image below). When this special icon appears on a post, it means that blockchain technology guarantees the integrity of the data associated with the post, e.g., the number of trusts, the number of followers and other related information, and that these have not been tampered with by any administrator or cyber attacker. However, it does not validate whether the post is trustworthy or not. This is for you to decide!

But why is data integrity so important? In cyberspace, online data are exposed to manipulation by malicious actors who can use this to their advantage. Especially in social media, such actors distribute compromised pieces of data either to make others act on them by spreading misinformation (e.g., propagating rumors) or to leverage the credibility of other people to claim authorship of certain pieces of digital work (whether in plain text or in some visual form).

On the other hand, the actual creators of such digital artifacts would like to guarantee data integrity to their online communities (i.e., the consumers of such data). Lastly, and most importantly, they would like to be assured that there are safeguards in place preventing such manipulation by other online users.

But how can one certify and prove that the data being presented are free from manipulation? EUNOMIA uses a trust engine built on a blockchain backbone to provide integrity assurance for the information stored. In addition, the blockchain layer of EUNOMIA creates audit trails that log all state changes or revisions made to the data.

But isn’t this the same as logging data in a database? Audit trails stored in the same way as the application data are equally vulnerable to tampering and manipulation attacks; there is no mechanism to make hard assertions about data integrity or to validate it. In EUNOMIA, we use a blockchain backbone to make such assertions, validate the integrity of arbitrary data, and provide traceability.

But how is this ensured? Blockchain technology builds on a peer-to-peer network in which clients (aka nodes) maintain replicas of a distributed data structure (the blockchain). This data structure organises information into so-called blocks that are linked together using a cryptographic hash function. In brief, the hash function produces a digest of the information in each block, and each block’s hash encapsulates the hash of the previous block as a pointer, thus forming a reverse linked list (aka the chain).

This characteristic of the data structure is critical for ensuring data integrity. If an attacker attempts to change any piece of information within a past block, the hash of that block changes, resulting in an unresolved pointer that breaks the chain. Such an unreferenced block will not be accepted by the rest of the network.
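The hash chaining and tamper detection described above can be sketched in a few lines of Python. This is a deliberately simplified model for illustration only, not EUNOMIA’s actual implementation; the function and field names are ours:

```python
import hashlib
import json

def block_hash(contents):
    # Hash a block's contents, which include the previous block's hash.
    payload = json.dumps(contents, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def append_block(chain, data):
    # Each new block points back to the previous block via its hash.
    prev = chain[-1]["hash"] if chain else "0" * 64
    block = {"data": data, "prev_hash": prev}
    block["hash"] = block_hash({"data": data, "prev_hash": prev})
    chain.append(block)

def chain_is_valid(chain):
    for i, block in enumerate(chain):
        expected = block_hash({"data": block["data"], "prev_hash": block["prev_hash"]})
        if block["hash"] != expected:
            return False  # the block's contents were altered
        if i > 0 and block["prev_hash"] != chain[i - 1]["hash"]:
            return False  # the pointer to the previous block is broken
    return True

chain = []
append_block(chain, {"post": "p1", "trusts": 3})
append_block(chain, {"post": "p2", "trusts": 7})
assert chain_is_valid(chain)

# Tampering with a past block changes its hash, so validation fails.
chain[0]["data"]["trusts"] = 9999
assert not chain_is_valid(chain)
```

Even changing a single past value, as in the last two lines, is enough to break the chain of pointers and make the tampering detectable.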

In addition to the cryptographic hashes that build this unique data structure, a blockchain network employs a safeguard mechanism (so-called consensus) that allows independent nodes to coordinate, approve, and agree on which data should be appended to the data structure. Since the consensus algorithm operates in a distributed environment, there is no single point of failure and no central control of the information that could be compromised.
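The core idea of consensus, that no single node decides alone, can be illustrated with a simple quorum rule. This is only a toy illustration of the principle, not EUNOMIA’s actual consensus protocol, and the names are ours:

```python
from collections import Counter

def reach_consensus(proposals, quorum=2 / 3):
    """Accept a block only if at least a quorum of nodes proposed it.

    proposals: mapping of node id -> hash of the block that node proposes.
    Returns the accepted block hash, or None if no quorum is reached.
    """
    counts = Counter(proposals.values())
    candidate, votes = counts.most_common(1)[0]
    if votes / len(proposals) >= quorum:
        return candidate
    return None

# A single faulty or malicious node cannot override the honest majority.
proposals = {"n1": "abc", "n2": "abc", "n3": "abc", "n4": "xyz"}
assert reach_consensus(proposals) == "abc"

# Without a quorum, no block is appended at all.
assert reach_consensus({"n1": "abc", "n2": "xyz", "n3": "def"}) is None
```

Real consensus protocols add rounds of message exchange and protection against misbehaving nodes, but the effect is the same: data is appended only when independent nodes agree.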

EUNOMIA’s blockchain backbone leverages two unique characteristics of the technology, blockchain-based hash validation and consensus, to safeguard data attestations and guarantee the integrity of the data.

Exploring political bias in false information spread on social media

February 24th, 2021 by

Social media has become a key source of online news consumption. However, at the same time, social media users are not passive news consumers. They can further distribute online information to their networks and beyond. It is easy then to understand how information that is not always factual or information that promotes hate and violence can be amplified on social media.

This use of social media has an enormous impact on the real world, as many recent events have shown. One such example is the deadly Capitol attack in January 2021, in which social media had a major role to play. Misinformation regarding the transparency of the election process had been spread through social media, generating distrust and anger against the newly elected president. Riots are also said to have been organised through groups on social media.

It is evident that social media and misinformation can be harmful, and from a political perspective they can threaten democracy. Through our work in the EUNOMIA project, we adopted an interdisciplinary approach to examine political bias in engagement with false information.

EUNOMIA, a 3-year EU-funded Innovation project, aims to shift the culture in which we use social media focusing on trust, nudging social media users to prioritise critical engagement with online information before they react to it. To this end, it provides a toolkit that supports social media users to assess information trustworthiness. In developing effective solutions, it is necessary to understand the human and societal factors of misinformation.

Our interdisciplinary approach

Within the project, our interdisciplinary team at Trilateral Research leads the work of understanding the social and political considerations in the verification of social media misinformation, and the findings directly feed into further development of the tools. Our approach involved three key stages:

  • Stage 1 – TRI’s social scientists undertook desk-based research to understand the political challenges associated with verifying social media information. This provided insights into how political affinity can influence engagement with misinformation.
  • Stage 2 – Social scientists conducted 19 interviews with citizens, traditional media journalists, and social media journalists. The interviews highlighted how the language used on social media can indicate political bias. Furthermore, information and sources that are politically biased or radicalised are not perceived to be trustworthy.
  • Stage 3 – Building on the findings from stages 1 and 2, Trilateral’s technical team undertook a social network analysis to gain insights on the role of political bias in the engagement with misinformation on social media.

Stage 3 involved the team examining a network of 579 influential Twitter accounts of UK Members of Parliament and a sample of 49 false-information accounts. Using UK politics as a case study enabled the technical team to contribute to the existing, heavily US-focused research.

The analysis was conducted using a step-by-step approach.

Within the UK context, the findings suggest that most of the accounts engaging with false information have a Conservative leaning. This can be explained in two ways:

  • False information can be generated and spread mainly by Conservative-leaning accounts, or
  • There is bias in the way fact-checkers label the false information accounts.

The insights emerging from Trilateral’s interdisciplinary approach can be used in the design and development of tools for tackling misinformation. They also invite fact-checkers and data scientists to explore potential bias when labelling accounts as sources of false information. Furthermore, they contribute to media literacy, raising social media users’ awareness of trustworthiness assessment and further engagement with online information.

The findings encourage social media users to examine the characteristics of accounts that generate and promote content, especially with regard to political bias.

Disclaimer: This post was first published on Trilateral Research website

Klitos Christodoulou EUNOMIA’s partner from UNIC in an interview with Blasting News

January 30th, 2021 by

Klitos Christodoulou, assistant professor at the University of Nicosia, in his interview with Blasting Talks illustrates how EUNOMIA plans to repurpose the idea of a social media platform. Klitos presents the differences between the mainstream social media, like Facebook or Twitter, and blockchain-based social media to explain EUNOMIA’s potential in changing the culture of social networks.

Read the full article here

EUNOMIA’s project coordinator Prof. George Loukas on Blasting Talks

January 10th, 2021 by

Prof. George Loukas, EUNOMIA project coordinator and Head of the Internet of Things and Security Research Group, featured on Blasting Talks. He talked about EUNOMIA and the project’s approach to tackling misinformation, placing the user at the centre of the toolkit’s design and development.

Read the full interview here

EUNOMIA at the Industry Forum of GlobeCom 2020

December 22nd, 2020 by

In the era of the COVID-19 pandemic, social media have become a dominant, direct and highly effective form of news generation and sharing at a global scale. This information is not always trustworthy, as exemplified by the wide spread of misinformation that has proved dangerous for public health. Prof. Charalampos Patrikakis from the University of West Attica, a partner in the EUNOMIA project, co-organised an event focusing on “Fighting Misinformation on Social Networks” at the Industry Forum session of the Global Communications Conference 2020. GlobeCom 2020 is one of the IEEE Communications Society’s two flagship conferences dedicated to driving innovation in nearly every aspect of communications.

The event included presentations by academics and industry representatives, followed by an open discussion. Prof. Patrikakis delivered a presentation on “EUNOMIA project: a decentralized approach to fighting fake news”. His presentation covered EUNOMIA’s concept of adopting information hygiene routines for protection against the ‘infodemic’ of rapidly spreading misinformation. Moreover, the EUNOMIA presentation included a more extensive graphic description of the project’s toolkit with its four interrelated functional components: the information cascade, the Human-as-Trust-Sensor interface, sentiment and subjectivity analysis, and trustworthiness scoring. Participants were also invited to register on EUNOMIA to see how it works in real time.

Pinelopi Troullinou EUNOMIA’s partner from Trilateral Research on Blasting Talks

December 20th, 2020 by

Pinelopi Troullinou, Research Analyst at Trilateral Research, in an interview with Blasting Talks, explains the importance of end-users in the project. Through co-design methods, end-users contribute their needs and preferences, which feed into the development of the EUNOMIA toolkit. Furthermore, Pinelopi explains that the project has adopted a Privacy, Ethical and Social Impact Assessment (PIA+), making sure that it respects ethical and societal values. EUNOMIA aims to shift the social media culture from “like” to “trust”, prompting users to reflect when engaging with information online. In this context, EUNOMIA provides tools that support social media users in adopting an “information hygiene routine”, protecting themselves and their networks against misinformation.

Read the full interview here

EUNOMIA’s partner Sorin Adam Matei from SIMAVI in an interview with Blasting Talks

November 29th, 2020 by

Sorin Adam Matei, EUNOMIA’s partner representing SIMAVI and professor at Purdue University, featured on Blasting Talks. He highlighted the project’s approach of encouraging social media users to reflect on their engagement with information online. EUNOMIA does not dictate which information should or should not be trusted. Instead, we encourage users to deliberate on online information, providing tools to assist this process.

You can read the full article here

EUNOMIA at SOCINFO2020: Challenging Misinformation; Exploring Limits and Approaches

October 30th, 2020 by

EUNOMIA project joined forces with H2020 project Co-Inform delivering together the workshop “Challenging Misinformation: Exploring Limits and Approaches” at the Social Informatics Conference 2020 (SocInfo2020) on 6th October 2020.

Pinelopi Troullinou (Trilateral Research) and Diotima Bertel (SYNYO) from EUNOMIA project invited researchers and practitioners to reflect on the existing approaches and the limitations of current socio-technical solutions to tackle misinformation. The objective of the workshop was to bring together stakeholders from diverse backgrounds to develop collaborations and synergies towards the common goal of social media users’ empowerment.

Four papers were presented at the workshop. Gautam Kishore Shahi from the University of Duisburg-Essen in Germany discussed the different conspiracy theories related to COVID-19 spreading on the web and the challenges of correcting them. He also delivered a second presentation from his team on the impact of fact-checking integrity on public trust. Markus Reiter-Haas from Graz University of Technology and Beate Klosh from the University of Graz, Austria, discussed polarisation in public opinion across different topics of misinformation. Lastly, Alicia Bargar and Amruta Deshpande explored the issue of affordances across different platforms and how these correspond to different types of vulnerability to misinformation.

The second part of the workshop included a hands-on activity allowing for deeper discussion. Participants were presented with a scenario in which citizens, journalists and policymakers needed support to distinguish fact from fiction in the context of the COVID-19 “infodemic”. They were then invited to reflect on the best existing tools and to identify their limits. The discussion showed that participants generally referred to two types of tools. The first were tools that assist users in assessing information trustworthiness based on specific characteristics, direct them to trustworthy sources, or provide an information cascade (mainly for images or film). The second were tools that enable social media users to think before they share, encouraging them to engage critically with information. The limits identified centred on the automation technologies used. Furthermore, it was noted that such tools can still be complex for the average social media user and demand a level of digital literacy.

The last part of the workshop was dedicated to synergies and collaborations among the participants. Potential research project ideas were discussed, and participants welcomed the invitation to contribute to EUNOMIA’s edited volume. The book will focus on the human and societal factors of misinformation and on the approaches and limitations of sociotechnical solutions.

Interested in finding out how good you are at spotting misinformation in areas you may or may not know much about? Take part in EUNOMIA’s first pilot!

October 6th, 2020 by

Are you confident that you can always determine the trustworthiness of what you read on social media? What if you don’t know much about a topic? Can you still do it? Sign up to the Decentralized EUNOMIA social media platform (powered by Mastodon) and try to find the 10 trustworthy and 10 untrustworthy posts that our mischievous researchers will inject (based on their scientific expertise) into a discussion of decentralized technologies between 5th and 14th October. For this very specific period of time only, your selections will be recorded centrally by us, so that we can determine who has got the most of these 20 correct by the end.

How to join our competition in four simple steps:

  1. Make an account here
  2. Log in
  3. Click on “Local” and wait a moment for the “I trust this” and “I don’t trust this” buttons to appear (this might take a few moments because it is still an early version)
  4. Decide whether you “trust” or “don’t trust” the posts by clicking the respective icon.

You win points for correctly marking posts as trustworthy or untrustworthy, and lose points if you get them wrong: +1 point for every correct selection and -1 point for every incorrect one. For the top scorers, who will be announced the week after, the University of Nicosia has prepared a range of very attractive prizes.

In case of a tie, the winner will be the participant with the lowest average response time on their correct answers. The response time is the difference between when one of the 20 posts was made by the researchers and when the participant correctly selected “I trust this” or “I don’t trust this”.
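The scoring and tie-breaking rules above can be sketched as follows. This is a hypothetical illustration; the function names and data layout are ours, not the pilot’s actual code:

```python
def score(selections):
    # +1 for each correct trust/don't-trust selection, -1 for each incorrect one.
    return sum(1 if correct else -1 for correct, _ in selections)

def rank(participants):
    """Rank participants: highest score first, ties broken by the lowest
    average response time on correct answers.

    participants: name -> list of (correct?, response_time_seconds) pairs.
    """
    def key(name):
        selections = participants[name]
        correct_times = [t for ok, t in selections if ok]
        avg = sum(correct_times) / len(correct_times) if correct_times else float("inf")
        return (-score(selections), avg)
    return sorted(participants, key=key)

# Alice gets 2 of 2 right; Bob gets 2 of 3 right, so Alice wins on score.
results = {
    "alice": [(True, 10), (True, 30)],
    "bob": [(True, 5), (False, 2), (True, 100)],
}
assert rank(results) == ["alice", "bob"]

# On equal scores, the faster correct answer wins the tie-break.
tie = {"a": [(True, 10)], "b": [(True, 5)]}
assert rank(tie) == ["b", "a"]
```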

Some advice you may want to use, not only for this competition, but more widely in social media:

  • Be wary of popular posts. Misinformation travels a lot faster than reliable information
  • Be cautious of information forwarded to you through your network
  • Refrain from sharing based only on the headline
  • Be wary of resharing information solely for its high novelty. Misinformation tends to be more novel.
  • Be wary of language that is making you feel emotional. It is designed to become viral, not to inform.
  • Be mindful of your emotions when reading a post. Anger makes you susceptible to partisanship.

Note that this is an early experimental version of EUNOMIA. It will be slow, and the “I trust this” buttons and other EUNOMIA functionality may not appear immediately. Bear with it please for a few moments 🙂

You can find further info on UNIC’s programs here 

The Decentralized site can be found here  

Empowering the social media user to assess information trustworthiness: image similarity detection

September 7th, 2020 by

With the overarching objective of assisting users in determining the trustworthiness of information on social media through an intermediary-free approach, EUNOMIA employs a decentralised architecture (a Mastodon instance) and AI technology to generate an information cascade for posts. The cascade facilitates the discovery and visualisation of the source of information and of how information is shared and changed over time, providing users with provenance information when they are determining a post’s trustworthiness. The information cascade is generated not only from the text content of a post, via paraphrase identification using natural language processing (NLP) techniques, but also from the image content of the post, via image verification using computer vision techniques.
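To give a flavour of text-based matching, the sketch below uses a crude character-level similarity as a stand-in; EUNOMIA’s actual paraphrase identification uses NLP models, and the threshold and function name here are our own illustrative choices:

```python
from difflib import SequenceMatcher

def roughly_paraphrased(post_a, post_b, threshold=0.6):
    """Crude textual-similarity check between two posts.

    A real paraphrase-identification model compares meaning, not characters;
    this character-level ratio merely illustrates the matching step.
    """
    ratio = SequenceMatcher(None, post_a.lower(), post_b.lower()).ratio()
    return ratio >= threshold

# Near-identical wording is linked into the same cascade...
assert roughly_paraphrased("Vaccines are safe and effective",
                           "vaccines are safe and effective!")

# ...while unrelated posts are not.
assert not roughly_paraphrased("Vaccines are safe and effective",
                               "The weather will be sunny tomorrow")
```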

The image verification algorithm is implemented to determine whether a given pair of images is similar. Advances in the image verification field fall into two broad areas: image embedding and metric learning. In image embedding, a robust and discriminative descriptor is learnt to represent each image as a compact feature vector (embedding). EUNOMIA employs current state-of-the-art feature descriptors generated by an existing convolutional neural network (CNN), which learns features on its own. In metric learning, a distance metric is learnt over the CNN embeddings in an embedding space to effectively measure the similarity of images. Identical images obtain a 100% similarity score, similar images obtain a high score, and different images (and some adversarial images) obtain a lower score, as shown below.
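Once each image is represented as an embedding vector, comparing two images reduces to comparing two vectors. The sketch below uses plain cosine similarity as a simple illustrative metric; EUNOMIA’s learnt distance metric is more sophisticated, and the tiny three-dimensional vectors here stand in for real CNN embeddings, which have hundreds or thousands of dimensions:

```python
import math

def similarity_percent(emb_a, emb_b):
    """Cosine similarity between two image embeddings, scaled to a percentage."""
    dot = sum(a * b for a, b in zip(emb_a, emb_b))
    norm_a = math.sqrt(sum(a * a for a in emb_a))
    norm_b = math.sqrt(sum(b * b for b in emb_b))
    return 100 * dot / (norm_a * norm_b)

# Toy embeddings standing in for CNN feature vectors.
original = [0.8, 0.1, 0.3]
identical_copy = [0.8, 0.1, 0.3]
different_image = [0.1, 0.9, 0.05]

# Identical images score 100%; different images score lower.
assert round(similarity_percent(original, identical_copy)) == 100
assert similarity_percent(original, different_image) < 50
```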

With the implementation of the image similarity functionality, EUNOMIA generates the information cascade by considering both the text and the image content of a social media post. The EUNOMIA platform also has the potential to fetch similar-looking images given a reference image from a EUNOMIA user.