The pathway to trustworthiness assessment: Sentiment Analysis identification

December 14th, 2020

As the amount of content online grows exponentially, new networks and interactions are also growing tremendously fast. EUNOMIA's user trustworthiness indicators provide a boost towards fair and balanced social network interaction.

Sentiment analysis is one of EUNOMIA's trustworthiness indicators, assisting users in assessing the trustworthiness of online information. It relies on the automatic identification of the sentiment expressed in a user post (negative, positive, or neutral). A sentiment analysis algorithm employs principles from the scientific fields of machine learning and natural language processing. Current trends in the field include AI techniques that outperform traditional dictionary-based approaches.

Dictionary-based techniques work as follows: a list of opinion words, such as adjectives (e.g., excellent, expensive, terrible, complicated), nouns, verbs (e.g., love, hate, supports) and word phrases, constitutes the prior knowledge for extracting the sentiment polarity of a piece of text. For example, in "I love playing basketball" a dictionary-based method would identify the word "love" and use it to infer the positive polarity of the expression.
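To make this concrete, here is a minimal sketch of a dictionary-based polarity check in Python, assuming a tiny hand-made opinion lexicon; real systems use lexicons with thousands of entries.

```python
# A minimal sketch of dictionary-based sentiment analysis. The two word sets
# below are a toy lexicon made up for illustration, not a real resource.
POSITIVE = {"excellent", "love", "supports", "great"}
NEGATIVE = {"expensive", "terrible", "hate", "complicated"}

def dictionary_polarity(text: str) -> str:
    words = text.lower().split()
    # Score is the count of positive words minus the count of negative words.
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

print(dictionary_polarity("I love playing basketball"))  # -> positive
```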

Figure 1. Sentiment Analysis of user opinions

Unfortunately, these methods are unable to grasp long-range sentiment dependencies, sentiment fluctuations, or opinion modifiers (e.g., "not so expensive", "less terrible"), all of which exist in abundance in user-generated text.

Figure 2. Demo of how the core of the sentiment analysis component works in EUNOMIA.

We use two models that process user-generated content in parallel. The first model relies on sentiment patterns to extract polarity. For example, in "not so expensive" the model would identify the relation between "not" and "expensive" and assign positive polarity, whereas a dictionary-based method would rely only on the negative word "expensive".
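The snippet below sketches this core mechanism under the same toy-lexicon assumption as before; the actual pattern-based model learns far richer patterns than this single negation rule.

```python
# A simplified sketch of pattern-based polarity: a negator within a short
# window flips the polarity of the opinion word it modifies. The lexicons
# and window size are illustrative assumptions.
POSITIVE = {"excellent", "love", "great"}
NEGATIVE = {"expensive", "terrible", "hate"}
NEGATORS = {"not", "never", "no"}

def pattern_polarity(text: str, window: int = 3) -> str:
    words = text.lower().split()
    score = 0
    for i, w in enumerate(words):
        if w in POSITIVE or w in NEGATIVE:
            polarity = 1 if w in POSITIVE else -1
            # A negator up to `window` tokens back flips the polarity.
            if any(v in NEGATORS for v in words[max(0, i - window):i]):
                polarity = -polarity
            score += polarity
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

print(pattern_polarity("not so expensive"))  # -> positive, unlike a plain dictionary lookup
```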

The second model is an advanced machine learning model that relies on a trained neural network and can identify sentiment fluctuations of longer range. In other words, the first (pattern-based) model relies on sentiment patterns to extract the sentiment orientation, while the second relies on a neural network that is trained on labelled data and is capable of distinguishing between positive, neutral and negative text with high accuracy.
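The post does not name the network's architecture, so the following stand-in uses a publicly available pretrained three-class sentiment model from the Hugging Face hub purely for illustration; it is not EUNOMIA's actual model.

```python
# Illustrative only: a pretrained three-class Twitter sentiment classifier
# from the Hugging Face hub as a stand-in for the trained neural network
# described above, whose exact architecture is not specified in this post.
from transformers import pipeline

classifier = pipeline(
    "sentiment-analysis",
    model="cardiffnlp/twitter-roberta-base-sentiment-latest",
)
print(classifier("The new update is not so expensive and works great!"))
# e.g., [{'label': 'positive', 'score': 0.9...}]
```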

The output of both models is processed by an ensemble algorithm that decides on the final sentiment classification and the degree to which the models are confident about their predictions.
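The ensemble rule itself is not specified in the post; one plausible sketch is confidence-weighted voting over the two models' (label, confidence) outputs.

```python
# A hypothetical ensemble rule, assumed for illustration: each model reports
# a (label, confidence) pair, and the final label is the confidence-weighted
# vote. EUNOMIA's actual ensemble algorithm may differ.
from collections import defaultdict

def ensemble(predictions):
    """predictions: list of (label, confidence) pairs, one per model."""
    weights = defaultdict(float)
    for label, confidence in predictions:
        weights[label] += confidence
    label = max(weights, key=weights.get)
    # Normalised weight of the winning label serves as the ensemble confidence.
    return label, weights[label] / sum(weights.values())

print(ensemble([("positive", 0.9), ("neutral", 0.6)]))  # -> ('positive', 0.6)
```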

The results of the sentiment analysis process provide one of EUNOMIA's indicators. Sentiment and emotion in language are quite frequently connected with subjectivity and, on many occasions, with deceitful information. EUNOMIA raises an alert, and the user, by consulting additional meta-information such as EUNOMIA's other indicators, can investigate the content further and decide whether it is valid and can be safely consumed or shared with the community.

Pantelis Agathangelou, PhD Candidate, University of Nicosia

The featured photo is by Domingo Alvarez E on Unsplash

Implicit trustworthiness assessment based on users' reactions to claims

September 15th, 2020

Online textual information has increased tremendously over the years, leading to a demand for information verification. As a result, Natural Language Processing (NLP) research on tasks such as stance detection (Derczynski et al., 2017) and fact verification (Thorne et al., 2018) is gaining momentum as an attempt to automatically identify misinformation on social networks (e.g., Mastodon and Twitter).

To that end, a stance classification model was trained within the scope of EUNOMIA; the task involves identifying the attitude of consenting EUNOMIA Mastodon users towards the truthfulness of the rumour they are discussing. In particular, transfer learning was applied to fine-tune the RoBERTa (Robustly optimized BERT) model (Liu et al., 2019) using the publicly available SemEval-2019 Subtask 7A dataset (Gorrell et al., 2019). This dataset contains Twitter threads, and each tweet (e.g., Hostage-taker in supermarket siege killed, reports say. #ParisAttacks –LINK) in the tree-structured thread is categorised into one of the following four categories:

  • Support: the author of the response supports the veracity of the rumour they are responding to (e.g., I’ve heard that also).
  • Deny: the author of the response denies the veracity of the rumour they are responding to (e.g., That’s a lie).
  • Query: the author of the response asks for additional evidence in relation to the veracity of the rumour they are responding to (e.g., Really?).
  • Comment: the author of the response makes their own comment without a clear contribution to assessing the veracity of the rumour they are responding to (e.g., True tragedy).

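For readers curious about the mechanics, here is a minimal sketch of such a fine-tuning setup with the Hugging Face transformers library. The label scheme follows Subtask 7A, while the dataset handling and hyperparameters are illustrative assumptions rather than the project's exact configuration.

```python
# A minimal sketch of fine-tuning RoBERTa for four-class stance classification.
from transformers import (RobertaTokenizerFast, RobertaForSequenceClassification,
                          Trainer, TrainingArguments)

LABELS = ["support", "deny", "query", "comment"]
tokenizer = RobertaTokenizerFast.from_pretrained("roberta-base")
model = RobertaForSequenceClassification.from_pretrained(
    "roberta-base", num_labels=len(LABELS))

def encode(batch):
    # Each example is assumed to be a reply's text paired with a stance label.
    return tokenizer(batch["text"], truncation=True,
                     padding="max_length", max_length=128)

# train_ds / eval_ds are assumed to be datasets.Dataset objects holding the
# RumourEval threads flattened into (text, label) pairs.
train_ds = train_ds.map(encode, batched=True)
eval_ds = eval_ds.map(encode, batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="stance-model",
                           num_train_epochs=3,
                           per_device_train_batch_size=16,
                           evaluation_strategy="epoch"),
    train_dataset=train_ds,
    eval_dataset=eval_ds,
)
trainer.train()
```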
Our model achieved 85.1% accuracy and a 62.75% macro F1-score. Because this dataset includes posts written in informal, non-standard language (e.g., OMG that aint right), the obtained scores are not spectacular; even so, our approach surpasses the state-of-the-art results for this dataset, i.e., 81.79% accuracy and a 61.87% F1-score (Yang et al., 2019).
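For reference, the reported metrics can be computed as below with scikit-learn; y_true and y_pred are illustrative stand-ins for the gold and predicted stance labels.

```python
# Accuracy is the fraction of exact matches; macro F1 is the unweighted mean
# of the per-class F1 scores, so rare classes count as much as frequent ones.
from sklearn.metrics import accuracy_score, f1_score

y_true = ["support", "deny", "query", "comment", "comment"]
y_pred = ["support", "comment", "query", "comment", "deny"]
print(accuracy_score(y_true, y_pred))             # e.g., 0.6
print(f1_score(y_true, y_pred, average="macro"))  # macro-averaged F1
```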

The service has been containerized and will soon be integrated with the rest of the EUNOMIA platform as another useful trustworthiness indicator for users.

References

Derczynski, L., Bontcheva, K., Liakata, M., Procter, R., Hoi, G.W., & Zubiaga, A. (2017). SemEval-2017 Task 8: RumourEval: Determining rumour veracity and support for rumours. SemEval@ACL.

Gorrell, G., Bontcheva, K., Derczynski, L., Kochkina, E., Liakata, M., & Zubiaga, A. (2019). SemEval-2019 Task 7: RumourEval: Determining rumour veracity and support for rumours. In Proceedings of SemEval. ACL.

Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., & Stoyanov, V. (2019). RoBERTa: A Robustly Optimized BERT Pretraining Approach. ArXiv, abs/1907.11692.

Thorne, J., Vlachos, A., Christodoulopoulos, C., & Mittal, A. (2018). FEVER: a large-scale dataset for Fact Extraction and VERification. ArXiv, abs/1803.05355.

Yang, R., Xie, W., Liu, C., & Yu, D. (2019). BLCU_NLP at SemEval-2019 Task 7: An Inference Chain-based GPT Model for Rumour Evaluation. SemEval@NAACL-HLT.