Combining Human and Machine Confidence in Truthfulness Assessment

Abstract

#machine learning #algorithms #disinformation

Automatically detecting online misinformation at scale is a challenging and interdisciplinary problem. Deciding what should be considered truthful information is sometimes controversial and difficult even for educated experts. As the scale of the problem increases, human-in-the-loop approaches to truthfulness assessment that combine the scalability of machine learning (ML) with the accuracy of human contributions have been considered.

In this work, we look at the potential to automatically combine machine-based systems with human-based systems. The former exploit supervised ML approaches; the latter involve either crowd workers (i.e., human non-experts) or human experts. Since both ML and crowdsourcing approaches can produce a score indicating the level of confidence in their truthfulness judgments (algorithmic and self-reported, respectively), we address the question of whether it is feasible to use such confidence scores to effectively and efficiently combine three approaches: (i) machine-based methods, (ii) crowd workers, and (iii) human experts. The three approaches differ significantly: they range from readily available, cheap, fast, and scalable but less accurate, to scarce, expensive, slow, and not scalable but highly accurate.
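One natural way to exploit these confidence scores is a threshold-based cascade, in which cheap but less accurate judges handle the items they are confident about and escalate the rest. The sketch below illustrates this idea only; the `Judgment` dataclass, the `assess` function, and the threshold values are hypothetical assumptions for illustration, not the specific combination method evaluated in the paper.

```python
# Hypothetical sketch of a confidence-based cascade: an ML classifier handles
# items it is confident about, low-confidence items are escalated to crowd
# workers, and only the hardest ones reach experts. Thresholds and component
# names are illustrative assumptions, not the paper's method.

from dataclasses import dataclass
from typing import Callable


@dataclass
class Judgment:
    label: str          # e.g. "true" or "false"
    confidence: float   # in [0, 1]; algorithmic for ML, self-reported for humans
    source: str         # "ml", "crowd", or "expert"


def assess(
    item: str,
    ml_judge: Callable[[str], Judgment],
    crowd_judge: Callable[[str], Judgment],
    expert_judge: Callable[[str], Judgment],
    ml_threshold: float = 0.9,
    crowd_threshold: float = 0.8,
) -> Judgment:
    """Escalate an item along the cascade until a sufficiently confident judgment is found."""
    ml = ml_judge(item)
    if ml.confidence >= ml_threshold:
        return ml                      # cheap and fast: accept confident ML output
    crowd = crowd_judge(item)
    if crowd.confidence >= crowd_threshold:
        return crowd                   # moderately expensive: accept confident crowd output
    return expert_judge(item)          # scarce and slow: experts are the final fallback
```

In such a design, the thresholds trade off cost against accuracy: raising them sends more items to the more accurate but more expensive judges.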

Type of Publication
articles-and-papers
Theme
artificial-intelligence
Publishing
Journal of Data and Information Quality
Publishing Date
2022
Language
english
Status
other
DOI
10.1145/3546916
Authors
Yunke Qu; Kevin Roitero; David La Barbera; Damiano Spina; Stefano Mizzaro; Gianluca Demartini