Publication

Annotating Credibility: Identifying and Mitigating Bias in Credibility Datasets

In the current post-truth era, online information is consistently under scrutiny with respect to its credibility (its quality and veracity). Computer science has been prolific in developing automated solutions for relevant tasks such as claim verification or bias estimation. However, the validity of such solutions relies heavily on their training and evaluation datasets. Inevitably, systematic and methodological errors (known as data biases) can appear during their compilation. We survey 12 published and freely available datasets and annotate them for data biases using an established theoretical framework. We employ three expert annotators from the disciplines of computer science, philosophy, and communication science, and show that all annotated datasets indeed suffer from biases.

Dimitrios Bountouridis et al.

PDF | 241.098 kB
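
The annotation study summarized above rests on labels from three expert annotators, so the reliability of those labels is naturally a question of inter-annotator agreement. As a hedged illustration (the paper's actual methodology, labels, and data are not shown here), the sketch below computes Fleiss' kappa, a standard chance-corrected agreement measure for a fixed number of annotators assigning categorical labels; the label names, example data, and function name are all hypothetical.

```python
# Minimal sketch: Fleiss' kappa for agreement among a fixed number of
# annotators assigning categorical labels (e.g., "biased" / "unbiased")
# to each dataset. All labels and data below are hypothetical.

from collections import Counter

def fleiss_kappa(annotations: list[list[str]]) -> float:
    """annotations[i] holds the labels all annotators gave item i."""
    n = len(annotations[0])  # raters per item (assumed constant)
    categories = sorted({lab for item in annotations for lab in item})
    N = len(annotations)

    # n_ij: how many raters assigned item i to category j
    counts = [Counter(item) for item in annotations]

    # Mean per-item agreement P_bar over all items
    P_bar = sum(
        (sum(c[cat] ** 2 for cat in categories) - n) / (n * (n - 1))
        for c in counts
    ) / N

    # Expected agreement P_e from overall category proportions
    p = [sum(c[cat] for c in counts) / (N * n) for cat in categories]
    P_e = sum(pj ** 2 for pj in p)

    return (P_bar - P_e) / (1 - P_e)

# Hypothetical labels from three annotators over five datasets:
labels = [
    ["biased", "biased", "biased"],
    ["biased", "biased", "unbiased"],
    ["unbiased", "unbiased", "unbiased"],
    ["biased", "unbiased", "biased"],
    ["biased", "biased", "biased"],
]
print(f"Fleiss' kappa: {fleiss_kappa(labels):.3f}")  # 0.400 for this toy data
```

A kappa near 1 indicates near-perfect agreement, while values near 0 indicate agreement no better than chance; other measures such as Krippendorff's alpha would also fit a three-annotator setup.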
