Validating Untrained Human Annotations Using Extreme Learning Machines

Thomas Forss, Leonardo Espinosa-Leal*, Anton Akusok, Amaury Lendasse, Kaj-Mikael Björk

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference article in proceedings › Scientific › peer-review

Abstract

We present a process for validating and improving annotations made by untrained humans using a two-step machine learning algorithm. The initial validation algorithm is trained on a high-quality annotated subset of the data that the untrained humans are asked to annotate. We then use the machine learning algorithm to predict labels for further samples that are also annotated by the humans, and we test several approaches for joining the algorithmic annotations with the human annotations, with the aim of outperforming either approach on its own. We show that combining human annotations with the algorithmic predictions can improve the accuracy of the annotations.
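The two-step process described in the abstract can be sketched in code. The following is a minimal illustration only: the simple ELM (random hidden layer with a closed-form least-squares readout), the synthetic data, the simulated human error rate, and the agreement-based joining rule are all assumptions for the sketch, not the exact models or joining strategies evaluated in the paper.

```python
import numpy as np

class SimpleELM:
    """Minimal Extreme Learning Machine classifier: a fixed random hidden
    layer followed by output weights solved in closed form (least squares)."""

    def __init__(self, n_hidden=60, seed=0):
        self.n_hidden = n_hidden
        self.rng = np.random.default_rng(seed)

    def fit(self, X, y):
        # Random input weights and biases are drawn once and never trained.
        self.W = self.rng.normal(size=(X.shape[1], self.n_hidden))
        self.b = self.rng.normal(size=self.n_hidden)
        H = np.tanh(X @ self.W + self.b)          # random feature mapping
        # Least-squares output weights; binary targets mapped to {-1, +1}.
        self.beta = np.linalg.pinv(H) @ (2 * y - 1)
        return self

    def decision_function(self, X):
        return np.tanh(X @ self.W + self.b) @ self.beta

    def predict(self, X):
        return (self.decision_function(X) > 0).astype(int)

def combine_annotations(human, machine, machine_confident):
    """One possible joining rule (hypothetical): keep the human label when
    the model agrees with it or is not confident; otherwise trust the model."""
    return np.where((human == machine) | ~machine_confident, human, machine)

# Demo on synthetic data standing in for the annotation task.
rng = np.random.default_rng(1)
X = rng.normal(size=(300, 5))
truth = (X[:, 0] + X[:, 1] > 0).astype(int)

# Simulate untrained human annotators with a 20% error rate.
human = truth.copy()
flip = rng.random(300) < 0.2
human[flip] = 1 - human[flip]

# Step 1: train the validation model on a high-quality annotated subset.
elm = SimpleELM(n_hidden=60, seed=2).fit(X[:100], truth[:100])

# Step 2: predict the remaining samples and join with the human annotations.
scores = elm.decision_function(X[100:])
machine = (scores > 0).astype(int)
confident = np.abs(scores) > 0.5                  # arbitrary margin threshold
final = combine_annotations(human[100:], machine, confident)
```

The confidence margin (0.5 here) is a free parameter of this sketch; the paper instead compares several joining approaches empirically.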
Original language: English
Title of host publication: Proceedings of ELM2019
Editors: Jiuwen Cao, Chi Man Vong, Yoan Miche, Amaury Lendasse
Place of Publication: Cham
Publisher: Springer
Pages: 89-98
ISBN (Electronic): 978-3-030-58989-9
ISBN (Print): 978-3-030-58988-2, 978-3-030-59049-9
DOIs
Publication status: Published - 12 Sept 2020
MoE publication type: A4 Article in a conference publication
Event: 2019 International Conference on Extreme Learning Machine (ELM 2019) - Yangzhou, China
Duration: 14 Dec 2019 - 16 Dec 2019

Publication series

Series: Proceedings in Adaptation, Learning and Optimization
Volume: 14
ISSN: 2363-6084

Conference

Conference: 2019 International Conference on Extreme Learning Machine (ELM 2019)
Country/Territory: China
City: Yangzhou
Period: 14/12/19 - 16/12/19
