Factors affecting inter-rater agreement in human classification of eye movements: a comparison of three datasets

Lee Friedman*, Vladyslav Prokopenko, Shagen Djanian, Dmytro Katrychuk, Oleg V Komogortsev

*Corresponding author for this work

Research output: Contribution to journal › Journal article › Research › peer-review


Abstract

Manual classification of eye movements is used both in research and as a benchmark for automatic algorithms during their development. However, human classification is useful only if it is reliable and repeatable. It is therefore important to know which factors might influence, and which might enhance, the accuracy and reliability of human classification of eye movements. In this report we compare three datasets of human manual classification: two published previously and one, our own, presented here for the first time. For inter-rater reliability, we assess both the event-level F1-score and the sample-level Cohen's κ across groups of raters. The report points to several possible influences on human classification reliability: eye-tracker quality, use of a head restraint, characteristics of the recorded subjects, the availability of detailed scoring rules, and the characteristics and training of the raters.
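
The two agreement metrics named above can be illustrated with a short sketch. The following is a minimal example, not the authors' code: sample-level Cohen's κ is computed with scikit-learn, and event-level F1 is computed under an illustrative matching rule in which a rater's event counts as a hit if same-class events from the other rater cover more than half of its samples. The example labels, the 50% threshold, and the matching rule are assumptions for demonstration only; the paper's actual event-matching procedure may differ.

```python
# Minimal sketch (not the authors' code): sample-level Cohen's kappa between
# two raters, plus an event-level F1 under an assumed overlap-based matching rule.
import numpy as np
from sklearn.metrics import cohen_kappa_score

def events(labels):
    """Split a per-sample label stream into (start, end, class) events."""
    labels = np.asarray(labels)
    change = np.flatnonzero(np.diff(labels)) + 1
    starts = np.concatenate(([0], change))
    ends = np.concatenate((change, [len(labels)]))
    return [(s, e, labels[s]) for s, e in zip(starts, ends)]

def event_f1(rater_a, rater_b, min_overlap=0.5):
    """Illustrative event-level F1: one of rater A's events is a hit if
    same-class events of rater B cover > min_overlap of its samples."""
    ev_a, ev_b = events(rater_a), events(rater_b)
    def hits(src, ref):
        n = 0
        for s, e, c in src:
            cover = sum(min(e, e2) - max(s, s2)
                        for s2, e2, c2 in ref
                        if c2 == c and min(e, e2) > max(s, s2))
            n += cover > min_overlap * (e - s)
        return n
    precision = hits(ev_a, ev_b) / len(ev_a)
    recall = hits(ev_b, ev_a) / len(ev_b)
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Two raters' per-sample classifications (e.g., 0 = fixation, 1 = saccade):
a = [0, 0, 0, 1, 1, 0, 0, 0, 1, 1, 1, 0]
b = [0, 0, 1, 1, 1, 0, 0, 0, 1, 1, 0, 0]
print("sample-level kappa:", cohen_kappa_score(a, b))
print("event-level F1:", event_f1(a, b))
```

Note the general pattern the sketch exposes: sample-level κ penalizes every disagreeing sample, whereas event-level F1 tolerates small boundary disagreements as long as the raters segment the same events, which is why the two metrics can diverge for the same pair of raters.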

Original language: English
Journal: Behavior Research Methods
Volume: 55
Issue number: 1
Pages (from-to): 417-427
Number of pages: 11
ISSN: 1554-351X
DOIs
Publication status: Published - Jan 2023

Bibliographical note

© 2022. The Psychonomic Society, Inc.

Keywords

  • Cohen’s Kappa
  • Event-level agreement
  • Eye-movements
  • F1-score
  • Manual classification
  • Sample-level agreement
