Qualitative analysis of manual annotations of clinical text with SNOMED CT

Jose Antonio Miñarro-Giménez, Catalina Martínez-Costa, Daniel Karlsson, Stefan Schulz, Kirstine Rosenbeck Gøeg

Research output: Contribution to journal › Journal article › Research › peer-review

11 Citations (Scopus)

Abstract

SNOMED CT provides about 300,000 codes with fine-grained concept definitions to support the interoperability of health data. Coding clinical texts with medical terminologies is not a trivial task and is prone to disagreements between coders. We conducted a qualitative analysis to identify sources of disagreement in an annotation experiment that used a subset of SNOMED CT with some restrictions. A corpus of 20 English clinical text fragments, drawn from diverse origins and source languages, was annotated independently by two medically trained annotators following a specific annotation guideline. Following this guideline, the annotators assigned sets of SNOMED CT codes to noun phrases, together with concept and term coverage ratings. The annotations were then manually examined against a reference standard to determine the sources of disagreement, and five categories were identified. In our results, the most frequent cause of inter-annotator disagreement was related to human issues: in several cases, disagreements revealed gaps in the annotation guideline and a lack of annotator training. The remaining issues can be attributed to features of SNOMED CT itself.
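The annotation setup described in the abstract lends itself to a simple data sketch. The Python fragment below is a minimal illustration, not the authors' actual tooling: the record fields, the coverage rating values, and the helper function are assumptions, and the SNOMED CT identifiers are used only as examples. It shows one plausible way to record each annotator's code assignments per noun phrase and to surface phrases where the two annotators' code sets diverge.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Annotation:
    """One annotator's coding of a noun phrase (hypothetical record layout)."""
    phrase: str            # noun phrase extracted from the clinical text
    codes: frozenset[str]  # set of SNOMED CT concept identifiers
    concept_coverage: str  # assumed rating scale, e.g. "full" | "partial" | "none"
    term_coverage: str     # assumed rating scale, e.g. "full" | "partial" | "none"

def disagreements(a: dict[str, Annotation], b: dict[str, Annotation]) -> list[str]:
    """Return the noun phrases on which two annotators assigned different code sets."""
    return [phrase
            for phrase in sorted(a.keys() & b.keys())
            if a[phrase].codes != b[phrase].codes]

# Illustrative example: both annotators coded the same phrase, but one added
# a second, more general concept, so their code sets disagree.
ann1 = {"chest pain": Annotation("chest pain", frozenset({"29857009"}), "full", "full")}
ann2 = {"chest pain": Annotation("chest pain", frozenset({"29857009", "22253000"}),
                                 "partial", "full")}
print(disagreements(ann1, ann2))  # -> ['chest pain']
```

In a study like the one described, records of this kind could then be grouped by disagreement category during the manual review against the reference standard.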

Original language: English
Article number: e0209547
Journal: PLOS ONE
Volume: 13
Issue number: 12
Number of pages: 15
ISSN: 1932-6203
Publication status: Published - 1 Dec 2018

