The Thick Machine: Anthropological AI Between Explanation and Explication

Anders Kristian Munk*, Asger Gehrt Knudsen, Mathieu Jacomy

*Corresponding author for this work

Research output: Contribution to journal › Journal article › Research › peer-review

15 Citations (Scopus)
246 Downloads (Pure)


According to Clifford Geertz, the purpose of anthropology is not to explain culture but to explicate it. That should cause us to rethink our relationship with machine learning. It is, we contend, perfectly possible that machine learning algorithms, which are unable to explain, and could even be unexplainable themselves, can still be of critical use in a process of explication. Thus, we report on an experiment with anthropological AI. From a dataset of 175K Facebook comments, we trained a neural network to predict the emoji reaction associated with a comment and asked a group of human players to compete against the machine. We show that a) the machine can reach the same (poor) accuracy as the players (51%), b) it fails in roughly the same ways as the players, and c) easily predictable emoji reactions tend to reflect unambiguous situations where interpretation is easy. We therefore repurpose the failures of the neural network to point us to deeper and more ambiguous situations where interpretation is hard and explication becomes both necessary and interesting. We use this experiment as a point of departure for discussing how experiences from anthropology, and in particular the tension between formalist ethnoscience and interpretive thick description, might contribute to debates about explainable AI.
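The paper's actual neural network and its 175K-comment dataset are not reproduced here. As a rough, self-contained illustration of the prediction task it describes (comment text in, emoji reaction out), the sketch below uses a toy bag-of-words nearest-centroid classifier in pure Python. The reaction labels, the handful of training snippets, and the method itself are all invented stand-ins for illustration, not the authors' data or architecture.

```python
from collections import Counter, defaultdict
import math

# A handful of reaction labels standing in for Facebook's emoji reactions.
REACTIONS = ["love", "haha", "wow", "sad", "angry"]

# Toy training pairs -- a stand-in for the paper's 175K Facebook comments.
TRAIN = [
    ("congratulations so happy for you", "love"),
    ("what a beautiful wedding", "love"),
    ("lol that is hilarious", "haha"),
    ("this joke made my day", "haha"),
    ("i cannot believe this happened", "wow"),
    ("what an incredible record", "wow"),
    ("rest in peace my friend", "sad"),
    ("such a tragic loss", "sad"),
    ("this policy is outrageous", "angry"),
    ("how dare they do this", "angry"),
]

def bow(text):
    """Bag-of-words vector represented as a Counter of tokens."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two Counter vectors."""
    num = sum(a[w] * b[w] for w in set(a) & set(b))
    den = (math.sqrt(sum(v * v for v in a.values()))
           * math.sqrt(sum(v * v for v in b.values())))
    return num / den if den else 0.0

# One centroid (summed bag-of-words) per reaction class.
centroids = defaultdict(Counter)
for text, label in TRAIN:
    centroids[label].update(bow(text))

def predict(comment):
    """Return the reaction whose centroid is most similar to the comment."""
    vec = bow(comment)
    return max(REACTIONS, key=lambda r: cosine(vec, centroids[r]))
```

Scoring such a predictor against human guesses on held-out comments, as the paper does, would then amount to comparing per-comment accuracies; the interesting cases the authors highlight are precisely the comments where both machine and humans fail.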

Original language: English
Journal: Big Data & Society
Issue number: 1
Pages (from-to): 1-14
Number of pages: 14
Publication status: Published - 2022


  • Computational anthropology
  • Explainable AI
  • Machine learning
  • Thick description
  • Ethnoscience


