People Should Have a Right Not to Be Subjected to AI Profiling Based on Publicly Available Data! A Reply to Holm

Thomas Ploug*

*Corresponding author for this work

Research output: Contribution to journal › Comment/debate › Research › peer-review


Abstract

Studies suggest that machine learning models may accurately predict depression and other mental health-related conditions based on social media data. I have recently argued that individuals should have a sui generis right not to be subjected to AI profiling based on publicly available data without their explicit informed consent. In a comment, Holm claims that there are scenarios in which individuals have reason to prefer attempts at social control exercised on the basis of accurate AI predictions, and that the suggested right unfairly burdens individuals by allowing them to consent to AI profiling. In this reply, I address both of these alleged problems and their underlying assumptions and show why they fail to provide any reason not to introduce the suggested right.

Original language: English
Article number: 49
Journal: Philosophy and Technology
Volume: 36
Issue number: 3
ISSN: 2210-5433
DOIs
Publication status: Published - Sept 2023

Bibliographical note

Publisher Copyright:
© 2023, The Author(s).

Keywords

  • Artificial intelligence
  • Privacy
  • Public data
  • Right not to be profiled
  • Social control
  • Stigmatization
