The Right Not to Be Subjected to AI Profiling Based on Publicly Available Data—Privacy and the Exceptionalism of AI Profiling

Thomas Ploug*

*Corresponding author for this work

Research output: Contribution to journal › Journal article › Research › peer-review


Abstract

Social media data hold considerable potential for predicting health-related conditions. Recent studies suggest that machine-learning models may accurately predict depression and other mental health-related conditions based on Instagram photos and Tweets. In this article, it is argued that individuals should have a sui generis right not to be subjected to AI profiling based on publicly available data without their explicit informed consent. The article (1) develops three basic arguments for a right to protection of personal data trading on the notions of social control and stigmatization, (2) argues that a number of features of AI profiling make individuals more exposed to social control and stigmatization than other types of data processing (the exceptionalism of AI profiling), (3) considers a series of other reasons for and against protecting individuals against AI profiling based on publicly available data, and finally (4) argues that the EU General Data Protection Regulation does not ensure that individuals have a right not to be AI profiled based on publicly available data.

Original language: English
Article number: 14
Journal: Philosophy and Technology
Volume: 36
Issue number: 1
ISSN: 2210-5433
DOIs
Publication status: Published - Mar 2023

Bibliographical note

Publisher Copyright:
© 2023, The Author(s).

Keywords

  • Artificial intelligence
  • Privacy
  • Right not to be profiled
  • Social control
  • Stigmatization
