The Right Not to Be Subjected to AI Profiling Based on Publicly Available Data—Privacy and the Exceptionalism of AI Profiling

Thomas Ploug*

*Corresponding author

Publication: Contribution to journal › Journal article › Research › peer review

6 Citations (Scopus)
43 Downloads (Pure)

Abstract

Social media data hold considerable potential for predicting health-related conditions. Recent studies suggest that machine-learning models may accurately predict depression and other mental health-related conditions based on Instagram photos and Tweets. In this article, it is argued that individuals should have a sui generis right not to be subjected to AI profiling based on publicly available data without their explicit informed consent. The article (1) develops three basic arguments for a right to protection of personal data trading on the notions of social control and stigmatization, (2) argues that a number of features of AI profiling make individuals more exposed to social control and stigmatization than other types of data processing (the exceptionalism of AI profiling), (3) considers a series of other reasons for and against protecting individuals against AI profiling based on publicly available data, and finally (4) argues that the EU General Data Protection Regulation does not ensure that individuals have a right not to be AI profiled based on publicly available data.

Original language: English
Article number: 14
Journal: Philosophy and Technology
Volume: 36
Issue number: 1
ISSN: 2210-5433
DOI
Status: Published - Mar. 2023

Bibliographic note

Publisher Copyright:
© 2023, The Author(s).

