Abstract
Studies suggest that machine learning models may accurately predict depression and other mental health-related conditions based on social media data. I have recently argued that individuals should have a sui generis right not to be subjected to AI profiling based on publicly available data without their explicit, informed consent. In a comment, Holm claims that there are scenarios in which individuals have reason to prefer attempts at social control exercised on the basis of accurate AI predictions, and that the suggested right burdens individuals unfairly by allowing those individuals to consent to AI profiling. In this reply, I address both of these alleged problems and their underlying assumptions, and show why they fail to provide any reason not to introduce the suggested right.
Original language | English |
---|---|
Article number | 49 |
Journal | Philosophy and Technology |
Volume | 36 |
Issue number | 3 |
ISSN | 2210-5433 |
DOIs | |
Publication status | Published - Sept 2023 |
Bibliographical note
Publisher Copyright: © 2023, The Author(s).
Keywords
- Artificial intelligence
- Privacy
- Public data
- Right not to be profiled
- Social control
- Stigmatization