An increasing number of intelligent data-driven health systems seek to support patients and clinicians in decision-making tasks. However, the recommendations provided by such systems can negatively impact the reasoning abilities of their users, giving rise to cognitive biases. These mental processes can, in turn, harm the quality of the user's decisions. While decision support systems are typically designed to increase user efficiency, known approaches to mitigating such biases primarily rely on slowing down the decision-making process, offsetting any efficiency benefits. This position paper calls attention to the efficiency--quality trade-off in bias mitigation and outlines a future research direction for bias mitigation in AI decision support.
2023 ACM CHI Conference on Human Factors in Computing Systems (CHI '23), 23 April 2023 – 28 April 2023.
Published in: Extended Abstracts of the 2023 CHI Conference on Human Factors in Computing Systems (CHI EA '23).