Abstract
INTRODUCTION: Large language models have recently gained interest within the medical community. Their clinical impact is currently being investigated; one potential application, pharmaceutical counselling, has yet to be assessed.
METHODS: We performed a retrospective investigation of ChatGPT 3.5 and 4.0 in response to 49 consecutive inquiries encountered in the joint pharmaceutical counselling service of the Central and North Denmark regions. Answers were rated by comparing them with the answers generated by physicians.
RESULTS: Compared with the pharmaceutical counselling service, ChatGPT 3.5 and 4.0 provided answers rated better or equal in 39 (80%) and 48 (98%) cases, respectively. ChatGPT's answers were not accompanied by references, and when providing multiple answers, ChatGPT did not elaborate on which would be considered most clinically relevant.
CONCLUSIONS: In drug-related questions, ChatGPT (4.0) provided answers of a reasonably high quality. The lack of references and an occasionally limited clinical interpretation make it less useful as a primary source of information.
FUNDING: None.
TRIAL REGISTRATION: Not relevant.
| | |
|---|---|
| Original language | English |
| Article number | A05240360 |
| Journal | Danish Medical Journal |
| Volume | 72 |
| Issue number | 1 |
| Number of pages | 6 |
| ISSN | 1603-9629 |
| DOI | |
| Status | Published - 11 Dec 2024 |