Large language models (LLMs) have demonstrated potential to enhance various aspects of healthcare, including healthcare provider-patient communication. However, concerns have been raised that such communication may adopt implicit communication norms that deviate from what patients want or need from conversations with their healthcare provider. This paper explores the possibility of using LLMs to enable patients to choose their preferred communication style when discussing their medical cases. Through a proof-of-concept demonstration using ChatGPT-4, we suggest that LLMs can emulate different healthcare provider-patient communication approaches (building on Emanuel and Emanuel's four models: paternalistic, informative, interpretive and deliberative). This allows patients to engage in a communication style that aligns with their individual needs and preferences. We also highlight potential risks associated with using LLMs in healthcare communication, such as reinforcing patients' biases and the possibility that the persuasive capabilities of LLMs lead to unintended manipulation.
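
As a rough illustration of how such a proof of concept might be set up (this is a minimal sketch, not the authors' implementation or prompts), the snippet below uses the OpenAI Python client to pair a patient's chosen communication model with a system prompt. The four model names follow Emanuel and Emanuel; the prompt wording, the respond function and the example case are hypothetical.

# Minimal sketch: steering an LLM toward one of Emanuel and Emanuel's four
# communication models via a system prompt. Assumes the OpenAI Python client
# (openai >= 1.0) and an OPENAI_API_KEY in the environment. The style
# descriptions are illustrative paraphrases, not text from the paper.
from openai import OpenAI

STYLE_PROMPTS = {
    "paternalistic": (
        "Act as a clinician who recommends the course of action you judge "
        "best for the patient's health, explaining why it is advisable."
    ),
    "informative": (
        "Act as a clinician who neutrally presents the relevant medical facts "
        "and available options, leaving the choice entirely to the patient."
    ),
    "interpretive": (
        "Act as a clinician who helps the patient clarify their own values "
        "and relate those values to the available options."
    ),
    "deliberative": (
        "Act as a clinician who engages the patient in dialogue about which "
        "health-related values are worth pursuing in this situation."
    ),
}

def respond(case_description: str, style: str) -> str:
    """Generate a reply to the patient's case in their chosen communication style."""
    client = OpenAI()
    completion = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": STYLE_PROMPTS[style]},
            {"role": "user", "content": case_description},
        ],
    )
    return completion.choices[0].message.content

# Example: the same (hypothetical) case discussed in two different styles.
# print(respond("I have been offered surgery or watchful waiting.", "informative"))
# print(respond("I have been offered surgery or watchful waiting.", "deliberative"))

In such a setup, the patient's stated preference selects the system prompt, so the same underlying model and case description yield conversations with different communication norms.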

Original publication

DOI

10.1136/jme-2024-110256

Type

Journal

J Med Ethics

Publication Date

03/03/2025

Keywords

Ethics, Medical; Information Technology; Personal Autonomy