The World Health Organization (WHO) is urging caution in the use of artificial intelligence (AI)-generated large language model tools (LLMs) to safeguard human well-being, safety, autonomy, and public health.
LLMs, such as Bard, ChatGPT, and BERT, mimic human communication and are among the most rapidly expanding platforms. While their potential to support healthcare needs is exciting, WHO emphasizes the need to carefully examine the risks of using LLMs to improve access to health information, support decision-making, or strengthen diagnostic capacity in under-resourced settings, in order to protect people's health and reduce inequity.
While recognizing the value of technology, including LLMs, in supporting healthcare professionals, patients, researchers, and scientists, the WHO is concerned that the caution normally exercised with new technologies is not being applied consistently to LLMs. This lack of caution could lead to errors by healthcare workers, cause harm to patients, erode trust in AI, and thereby undermine or delay the long-term benefits of such technologies worldwide.
The concerns related to the use of LLMs in healthcare include:
- The data used to train AI may be biased, leading to misleading or inaccurate health information
- LLMs generate responses that can appear authoritative and plausible yet be completely incorrect or contain serious errors, which is especially dangerous in health-related contexts
- LLMs may be trained on data for which consent was not given, and may fail to protect the sensitive information, including health data, that users provide
- LLMs can be misused to create and spread highly convincing disinformation that the public cannot easily distinguish from reliable health content
- Patient safety and protection must be ensured while technology firms work to commercialize LLMs in healthcare
WHO suggests that these concerns be addressed, and clear evidence of benefit be demonstrated, before LLMs are widely adopted in routine healthcare and medicine, whether by individuals, caregivers, or health-system administrators and policy-makers.
Furthermore, WHO emphasizes the significance of adhering to ethical principles and appropriate governance as outlined in their guidance on AI ethics and governance for health. The six principles according to WHO are:
- Safeguarding autonomy
- Ensuring transparency, explainability, and comprehensibility
- Ensuring inclusiveness and equity
- Fostering responsibility and accountability
- Promoting human well-being alongside the public interest
- Promoting AI that is responsive and sustainable
It is essential to uphold these principles when designing, developing, and deploying AI in the healthcare sector.