Untested AI-based tools could harm patients, WHO warns
As excitement builds throughout health and information systems worldwide over the rich potential benefits of new tools generated by artificial intelligence (AI), the UN health agency on Tuesday called for action to ensure that patients are properly protected.
Cautionary measures normally applied to any new technology are not being exercised consistently with regard to large language model (LLM) tools, which use AI for crunching data, creating content, and answering questions, the World Health Organization (WHO) warned.
“Precipitous adoption of untested systems could lead to errors by healthcare workers, cause harm to patients, erode trust in AI, and thereby undermine or delay the potential long-term benefits and uses of such technologies around the world,” the agency said.
As such, the agency proposed that these concerns be addressed, and clear evidence of benefit be demonstrated, before such tools see widespread use in routine health care and medicine.
Avoiding health-related errors
While enthusiastic about the appropriate use of technologies to support healthcare professionals, patients, researchers, and scientists, WHO said these new AI-based tools require vigilance, especially in light of rapidly expanding platforms such as ChatGPT, Bard, BERT, and many others that imitate the understanding, processing, and production of human communication.
For instance, these new tools can generate answers that may appear authoritative and plausible to an end user. The danger is that these responses may be completely incorrect or contain serious errors, which is especially concerning for health-related questions, WHO said.
They can also be misused to generate and disseminate highly convincing disinformation in the form of text, audio, or video content that is difficult for the public to differentiate from reliable health content.
Safely harnessing AI
The risks must be examined carefully when these new tools are used to improve access to health information, serve as decision-support aids, or enhance diagnostic capacity in under-resourced settings, in order to protect people’s health and reduce inequity, WHO said.
Committed to harnessing new technologies to improve human health, WHO recommends that policymakers ensure patient safety and protection as technology firms work to commercialize LLM tools.
The agency reiterated the importance of applying ethical principles and appropriate governance. In this vein, the UN health agency published Ethics and Governance of Artificial Intelligence for Health in 2021, ahead of the adoption of the first global agreement on the ethics of AI.
© UN News (2023) — All Rights Reserved. Original source: UN News