Studies have found medical AI tools tend to downplay the symptoms of women and certain minorities. Photograph: iStock

Artificial intelligence tools used by doctors risk leading to worse health outcomes for women and ethnic minorities, as a growing body of research shows that many large language models (LLMs) downplay the symptoms of these patients.

A series of recent studies has found that the uptake of AI models across the healthcare sector could lead to biased medical decisions, reinforcing patterns of under-treatment that already exist across different groups in western societies.

The findings by researchers at leading US and British universities suggest that medical AI tools powered by LLMs tend to understate the severity of symptoms in female patients, while also displaying less “empathy” towards Black and Asian patients.

The warnings come as the world’s top AI groups, such as Microsoft, Amazon, OpenAI and Google, rush to develop products that aim to reduce physicians’ workloads and speed up treatment, all in an effort to help overstretched health systems.
