Large language models (LLMs) – the technology behind artificial intelligence (AI) chatbots like ChatGPT – can recall vast amounts of medical information. But new research suggests that their reasoning skills remain inconsistent.

A study led by investigators in the United States found that popular LLMs are prone to sycophancy, or the tendency to be overly agreeable even when responding to illogical or unsafe prompts.

Published in the journal npj Digital Medicine, the study hi…
