AI chatbots flagged as potentially dangerous sources of medical advice, study finds

AI chatbots often give medical advice that is inaccurate or inconsistent, which could put users at risk, according to a new study from the University of Oxford.

Researchers found that people using AI for health advice received a mix of helpful and harmful responses, making it difficult to know what information to trust.

The study involved 1,300 participants who were given health scenarios, such as severe headaches or ongoing exhaustion. Some participants used AI chatbots to help work out what might be wrong and whether they should seek medical care. Researchers then assessed whether participants reached the right decision, such as seeing a GP or going to the emergency department.

Those who used AI often struggled to ask the right questions and received different answers depending on how their symptoms were described. Many found it hard to tell which information was useful.

Dr. Rebecca Payne, a lead researcher, warned that asking chatbots about symptoms could be “dangerous.” Senior author Dr. Adam Mahdi said that while AI can provide medical information, people often struggle to draw clear, practical advice from it.

Experts also raised concerns about AI repeating long-standing biases in healthcare. Others noted, however, that newer health-focused AI tools are being developed and could improve with stronger regulation and clearer medical guidelines.