Experts say chatbots can provide more tailored responses than a standard Google search.
AI (Artificial Intelligence) letters and robot hand are placed on computer motherboard in this illustration created on June 23, 2023. Photo: REUTERS/Dado Ruvic/Illustration/File Photo
As hundreds of millions of people turn to artificial intelligence chatbots for advice, tech companies are now rolling out tools designed specifically to answer health-related questions.
In January, OpenAI launched ChatGPT Health, a version of its chatbot that can review users’ medical records, wellness apps and data from wearable devices to respond to health queries. The service is currently available through a waiting list.
Rival company Anthropic offers similar features to some users of its Claude chatbot.
Both firms stress that their large language models are not a replacement for doctors and should not be used to diagnose illnesses. Instead, they say the tools can explain complex test results, help users prepare for medical appointments and identify health trends in records and app data.
Experts say chatbots can provide more tailored responses than a standard Google search, especially when users share detailed health information such as age, prescriptions and medical history.
“If used responsibly, these tools can offer useful information,” said Dr Robert Wachter of the University of California, San Francisco. However, he advised users to provide as much relevant detail as possible to improve accuracy.
Doctors warn that AI should never be used during medical emergencies. Symptoms like chest pain, shortness of breath or severe headache require immediate medical attention.
Even in non-urgent cases, experts recommend approaching AI-generated advice with caution. Dr Lloyd Minor, dean of Stanford’s medical school, said major health decisions should not rely solely on chatbot responses.
Privacy is another key concern. Health data shared with AI companies is not protected under the US federal health privacy law known as HIPAA, which applies to doctors and hospitals.
OpenAI and Anthropic say health data is kept separate and is not used to train their models, and users must actively opt in before any of their information is shared.
Early studies show mixed results. Research from Oxford University in 2024 found that people using AI chatbots did not make better health decisions than those using online searches.
Although the chatbots correctly identified medical conditions in written test scenarios about 95% of the time, their performance dropped when real users interacted with them.
Experts suggest seeking a second AI opinion or consulting a medical professional for added confidence.
