How reliable is Dr. ChatGPT? When it comes to questions about health, the chatbot can apparently be influenced. This is shown by a recent study: the more “evidence” the AI is presented with, the less accurate its answers become.

Self-diagnosing illnesses via the Internet is no longer uncommon; experts refer to the phenomenon as cyberchondria. Doctors view it critically, mainly because of the high error rate. Nevertheless, more and more people turn to the Internet and Google with their health questions – and now also to ChatGPT.

The risk of false diagnoses is high. This is also shown by a recent study from Australia. Researchers from the national science agency CSIRO and the University of Queensland (UQ) found that ChatGPT in particular can be swayed by supposed evidence.

Dr. ChatGPT: Additional information reduces accuracy

The research group examined a hypothetical scenario in which an average person with no medical training asks ChatGPT for help with a health question: does treatment X have a positive effect on condition Y?

The researchers ran a total of 100 such scenarios, with questions ranging from “Can zinc help with a cold?” to “Will drinking vinegar dissolve a stuck fish bone?”. They then compared ChatGPT's responses against the answers supported by medical evidence.

In a second round, the researchers did not feed ChatGPT the questions alone, but added supporting or contrary evidence to the prompt. The result: given the question alone, the AI provided the correct answer 80 percent of the time.

However, when the prompt also contained concrete evidence, accuracy dropped to 63 percent. Allowing the model to give an “uncertain” answer reduced accuracy further, to 28 percent.
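To make the setup concrete, here is a minimal sketch in Python of the three prompt conditions described above. It is not the study's actual code: the helper ask_chatgpt() and the evidence snippet are hypothetical placeholders.

```python
# Sketch of the three prompt conditions: question only, question plus
# evidence, and question plus evidence with an "unsure" option allowed.
# ask_chatgpt() is a hypothetical stand-in for a real ChatGPT API call.

def ask_chatgpt(prompt: str) -> str:
    """Hypothetical wrapper; replace with a real API call."""
    return "Yes"  # dummy response so the sketch runs end to end

question = "Can zinc help with a cold?"  # example question from the study
evidence = (
    "A trial reported that zinc lozenges shortened colds by one day."
)  # illustrative snippet, not taken from the study

# Condition 1: question only (80 percent accuracy reported)
answer_question_only = ask_chatgpt(f"{question} Answer Yes or No.")

# Condition 2: question plus supporting or contrary evidence (63 percent)
answer_with_evidence = ask_chatgpt(
    f"Evidence: {evidence}\n{question} Answer Yes or No."
)

# Condition 3: the model may also answer "Unsure" (28 percent)
answer_with_unsure = ask_chatgpt(
    f"Evidence: {evidence}\n{question} Answer Yes, No or Unsure."
)

print(answer_question_only, answer_with_evidence, answer_with_unsure)
```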

Research aims to shed light on the risks of AI

According to the researchers, the results contradict the popular belief that supplying evidence in the prompt improves the accuracy of the answers. Bevan Koopman, CSIRO research director and professor at UQ, admitted that the scientists are not sure why this happens.

However, despite the risks, people are using tools like ChatGPT to search for health information. The team wants to continue its research to inform the public about the risks of AI and to help improve the accuracy of answers. Koopman says:

While LLMs have the potential to significantly improve the way people access information, we need more research to understand where they are effective and where they are not.

According to the researchers, the study also shows that the interaction between the language model and the search component is still poorly understood and difficult to control, which leads to the generation of inaccurate health information. The next research steps will examine how the public uses health information generated by artificial intelligence.

Source: https://www.basicthinking.de/blog/2024/04/05/dr-chatgpt-selbstdiagnose-ki/