ERS Conference Highlights: ChatGPT Surpassed Trainee Doctors in Assessing Complex Respiratory Illness in Children

Published On 2024-09-10 03:15 GMT | Updated On 2024-09-10 09:31 GMT

The chatbot ChatGPT performed better than trainee doctors in assessing complex cases of respiratory disease in areas such as cystic fibrosis, asthma, and chest infections in a study presented at the European Respiratory Society (ERS) Congress in Vienna, Austria.

The study also showed that Google’s chatbot Bard performed better than trainees in some aspects and Microsoft’s Bing chatbot performed as well as trainees. 


The research suggests that these large language models (LLMs) could be used to support trainee doctors, nurses, and general practitioners to triage patients more quickly and ease pressure on health services.

The study was presented by Dr Manjith Narayanan, a consultant in paediatric pulmonology at the Royal Hospital for Children and Young People, Edinburgh and honorary senior clinical lecturer at the University of Edinburgh, UK. He said: “Large language models, like ChatGPT, have come into prominence in the last year and a half with their ability to seemingly understand natural language and provide responses that can adequately simulate a human-like conversation. These tools have several potential applications in medicine. My motivation to carry out this research was to assess how well LLMs are able to assist clinicians in real life.”

To investigate this, Dr Narayanan used clinical scenarios that occur frequently in paediatric respiratory medicine. The scenarios were provided by six other experts in paediatric respiratory medicine and covered topics such as cystic fibrosis, asthma, sleep-disordered breathing, breathlessness and chest infections. All were scenarios with no obvious diagnosis and no published evidence, guidelines or expert consensus pointing to a specific diagnosis or plan.

Ten trainee doctors with less than four months of clinical experience in paediatrics were given one hour, during which they could use the internet but not any chatbots, to solve each scenario with a descriptive answer of 200 to 400 words. Each scenario was also presented to the three chatbots.

All the responses were scored by six paediatric respiratory experts for correctness, comprehensiveness, usefulness, plausibility, and coherence. The experts were also asked to say whether they thought each response was human- or chatbot-generated and to give each response an overall score out of nine.

Solutions provided by ChatGPT version 3.5 scored an average of seven out of nine overall and were believed to be more human-like than responses from the other chatbots. Bard scored an average of six out of nine and was rated as more ‘coherent’ than trainee doctors, but in other respects was no better or worse than trainee doctors. Bing scored an average of four out of nine – the same as trainee doctors overall. Experts reliably identified Bing and Bard responses as non-human.

Dr Narayanan said: “Our study is the first, to our knowledge, to test LLMs against trainee doctors in situations that reflect real-life clinical practice. We did this by allowing the trainee doctors to have full access to resources available on the internet, as they would in real life.

“This moves the focus away from testing memory, where there is a clear advantage for LLMs. Therefore, this study shows us another way we could be using LLMs and how close we are to regular day-to-day clinical application.

“We have not directly tested how LLMs would work in patient-facing roles. However, they could be used by triage nurses, trainee doctors and primary care physicians, who are often the first to review a patient.”

Reference: "Clinical scenarios in paediatric pulmonology: Can large language models fare better than trainee doctors?", by Manjith Narayanan et al; presented in the session "Respiratory care in the digital age: innovative applications and their evidence" at 09:30–10:45 CEST on Monday 9 September 2024.




