ChatGPT matches doctors in suggesting likely diagnoses in the emergency medicine department
Overview
The artificial intelligence chatbot ChatGPT performed as well as a trained doctor in suggesting likely diagnoses for patients being assessed in emergency medicine departments, in a pilot study to be presented at the European Emergency Medicine Congress, which starts on Saturday.
Researchers say a lot more work is needed, but their findings suggest the technology could one day support doctors working in emergency medicine, potentially leading to shorter waiting times for patients.
The research, also published this month in the Annals of Emergency Medicine, included anonymized details of 30 patients treated at Jeroen Bosch Hospital’s emergency department in 2022. The researchers entered physicians’ notes on patients’ signs, symptoms, and physical examinations into two versions of ChatGPT, and also provided the chatbot with laboratory results, such as blood and urine analyses. For each case, they compared the shortlist of likely diagnoses generated by the chatbot with the shortlist made by emergency medicine doctors and with the patient’s correct diagnosis.
They found substantial overlap (around 60%) between the shortlists generated by ChatGPT and those of the doctors. Doctors included the correct diagnosis in their top five likely diagnoses in 87% of cases, compared with 97% for ChatGPT version 3.5 and 87% for version 4.0.
Reference: ChatGPT and Generating a Differential Diagnosis Early in an Emergency Department Presentation, Annals of Emergency Medicine, DOI: 10.1016/j.annemergmed.2023.08.003
Speakers
Isra Zaman
B.Sc Life Sciences, M.Sc Biotechnology, B.Ed