ChatGPT-4 Struggles Interpreting Radiological Anatomy in FRCR Part 1 Mock Exam, Study Finds
India: As artificial intelligence finds a growing role in hospitals to improve patient outcomes, Indian doctors conducted a study evaluating the efficacy of ChatGPT-4 in answering radiological anatomy questions and found that it underperformed in interpreting normal radiological anatomy. The study was published in the Indian Journal of Radiology and Imaging.
Dr. Pradosh Kumar Sarangi, MD, PDF, EDiR, Department of Radiodiagnosis, AIIMS Deoghar, Jharkhand, and the study's author, explained to Medical Dialogues, "The study evaluated ChatGPT-4's performance in identifying radiological anatomy in mock Fellowship of the Royal College of Radiologists (FRCR) Part 1 examination questions. While the AI demonstrated 100% accuracy in identifying imaging modalities, it struggled with anatomical structure identification (4-7.5% accuracy) and sidedness (~40%). This highlights its potential as a supplementary educational tool but underscores significant limitations for clinical use."
Anatomical knowledge is essential in radiology, providing the foundation for accurate image interpretation and diagnosis across imaging modalities like X-rays, CT scans, MRI, and ultrasound. As AI advances, its role in medical education and exam preparation has expanded. OpenAI's ChatGPT-4, known for its capabilities in natural language processing, has shown promise across various medical fields. The FRCR Part 1 examination, a key milestone for radiology trainees, tests their knowledge of radiological anatomy and other core topics.
Dr. Sarangi and his team conducted a study to assess ChatGPT-4's ability to identify radiological anatomy in line with the FRCR Part 1 pattern. They used 100 mock radiological anatomy questions from a free website mimicking the exam. ChatGPT-4 was tested both with and without context about the exam instructions. The main question was: "Identify the structure indicated by the arrow(s)." Responses were compared to the correct answers, with two expert radiologists (>5 and 30 years of experience, respectively) rating the explanations.
Four scores were assessed: correctness, sidedness, modality identification, and approximation. The approximation score credits partial correctness when the identified structure is present in the image but is not the one indicated by the arrow.
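The four-metric scheme described above can be sketched in code. This is a minimal illustration, not the study's actual evaluation script; the class, function, and field names (and the toy data) are assumptions for clarity.

```python
# Hypothetical sketch of the four-metric scoring scheme described in the study.
# Names and sample data are illustrative only, not taken from the paper.
from dataclasses import dataclass

@dataclass
class Response:
    correct: bool           # exact structure indicated by the arrow named
    side_correct: bool      # laterality (left/right) identified correctly
    modality_correct: bool  # imaging modality (X-ray, CT, MRI, ...) identified
    approximate: bool       # a structure present in the image named, but not the arrowed one

def score(responses):
    """Return each metric as a percentage of all graded responses."""
    n = len(responses)
    return {
        "correctness": 100 * sum(r.correct for r in responses) / n,
        "sidedness": 100 * sum(r.side_correct for r in responses) / n,
        "modality": 100 * sum(r.modality_correct for r in responses) / n,
        "approximation": 100 * sum(r.approximate for r in responses) / n,
    }

# Toy run with four mock responses
demo = [
    Response(False, True, True, True),
    Response(True, True, True, True),
    Response(False, False, True, False),
    Response(False, True, True, True),
]
print(score(demo))
```

On this toy data the modality score is 100% while correctness is far lower, mirroring the pattern the study reports: modality recognition is easy for the model, pinpointing the arrowed structure is not.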
The study revealed the following findings:
- ChatGPT-4 performed poorly in both testing conditions, with correctness scores of 4% without context and 7.5% with context.
- It identified the imaging modality with 100% accuracy and scored over 50% on the approximation metric by recognizing structures present in the image but not indicated by the arrow.
- ChatGPT-4 had difficulty identifying the correct side of the structure, scoring around 42% without context and 40% with context. Only 32% of the responses were consistent across both settings.
Dr. Sarangi explained that “ChatGPT-4’s current limitations in accuracy and reliability restrict its utility for high-stakes clinical applications, emphasizing the need for improved training on domain-specific datasets.”
Discussing the future evolution of AI in radiology, Dr. Sarangi added, "AI in radiology is likely to evolve into hybrid systems integrating visual recognition with language models, improving interpretative skills for complex imaging. Future advancements could include real-time AI support for identifying anatomical structures during imaging reviews, improved clinical decision support tools, and personalized learning platforms for radiology students."
Reference: Sarangi, P. K., Datta, S., Panda, B. B., Panda, S., & Mondal, H. (2024). Evaluating ChatGPT-4's performance in identifying radiological anatomy in FRCR Part 1 examination questions. Indian Journal of Radiology and Imaging. https://doi.org/10.1055/s-0044-1792040.
Dr. Garima Soni holds a BDS (Bachelor of Dental Surgery) from Government Dental College, Raipur, Chhattisgarh, and an MDS (Master of Dental Surgery) in Orthodontics and Dentofacial Orthopedics from Maitri College of Dentistry and Research Centre. At Medical Dialogues, she focuses on dental news and on fact-checking medical and dental misinformation and disinformation.