ChatGPT may produce structured, summarized radiology reports for pancreatic ductal adenocarcinoma: Study
Canada: In a groundbreaking development, large language models (LLMs) are poised to transform the landscape of pancreatic cancer diagnosis and treatment planning. Recent research has demonstrated their efficacy in generating automated synoptic reports and accurately categorizing resectability status from the text of original radiology reports.
In their study published in Radiology, the researchers revealed that GPT-4 outperforms GPT-3.5 in creating structured, summarized radiology reports for pancreatic ductal adenocarcinoma (PDAC). They found that GPT-4 created near-perfect PDAC synoptic reports from original reports, that GPT-4 with chain-of-thought prompting achieved high accuracy in categorizing resectability, and that surgeons were more efficient and accurate when using the AI-generated reports.
"The study results are good news for clinicians and patients, as the AI tool could improve surgical decision-making," Rajesh Bhayana, University of Toronto, ON, Canada, and colleagues wrote.
Pancreatic cancer presents a formidable challenge due to its aggressive nature and often late-stage diagnosis. Accurate assessment of tumor resectability—whether a tumor can be surgically removed—is crucial for determining treatment strategies and patient outcomes. Traditionally, this assessment involves meticulous analysis of radiological scans by trained specialists.
Structured radiology reports for pancreatic ductal adenocarcinoma improve surgical decision-making over free-text reports, but radiologist adoption is variable. Resectability criteria are applied inconsistently. Considering this, the research team aimed to evaluate the performance of LLMs in automatically creating PDAC synoptic reports from original reports and to explore performance in categorizing tumor resectability.
For this purpose, the researchers conducted an institutional review board–approved retrospective study comprising 180 consecutive PDAC staging CT reports on patients referred to the authors’ European Society for Medical Oncology–designated cancer center from January to December 2018. Two radiologists reviewed the reports to establish the reference standard for 14 key findings and the National Comprehensive Cancer Network (NCCN) resectability category.
GPT-3.5 and GPT-4, accessed between September 18 and 29, 2023, were tasked with generating synoptic reports from the original reports using the same 14 features; their performance was assessed in terms of recall, precision, and F1 score against the radiologist-established reference standard. Three prompting strategies (default knowledge, in-context knowledge, and chain-of-thought) were used with both LLMs to categorize resectability.
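For readers less familiar with these metrics: precision is the fraction of extracted findings that are correct, recall is the fraction of reference-standard findings the model actually extracted, and F1 is their harmonic mean. The sketch below shows how a per-feature F1 might be computed; the feature name and counts are illustrative placeholders, not data from the study.

```python
# Minimal sketch of per-feature extraction metrics (precision, recall, F1).
# The counts below are illustrative placeholders, not the study's data.

def precision_recall_f1(tp: int, fp: int, fn: int) -> tuple[float, float, float]:
    """Compute precision, recall, and F1 from raw counts."""
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f1 = 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0
    return precision, recall, f1

# Hypothetical counts for one of the 14 synoptic-report features
# (e.g., superior mesenteric artery involvement).
p, r, f1 = precision_recall_f1(tp=160, fp=0, fn=1)
print(f"precision={p:.3f} recall={r:.3f} F1={f1:.3f}")
```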
Hepatopancreaticobiliary surgeons assessed the original and artificial intelligence (AI)-generated reports to evaluate resectability, and their accuracy and review times were compared.
The researchers reported the following findings:
- GPT-4 outperformed GPT-3.5 in creating synoptic reports (F1 score: 0.997 vs 0.967, respectively).
- Compared with GPT-3.5, GPT-4 achieved equal or higher F1 scores for all 14 extracted features. GPT-4 had higher precision than GPT-3.5 for extracting superior mesenteric artery involvement (100% vs 88.8%, respectively).
- For categorizing resectability, GPT-4 outperformed GPT-3.5 for each prompting strategy.
- For GPT-4, chain-of-thought prompting was the most accurate, outperforming in-context knowledge prompting (92% vs 83%, respectively), which in turn outperformed the default knowledge strategy (83% vs 67%); a sketch of this prompting approach appears after this list.
- Surgeons were more accurate in categorizing resectability using AI-generated reports than original reports (83% vs 76%, respectively), while spending 58% less time on each report.
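Chain-of-thought prompting, referenced in the list above, asks the model to reason through the relevant criteria step by step before committing to a category. Below is a minimal, hypothetical sketch of what such a resectability prompt might look like, assuming the OpenAI chat completions API; the prompt wording, model name, and workflow are illustrative assumptions, not the study's actual protocol.

```python
# Hypothetical sketch of chain-of-thought prompting for resectability
# categorization, assuming the OpenAI chat completions API.
# The prompt wording and model name are illustrative, not the study's.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

def categorize_resectability(report_text: str) -> str:
    prompt = (
        "You are given a CT staging report for pancreatic ductal "
        "adenocarcinoma.\n"
        "Using NCCN resectability criteria, reason step by step about "
        "arterial and venous involvement, then give a final answer as one "
        "of: resectable, borderline resectable, or unresectable.\n\n"
        f"Report:\n{report_text}"
    )
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
        temperature=0,  # deterministic output aids reproducibility
    )
    return response.choices[0].message.content

# Usage with a hypothetical report snippet:
# print(categorize_resectability("Mass in the pancreatic head abutting the SMV..."))
```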
Overall, the findings showed that GPT-4 created near-perfect PDAC synoptic reports from original reports, that GPT-4 with chain-of-thought prompting achieved high accuracy in resectability categorization, and that surgeons were more efficient and accurate when using AI-generated reports.
Reference: Bhayana R, et al. Radiology. https://doi.org/10.1148/radiol.233117