Both AI and radiologists can get fooled by tampered medical images, study finds
USA: A recent study published in Nature Communications highlights the need for continued research on safety issues related to artificial intelligence (AI) models and for safeguards against adversarial attacks. The study found that an AI diagnosis model was fooled by more than two-thirds of tampered breast images, showing how vulnerable such models can be to cyberattacks.
In recent years, active efforts have been made to advance AI model development and clinical translation, yet new safety issues of AI models are emerging that have so far received little research attention.
Shandong Wu, Department of Radiology, University of Pittsburgh, Pittsburgh, PA, USA, and colleagues performed a study to investigate the behavior of an AI diagnosis model when presented with adversarial images generated by generative adversarial network (GAN) models, and to evaluate how well human experts can visually identify such adversarial images.
The GAN model makes intentional modifications to the diagnosis-sensitive contents of mammogram images used in deep learning-based computer-aided diagnosis (CAD) of breast cancer.
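For illustration only, the toy PyTorch sketch below shows the general idea of training a generator to perturb a mammogram so that a frozen CAD classifier flips its diagnosis. It is not the authors' code: the networks, data, loss weights, and the simple additive perturbation are all placeholder assumptions, and the discriminator a full GAN would use to keep the modified image realistic is omitted for brevity.

```python
# Hypothetical sketch: train a generator to perturb a mammogram so that a
# frozen CAD classifier outputs the wrong diagnosis. All sizes, data, and
# losses are illustrative placeholders, not the study's actual setup.
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Produces a small additive perturbation for a 1-channel mammogram."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1), nn.Tanh(),
        )
    def forward(self, x):
        # Keep the perturbation small so the image stays visually plausible.
        return torch.clamp(x + 0.05 * self.net(x), 0.0, 1.0)

class ToyCAD(nn.Module):
    """Stand-in for a frozen deep-learning CAD model (benign vs malignant)."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1)
        )
        self.head = nn.Linear(8, 2)
    def forward(self, x):
        return self.head(self.features(x).flatten(1))

cad = ToyCAD().eval()                  # attacked model: weights frozen
for p in cad.parameters():
    p.requires_grad_(False)

gen = Generator()
opt = torch.optim.Adam(gen.parameters(), lr=1e-3)
ce = nn.CrossEntropyLoss()

# Fake batch of "mammograms" with labels (illustrative only).
images = torch.rand(4, 1, 64, 64)
true_labels = torch.randint(0, 2, (4,))
target_labels = 1 - true_labels        # attacker wants the opposite diagnosis

for step in range(200):
    adv = gen(images)
    # Attack loss: push the CAD model toward the wrong class while keeping
    # the adversarial image close to the original. A real GAN attack would
    # add a discriminator term here to enforce realism.
    loss = ce(cad(adv), target_labels) + 10.0 * (adv - images).abs().mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

with torch.no_grad():
    flipped = (cad(gen(images)).argmax(1) != cad(images).argmax(1)).float().mean()
print(f"Fraction of toy predictions flipped: {flipped:.2f}")
```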
In the authors' experiments, the adversarial samples fooled the AI-CAD model into outputting a wrong diagnosis on 69.1% of the cases it had initially classified correctly. Five breast imaging radiologists visually identified 29%-71% of the adversarial samples.
"Our experiments showed that highly plausible adversarial samples can be generated on mammogram images by advanced GAN algorithms, and they can induce a deep learning AI model to output a wrong diagnosis of breast cancer," wrote the authors.
"Certified human radiologists can identify such adversarial samples, but they may not be reliable to safely detect all potential adversarial samples, where an education process showed promise to improve their performance in recognizing the adversarial images," they explained. "This poses an imperative need for continuing research on the medical AI model's safety issues and for developing potential defensive solutions against adversarial attacks"
Reference:
Zhou, Q., Zuley, M., Guo, Y. et al. A machine and human reader study on AI diagnosis model safety under attacks of adversarial images. Nat Commun 12, 7281 (2021). https://doi.org/10.1038/s41467-021-27577-x