Both AI and radiologists can get fooled by tampered medical images, study finds

Written By: Medha Baranwal
Medically Reviewed By: Dr. Kamal Kant Kohli
Published On 2021-12-17 03:30 GMT | Updated On 2021-12-17 03:30 GMT

USA: A recent study published in Nature Communications highlights the need for continuing research on safety issues related to artificial intelligence (AI) models and for safety measures in AI. The study found that an AI diagnosis model was fooled by over two-thirds of fake breast images, underscoring its vulnerability to cyberattacks.

In recent years, active efforts have been made to advance AI model development and clinical translation. At the same time, new safety issues of AI models are emerging, yet little research has been done in this direction.

Shandong Wu, Department of Radiology, University of Pittsburgh, Pittsburgh, PA, USA, and colleagues performed a study to investigate the behavior of an AI diagnosis model under adversarial images generated by Generative Adversarial Network (GAN) models and to evaluate how well human experts could visually identify potential adversarial images.

The GAN model makes intentional modifications to the diagnosis-sensitive contents of mammogram images used in deep learning-based computer-aided diagnosis (CAD) of breast cancer.
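For readers curious about how adversarial images work mechanically, the short Python sketch below perturbs a toy image so that a toy classifier's loss on the true label increases, using the classic fast gradient sign method (FGSM). This is only an illustration of the general adversarial-example idea; the study itself generated adversarial mammograms with GAN models, a more sophisticated approach, and the toy network, label encoding, and epsilon value here are all hypothetical.

import torch
import torch.nn as nn

# Toy stand-in for an AI-CAD classifier (hypothetical, untrained).
# Outputs two scores, assumed here to mean [benign, malignant].
model = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(8, 2),
)
model.eval()

def fgsm_adversarial(image, true_label, epsilon=0.02):
    # Nudge each pixel in the direction that increases the loss on the
    # true label (fast gradient sign method), then clip to a valid range.
    image = image.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(image), true_label)
    loss.backward()
    return (image + epsilon * image.grad.sign()).clamp(0.0, 1.0).detach()

# Toy single-channel "mammogram" patch and its (assumed) true label.
x = torch.rand(1, 1, 64, 64)
y = torch.tensor([1])  # 1 = malignant in this hypothetical encoding

x_adv = fgsm_adversarial(x, y)
print("clean prediction:      ", model(x).argmax(dim=1).item())
print("adversarial prediction:", model(x_adv).argmax(dim=1).item())

With an untrained toy network the prediction will not always flip; the sketch only shows the mechanics of crafting small, targeted changes that push a model toward a wrong output, which the GAN-generated mammograms in the study achieved far more convincingly.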

In the authors' experiments, the adversarial samples fooled the AI-CAD model into outputting a wrong diagnosis on 69.1% of the cases it had initially classified correctly. Five breast imaging radiologists visually identified 29%–71% of the adversarial samples.

"Our experiments showed that highly plausible adversarial samples can be generated on mammogram images by advanced GAN algorithms, and they can induce a deep learning AI model to output a wrong diagnosis of breast cancer," wrote the authors.

"Certified human radiologists can identify such adversarial samples, but they may not be reliable to safely detect all potential adversarial samples, where an education process showed promise to improve their performance in recognizing the adversarial images," they explained. "This poses an imperative need for continuing research on the medical AI model's safety issues and for developing potential defensive solutions against adversarial attacks"

Reference:

Zhou, Q., Zuley, M., Guo, Y. et al. A machine and human reader study on AI diagnosis model safety under attacks of adversarial images. Nat Commun 12, 7281 (2021). https://doi.org/10.1038/s41467-021-27577-x

Article Source: Nature Communications

