Both AI and radiologists can get fooled by tampered medical images, study finds
Written By: Medha Baranwal
Medically Reviewed By: Dr. Kamal Kant Kohli
Published On 2021-12-17 03:30 GMT | Update On 2021-12-17 03:30 GMT
USA: A recent study published in Nature Communications highlights the need for continued research into the safety of artificial intelligence (AI) models and for safeguards around their clinical use. The study found that an AI model was fooled by more than two-thirds of fake breast images, showing that such models are vulnerable to cyberattacks.
In recent years, active efforts have been made to advance AI model development and clinical translation. At the same time, new safety issues with AI models are emerging, yet little research has been done in this direction.