Evidence gaps remain for AI eye imaging devices approved for patient care

Published on 2025-08-08 15:30 GMT | Updated on 2025-08-08 15:30 GMT

Regulator-approved AI models used in eye care vary widely in the evidence they provide for clinical performance and lack transparency about their training data, including details of age, sex and ethnicity, according to a new review led by researchers at UCL (University College London) and Moorfields Eye Hospital.

The analysis, published in the journal npj Digital Medicine, examined 36 regulator-approved “artificial intelligence as a medical device” (AIaMD) tools in Europe, Australia and the US, and found concerning trends.

Of the devices reviewed, 19% had no published peer-reviewed data on accuracy or outcomes. Across the 131 clinical evaluations available for the remaining devices, only 52% of studies reported patient age, 51% reported sex, and just 21% reported ethnicity. The review also highlights that most validation relied on archival image sets with limited diversity, inadequate reporting of basic demographic characteristics, and uneven geographical distribution.


Very few studies compared the AI tools head-to-head with each other (8%) or against the current standard of care provided by human doctors (22%). Strikingly, only 11 of the 131 studies (8%) were interventional – the kind that tests devices in real-life clinical settings where outputs affect patient care – meaning real-world validation remains scarce.

More than two-thirds of the AI tools target diabetic retinopathy in a screening context, either singly or together with glaucoma and macular degeneration, while other common sight-threatening conditions and settings remain largely unaddressed.

Almost all of the devices examined (97%) are approved in the European Union, but only 22% have Australian clearance and just 8% are authorised in the US. This uneven regulatory landscape means devices cleared on one continent may not meet the standards applied elsewhere.

The authors argue these shortcomings must be addressed. They call for rigorous, transparent evidence and for data that meet the FAIR principles of Findability, Accessibility, Interoperability, and Reusability, since a lack of transparency can conceal biases.

Lead author Dr Ariel Ong (UCL Institute of Ophthalmology and Moorfields Eye Hospital NHS Foundation Trust) said: “AI has the potential to help fill the global gap in eye care. In many parts of the world, there simply aren’t enough eye specialists, leading to delayed diagnoses and preventable vision loss. AI screening could help identify disease earlier and support clinical management, but only if the AI is built on solid foundations.

“We must hold AI tools to the same high standards of evidence as any medical test or drug. Facilitating greater transparency from manufacturers, validation across diverse populations, and high-quality interventional studies with implementation-focused outcomes are key steps towards building user confidence and supporting clinical integration.”

Senior author Jeffry Hogg, from the University of Birmingham, said: “Our review found that the evidence available to evaluate the effectiveness of individual AIaMDs is extremely variable, with limited data on how these devices work in the real world. Greater emphasis should be placed on accurate and transparent reporting of datasets. This is critical to ensuring devices work equally well for all people, as some populations may be underrepresented in the training data.”

In practical terms, the study suggests several next steps. The authors encourage manufacturers and regulators to adopt standardised reporting – for example, publishing detailed “model cards” or trial results at each stage of development. They note that regulatory frameworks for AIaMDs may benefit from a more standardised approach to evidence reporting, which would give clarity to both device developers and end users. The review also highlights new guidance, such as the EU AI Act, that could raise the bar for data diversity and real-world trials.
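To make the idea concrete, a machine-readable model card could be as simple as a structured record published alongside each device release. The sketch below is purely illustrative and assumes Python 3.9+; the field names (device_name, demographics_reported and so on) and all values are hypothetical, not drawn from the study or from any regulator's template.

```python
from dataclasses import dataclass

@dataclass
class ModelCard:
    """Minimal, hypothetical model card for an ophthalmic AIaMD.

    Field names are illustrative only; they are not taken from the
    study or from any regulatory framework.
    """
    device_name: str
    intended_use: str                      # e.g. diabetic retinopathy screening
    training_data_sources: list[str]       # named datasets and access terms
    demographics_reported: dict[str, str]  # age, sex and ethnicity coverage
    validation_studies: list[str]          # DOIs of peer-reviewed evaluations
    comparator: str                        # human graders, other devices, or none
    regulatory_clearances: list[str]       # e.g. CE mark (EU), TGA (Australia)

# Example instance (all values are placeholders, not real device data).
card = ModelCard(
    device_name="ExampleRetinaAI",
    intended_use="Diabetic retinopathy screening from fundus photographs",
    training_data_sources=["Dataset A (public)", "Dataset B (restricted access)"],
    demographics_reported={
        "age": "reported for 100% of images",
        "sex": "reported for 100% of images",
        "ethnicity": "reported for 60% of images",
    },
    validation_studies=["doi:10.1234/placeholder"],
    comparator="Human graders (reference standard)",
    regulatory_clearances=["CE mark (EU MDR)"],
)
print(card.device_name, "-", card.intended_use)
```

Even a minimal record like this, published at each stage of development, would let clinicians and regulators check at a glance whether demographic reporting and peer-reviewed validation exist for a given device.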

The researchers hope their work will inform policymakers and industry leaders, helping to ensure that AI in eye care is both equitable and effective. Robust oversight, they argue, will help deliver on the promise of faster, more accurate eye disease detection without leaving any patient group behind.

Reference:

Ong, A.Y., Taribagil, P., Sevgi, M. et al. A scoping review of artificial intelligence as a medical device for ophthalmic image analysis in Europe, Australia and America. npj Digit. Med. 8, 323 (2025). https://doi.org/10.1038/s41746-025-01726-8

Article Source: npj Digital Medicine
