Regulatory Considerations for Artificial Intelligence in Healthcare: A WHO Perspective

Written By: Dr Onkar Mittal
Published On 2026-04-18 11:25 GMT | Updated On 2026-04-18 11:25 GMT

The mission of the World Health Organization (WHO) – to promote health, keep the world safe and serve the vulnerable – is articulated in its Global Strategy on Digital Health 2020–2025. At the heart of this strategy, WHO aims to improve health for everyone, everywhere by accelerating the development and adoption of appropriate, accessible, affordable, scalable and sustainable person-centric digital health solutions, including the infrastructure and applications needed to prevent, detect and respond to epidemics and pandemics. Many international organizations and global players are contributing to this area along with WHO.

These international and regional organizations and national authorities collectively recognize the potential of Artificial Intelligence (AI) to enhance health outcomes by improving clinical trials, medical diagnosis and treatment, self-management of care and personalized care, as well as by creating more evidence-based knowledge, skills and competencies for professionals to support health care. Furthermore, with the increasing availability of health care data and the rapid progress of analytics techniques, AI has the potential to transform the health sector to meet a variety of stakeholders’ needs in health care and therapeutic development.

In order to facilitate the safe and appropriate use of AI technologies for the development of AI systems in health care, the WHO and the International Telecommunication Union (ITU) have established a Focus Group on AI for Health (FG-AI4H). To support its work, FG-AI4H created several working groups, including a Working Group on Regulatory Considerations (WG-RC) on AI for Health. The WG-RC consists of members representing multiple stakeholders – including regulatory authorities, policy-makers, academia and industry – who explored regulatory and health technology assessment concepts and emerging “good practices” for the development and use of AI in health care and therapeutic development. The work of the WG-RC represents a multidisciplinary, international effort to increase dialogue and examine key concepts for the use of AI in health care.

The WG-RC has produced an overview of regulatory considerations on AI for health that covers six general topic areas: (i) documentation and transparency, (ii) the total product lifecycle approach and risk management, (iii) intended use and analytical and clinical validation, (iv) data quality, (v) privacy and data protection, and (vi) engagement and collaboration. It offers a discussion of key regulatory considerations and a resource for all relevant stakeholders – including developers who are exploring and developing AI systems, regulators and policy-makers who are in the process of identifying approaches to manage and facilitate AI systems, manufacturers who design and develop AI-enabled medical devices, and health practitioners who deploy and use such medical devices and AI systems. Consequently, the WG-RC recommends that stakeholders take into account the following considerations as they continue to develop frameworks and best practices for the use of AI in health care and therapeutic development:

REGULATORY CONSIDERATIONS ON ARTIFICIAL INTELLIGENCE FOR HEALTH

Documentation and transparency: The intended medical purpose and the development process – such as the selection and use of datasets, reference standards, parameters, metrics, deviations from original plans and updates during the phases of development – should be pre-specified and documented in a manner that allows the development steps to be traced as appropriate. A risk-based approach should also be considered for the level of documentation and record-keeping used in the development and validation of AI systems.

Risk management and AI systems development lifecycle approaches: A total product lifecycle approach should be considered throughout all phases in the life of an AI system, namely pre-market development management, post-market surveillance and change management. In addition, it is essential to consider a risk management approach that addresses the risks associated with AI systems, such as cybersecurity threats and vulnerabilities, underfitting and algorithmic bias.

Intended use, and analytical and clinical validation: Initially, providing transparent documentation of the intended use of the AI system should be considered. Details of the training dataset composition underpinning an AI system – including size, setting and population, input and output data and demographic composition – should be transparently documented and provided to users. In addition, it is key to consider demonstrating performance beyond the training and testing data through external analytical validation in an independent dataset. This external validation dataset should be representative of the population and setting in which the AI system is intended to be deployed and should be independent of the dataset used for developing the AI model during training and testing. Transparent documentation of the external dataset and performance metrics should be provided. Furthermore, it is important to consider a graded set of requirements for clinical validation based on risk. Randomized clinical trials are the gold standard for evaluation of comparative clinical performance and could be appropriate for the highest-risk tools or where the highest standard of evidence is required. In other situations, prospective validation can be considered in a real-world deployment and implementation trial that includes a relevant, accepted comparator. Finally, a period of more intense post-deployment monitoring should be considered through post-market surveillance and market surveillance for AI systems.
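To make the idea of external analytical validation concrete, the sketch below (an illustrative assumption, not part of the WHO guidance) computes standard performance metrics for a binary AI classifier on an external dataset that was never seen during training or internal testing. The dataset, labels and metric choices are hypothetical:

```python
# Hypothetical sketch: external analytical validation of a binary AI classifier.
# Metrics are computed only on an external dataset that is independent of the
# data used for training and internal testing.

def validation_metrics(y_true, y_pred):
    """Return sensitivity, specificity and accuracy for binary labels."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    return {
        "sensitivity": tp / (tp + fn) if (tp + fn) else float("nan"),
        "specificity": tn / (tn + fp) if (tn + fp) else float("nan"),
        "accuracy": (tp + tn) / len(y_true),
    }

# Labels from a hypothetical external dataset, collected in the intended
# deployment setting and population.
external_truth = [1, 0, 1, 1, 0, 0, 1, 0]
external_preds = [1, 0, 1, 0, 0, 1, 1, 0]

print(validation_metrics(external_truth, external_preds))
```

In line with the transparency considerations above, such metrics would be documented and reported alongside a description of the external dataset's size, setting and demographic composition.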

Data quality: Developers should consider whether the available data are of sufficient quality to support the development of the AI system for its intended purpose. Furthermore, developers should consider deploying rigorous pre-release evaluations for AI systems to ensure that they will not amplify issues such as biases and errors. Careful design and prompt troubleshooting can help identify data quality issues early and can prevent or mitigate possible resulting harm. Stakeholders should also consider mitigating data quality issues and the associated risks that arise in health-care data, as well as continue to work to create data ecosystems that facilitate the sharing of good-quality data sources.
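As a minimal sketch of what a pre-release data-quality evaluation might look like, the code below checks a tabular health dataset (represented as a list of record dictionaries) for excessive missing values and for demographic imbalance that could introduce algorithmic bias. The field names and thresholds are illustrative assumptions, not WHO requirements:

```python
# Hypothetical sketch: simple pre-release data-quality checks for a tabular
# health dataset. Field names and thresholds are illustrative assumptions.

def data_quality_report(records, required_fields, max_missing_rate=0.05):
    """Return a list of human-readable data-quality issues."""
    issues = []
    n = len(records)
    # Check each required field for an excessive share of missing values.
    for field in required_fields:
        missing = sum(1 for r in records if r.get(field) is None)
        if n and missing / n > max_missing_rate:
            issues.append(f"{field}: {missing}/{n} values missing")
    # Flag demographic imbalance that could introduce algorithmic bias.
    sexes = [r.get("sex") for r in records if r.get("sex") is not None]
    for group in set(sexes):
        share = sexes.count(group) / len(sexes)
        if share < 0.2:
            issues.append(f"sex={group}: only {share:.0%} of records")
    return issues

# A tiny hypothetical dataset with one missing laboratory value.
records = [
    {"age": 54, "sex": "F", "glucose": 5.1},
    {"age": 61, "sex": "M", "glucose": None},
    {"age": 47, "sex": "F", "glucose": 6.3},
    {"age": 70, "sex": "F", "glucose": 5.8},
]
print(data_quality_report(records, ["age", "sex", "glucose"]))
```

In practice such checks would be far more extensive (value ranges, label consistency, provenance), but running even simple automated reports before release supports the early troubleshooting the consideration above calls for.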

Privacy and data protection: Privacy and data protection should be considered during the design and deployment of AI systems. Early in the development process, developers should consider gaining a good understanding of applicable data protection regulations and privacy laws and should ensure that the development process meets or exceeds such legal requirements. It is also important to consider implementing a compliance programme that addresses risks and ensures that the privacy and cybersecurity practices take into account potential harm as well as the enforcement environment.

Engagement and collaboration: During development of the AI innovation and deployment roadmap it is important to consider the development of accessible and informative platforms that facilitate engagement and collaboration among key stakeholders, where applicable and appropriate. It is fundamental to consider streamlining the oversight process for AI regulation through such engagement and collaboration in order to accelerate practice-changing advances in AI.

Finally, the WG-RC has provided a forum for regulators and subject matter experts to discuss regulatory considerations for the use of AI technologies and development of AI systems for health and medical purposes. The WG-RC recognizes that the AI landscape is evolving rapidly and that the considerations in this deliverable may require expansion as technology and its uses develop. The working group recommends that stakeholders, including regulators and developers, continue to engage and that the community at large works towards shared understanding and mutual learning.

In addition, established national and international groups, such as the International Medical Device Regulators Forum (IMDRF) and the International Coalition of Medicines Regulatory Authorities (ICMRA), should continue to work on topics of AI for potential regulatory convergence and harmonization.


Reference:

1. World Health Organization. Regulatory Considerations on Artificial Intelligence for Health. Geneva: WHO; 2023.
