From Software to Practitioner: Should Artificial Intelligence (AI) Be Licensed Like Clinicians? A November 2025 JAMA Internal Medicine Viewpoint Explains

Written By: Dr Jeegar Dattani
Published On 2025-11-25 05:15 GMT | Updated On 2025-11-25 07:14 GMT

Generative AI systems are rapidly advancing, rendering existing regulatory frameworks insufficient for overseeing their broad clinical capabilities. General-purpose large language models are already being used in practice and likely meet the definition of medical devices, yet they remain largely unregulated, concluded a recent viewpoint published in November 2025 in JAMA Internal Medicine.


Amid ongoing uncertainty around AI regulation, the authors argue that a licensure framework offers a more agile, forward-looking approach. Such a framework, modeled on clinician licensure, is necessary to ensure that innovation in clinical AI scales with accountability.

The Regulatory Mismatch in Clinical AI

The application of generative Artificial Intelligence (AI) models, which are already supporting clinical diagnosis and treatment, is challenging the traditional regulatory framework for medical devices. The US Food and Drug Administration (FDA) has cleared over 1000 AI tools using the Software as a Medical Device (SaMD) framework, primarily for narrow, well-defined tasks. This framework, which often uses the 510(k) pathway to demonstrate "substantial equivalence" to existing devices, is struggling to oversee complex generative models.

The SaMD approach fails for generative AI for three critical reasons:

  1. Dynamic Updates vs. Static Review: Current guidance assumes a static algorithm amenable to a single, one-time review. Generative models, however, are frequently updated across training and deployment cycles, and their behavior changes via fine-tuning or policy updates, creating a mismatch with fixed change-control plans.
  2. Broad Scope vs. Narrow Labeling: SaMD presumes a narrow indication label (e.g., classifying high-risk lung lesions). A general-purpose model, which might interpret computed tomography scans, dose medications, and counsel patients, resists such narrow labeling because it is impossible to prespecify every use case.
  3. Uncertain Accountability: SaMD assumes control by a single manufacturer. Modern models built on open-weight foundations or extended by third-party plug-ins muddy ownership, leaving regulatory accountability uncertain.

A Licensure Model for Continuous Governance

The authors propose that the concept of licensure, historically used to regulate clinicians and maintain public trust amid concerns about competence and misconduct, can be adapted for clinical AI systems. This approach provides continuous governance that static device regulation cannot.

A licensure framework for AI could be built upon the precedent of supervised clinical models, similar to those used for physician assistants or nurse practitioners. The core process would involve:

• Core License: The developer seeks a license for the base model, defining competencies relevant to its intended scope.

• AI Residency: The model would need to meet minimum performance thresholds through technical validation and then undergo a period of supervised training and practice in an accredited clinical environment.

Post-licensure, supervision would be continuous, including regular retesting, reporting of clinical performance measures, and enforcement by a designated disciplinary board that processes complaints swiftly. Finally, a public database of disciplined models and their corrective action plans could be maintained.

Licensure Prerequisites: Defining Clear Accountability and Liability

Licensure is effective only if accountability is clear when patient harm occurs. The proposed model scales liability with the model’s autonomy:

• For restricted or high-risk functions, liability rests with the supervising clinician and the institution.

• When models are granted a degree of autonomy for lower-risk functions, developers must assume responsibility, including liability for instances of malpractice.

This clear delegation of responsibility would pave the way for functional AI insurance markets. By anchoring oversight in familiar concepts like competency testing, certification maintenance, and transparent discipline, a licensure framework ensures that clinical AI innovation is accompanied by essential accountability.

Reference: Bressman E, Shachar C, Stern AD, Mehrotra A. Software as a Medical Practitioner—Is It Time to License Artificial Intelligence? JAMA Intern Med. Published online November 17, 2025. doi:10.1001/jamainternmed.2025.6132
