Artificial Intelligence (AI) in Health and Healthcare Explained: Top Highlights from JAMA Summit

Written By: Dr. Bhumika Maikhuri
Published On 2025-11-12 05:15 GMT | Updated On 2025-11-12 05:15 GMT

Artificial intelligence (AI) is set to massively disrupt every aspect of health and health care delivery in the coming years, presenting a major opportunity to address long-standing challenges in the system. However, whether this disruption ultimately improves health outcomes for everyone depends critically on the creation of a sophisticated ecosystem, concluded a recent Special Communication on AI in Medicine published in JAMA.


The article highlights that this ecosystem must be capable of generating rapid, efficient, robust, and generalizable knowledge about the actual consequences of these AI tools on health.

Artificial Intelligence (AI) is rapidly transforming every aspect of health care, offering immense promise but equally significant risks. While many AI tools, especially in medical imaging, are already in use, their real-world impact on patient outcomes is rarely assessed. Unlike traditional drugs or devices, AI technologies lack standardized frameworks for evaluation, regulation, and monitoring. This article explores how health care AI should be responsibly developed, evaluated, and governed to ensure safety and effectiveness.

The top highlights from the JAMA Summit report include:

I. The Diverse Landscape of AI Tools

AI tools influencing health and health care can be categorized by their function:

Clinical Tools: These directly support health care professionals’ clinical activities, such as AI software for automated diabetic retinopathy screening, or machine learning algorithms embedded in the electronic health record (EHR) providing sepsis alerts. More than 1200 AI tools have received US Food and Drug Administration (FDA) clearance, with the majority in medical imaging. Dissemination outside of medical imaging and EHR-embedded applications is slow, primarily because health care systems question their value, citing high costs for licensing, training, maintenance, and monitoring, often without guaranteed reimbursement.

Direct-to-Consumer (DTC) Tools: These are used by individuals for health or wellness concerns without necessarily engaging a health care professional. Examples include chatbots for mental health support or algorithms using smartwatch biosensor data to detect arrhythmias. Many DTC tools, such as mobile health apps (over 350,000 exist), are often labeled as low-risk general wellness products and thus avoid strict regulatory or reporting requirements, limiting data on their actual health effects.

Health Care Business Operations Tools: Health care systems are rapidly adopting these to boost system efficiencies and operating margins. Examples include software for optimizing bed capacity, patient and staff scheduling, and revenue cycle management. The consequences for patients when health care organizations adopt these tools are not well understood. For instance, physicians have expressed concerns that insurers’ use of AI tools to deny prior authorization requests negatively affects patients, yet peer-reviewed evaluations are rare.

Hybrid Tools: These serve multiple purposes, supporting both business operations and clinical care, such as ambient AI scribes that transcribe patient-clinician conversations to generate notes, bills, and potentially suggest diagnoses. Hybrid tools are being adopted very rapidly by health care systems. Evaluations of these tools largely focus on clinician satisfaction and workflow rather than on effects on patient outcomes or health care quality.

II. Urgent Challenges in Governance

The effectiveness of AI tools is highly dependent on implementation factors like the human-computer interface, user training, and the context of use, complicating evaluations. The report identifies several critical challenges:

• Lack of Effectiveness Monitoring: Numerous initiatives exist for responsible AI use, but most prioritize monitoring for safety and institutional compliance (e.g., detecting model hallucinations) and do not address effectiveness (demonstration of improved outcomes).

• Evaluation Responsibility is Unclear: It is often unclear who is responsible for conducting evaluations of real-world health effects. Developers may focus on metrics like subscriber loyalty or revenue improvement, not patient health consequences.

• Insufficient Regulatory Framework: The US lacks a comprehensive, fit-for-purpose regulatory framework for health and health care AI. Legislative exclusions mean that many AI tools used for administrative support or general wellness are exempt from FDA regulation.

• Impracticality of Current Evaluation Methods: Relying on traditional randomized clinical trials (RCTs) is impractical given the rapid development rate of AI tools, as each tool might require multiple RCTs across various use cases.

III. The Four Priority Areas for Progress

To ensure AI improves health for all, the report emphasizes progress in four strategic areas:

1. Engage Stakeholders in Total Product Life Cycle Management: This requires holistic, continuous, multistakeholder, team-based management across the entire life cycle of AI tools. Greater partnership is needed between end users (patients and clinicians) and developers during design, and between developers, regulators, and health care systems during evaluation and deployment.
2. Develop and Implement Proper Evaluation and Monitoring Tools: Novel methods and expertise are needed to allow health care systems to conduct rapid, efficient, and robust causal inference evaluations of effectiveness. This includes promoting standards such as CONSORT-AI and DECIDE-AI; such evaluation is particularly needed for business operations tools.
3. Build the Proper Data Infrastructure and Learning Environment: Significant investment is required to create a nationally representative data infrastructure and analytic capacity. This infrastructure, potentially involving a federated data approach, would support the generation of generalizable knowledge about AI tool effects across diverse settings and populations.
4. Create the Right Incentive Structure: Market forces and policy levers must be harnessed to drive these fundamental changes, as current incentives may be misaligned. The federal government may need to provide incentives, similar to those under the HITECH Act, to encourage health care systems to invest in the necessary digital infrastructure and participate in learning initiatives.

The article underscores that while AI holds transformative potential for health care, realizing its benefits will depend on building an ecosystem rooted in accountability, transparency, and collaboration. Establishing clear regulatory frameworks, robust evaluation methods, and strong data infrastructure will be essential to ensure that AI innovations enhance, not endanger, patient care. As the authors emphasize, the challenge now is not whether AI will reshape health care, but whether it will do so safely, equitably, and effectively.

Reference: Angus DC, Khera R, Lieu T, et al. AI, Health, and Health Care Today and Tomorrow: The JAMA Summit Report on Artificial Intelligence. JAMA. Published online October 13, 2025. doi:10.1001/jama.2025.18490
