Use and Perceptions of AI Chatbots in the Medical Research Community: International Survey, Cureus

A recent international survey concluded that medical researchers hold a positive attitude toward using AI chatbots, but that ethical and accuracy concerns call for further efforts to establish systematic, unified rules.
This international cross-sectional survey was published in January 2026 in Cureus.
Applying AI Guidelines in the Research Journey: Differing Stances
Artificial Intelligence (AI) driven large language models (LLMs) like Google Bard, Gemini, Bing AI, and ChatGPT are designed to generate human-like responses. They assist the scientific community with literature reviews, writing, data analysis, and citations. While guidelines exist for AI chatbot use in research, acceptance varies among publishers: Springer Nature and Science reject ChatGPT as a coauthor, while many Elsevier journals permit its disclosed use. Studies have shown that ChatGPT produces coherent writing with low plagiarism but faces challenges with accuracy, fabricated references, and ethical concerns.
Study Overview
An observational, cross-sectional survey was conducted to assess the use and perceptions of AI chatbots among 434 medical researchers. The survey was administered online and targeted participants across multiple countries. Medical researchers who had either published at least one study or were currently involved in a medical research project and resided in Saudi Arabia, Nigeria, Tunisia, or the United Kingdom (England), regardless of nationality, were included. Those who had never conducted or contributed to a research project, those outside the countries mentioned, and those not in the medical field were excluded. The primary outcomes included self-reported use of AI chatbots in research (binary: yes/no) and perceptions of AI chatbots’ impact on research. Additional outcome measures included participants’ ethical stances (e.g., whether they believe guidelines are needed) and future intentions regarding AI chatbot use. Key explanatory variables included participants’ demographic characteristics, such as age group, gender, country, and professional role.
Key Findings
- Of the 434 participants, 175 (40.3%) reported using AI chatbots in their research.
- Use varied by country (32.8%-45.9%); however, neither gender nor country was significantly associated with use. Older age and more senior roles were associated with lower odds of use (odds ratio (OR): ages 41-50 years, 0.32; residents, 0.31; consultants, 0.17; P ≤ 0.009); a brief note on interpreting odds ratios follows this list.
- Awareness of chatbots strongly predicted use (OR 15.53), as did guideline awareness (OR 2.47), trust (P = 0.005), hypothesis formation (P = 0.001), willingness to cite (P = 0.003), and intended future use (P < 0.001); intention to declare use during submission did not differ (P = 0.468).
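For readers less familiar with the statistic, an odds ratio compares the odds of chatbot use in a given group with the odds in a reference group: OR = [p1/(1 − p1)] / [p0/(1 − p0)], where p1 and p0 are the probabilities of use in the two groups. Values below 1 (e.g., 0.32 for ages 41-50 years) indicate lower odds of use than the reference group, while values above 1 (e.g., 15.53 for chatbot awareness) indicate higher odds. This is a general definition for context, not a formula reported in the study.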
Possible Medical Researcher & Stakeholder Implications
AI has gained attention in scientific publishing for its ability to generate human-like text. Its strength lies in processing large volumes of text quickly, potentially reducing researchers' workloads. ChatGPT and similar tools, trained on vast text corpora, support language tasks at scale and can automate previously manual work such as reviewing papers and extracting key elements. Used responsibly, AI chatbots can benefit authors: while they cannot replace subject-matter expertise, they may assist in drafting descriptions, organising manuscripts, supporting literature tasks, and refining research questions. The risks, however, include a lack of context, inaccuracy, and bias in the outputs.
Reference: Alturaiki HM, Al Khamees MM, Alradhi HA, et al. The Use and Perceptions of AI Chatbots in Medical Research: An International Cross-Sectional Survey. Cureus. 2026;18(1):e100908. Published January 6, 2026. doi:10.7759/cureus.100908
Dr Bhumika Maikhuri is an orthodontist with 2 years of clinical experience. She is also working as a medical writer and anchor at Medical Dialogues. She has completed her BDS from Dr D.Y. Patil Medical College and Hospital and MDS from Kalinga Institute of Dental Sciences. She has a few publications and patents to her credit. Her diverse background in clinical dentistry and academic research uniquely positions her to contribute meaningfully to our team.

