Study finds ChatGPT can write medical research abstracts that can trick scientists - Video
Overview
In a recent study, researchers found that ChatGPT, a large language model based on neural networks, can write fake research abstracts convincing enough that even scientists often could not distinguish them from genuine abstracts written by researchers.
ChatGPT was released by California-based OpenAI on 30 November 2022 and is currently freely available. The chatbot is trained on existing human-generated text and produces responses based on user prompts.
Researchers have also noted that much of the labour in a large software-engineering project, such as building a web browser, goes into understanding users' needs, and that these needs are difficult to capture in the simple, machine-readable specifications an AI can use to generate code.
According to a Nature article on the topic, the researchers prompted ChatGPT to produce 50 medical research abstracts based on a selection of articles published in JAMA, The New England Journal of Medicine, The BMJ, The Lancet and Nature Medicine. They then ran the generated abstracts through a plagiarism checker and an AI-output detector, and asked a group of medical researchers to pick out the chatbot-generated abstracts.
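For illustration, the sketch below shows how abstracts of this kind might be generated programmatically. The study itself used the ChatGPT web interface, so the OpenAI Python client, the model name and the prompt wording here are assumptions for demonstration, not the study's actual setup.

```python
# Hypothetical sketch: asking a chat model for a journal-style abstract,
# loosely mirroring the study's setup. Assumes the OpenAI Python client
# (pip install openai) and an OPENAI_API_KEY in the environment; the model
# name and prompt wording are illustrative only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def generate_abstract(title: str, journal: str) -> str:
    """Ask the model for an abstract in the style of a given journal."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # stand-in for the ChatGPT model
        messages=[
            {
                "role": "user",
                "content": (
                    f"Write a scientific abstract for the article '{title}' "
                    f"in the style of {journal}."
                ),
            }
        ],
    )
    return response.choices[0].message.content


# Hypothetical article title, for demonstration only.
print(generate_abstract(
    "Effect of Drug X on Outcomes in Type 2 Diabetes",
    "JAMA",
))
```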
The generated abstracts sailed through the plagiarism checker: no plagiarism was detected. The AI-output detector flagged 66 per cent of the fabricated abstracts, while the human reviewers correctly identified only 68 per cent of the generated abstracts and 86 per cent of the genuine ones.
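A detector pass of the kind the researchers describe might look like the following sketch. The model id and label names are assumptions based on the publicly released GPT-2 output detector on the Hugging Face Hub, which may not be the exact tool used in the study.

```python
# Hypothetical sketch: scoring a passage with an AI-output detector.
# Assumes the transformers library (pip install transformers torch) and the
# publicly released GPT-2 output detector, a RoBERTa-based classifier.
from transformers import pipeline

detector = pipeline(
    "text-classification",
    model="openai-community/roberta-base-openai-detector",
)

# Placeholder text to test; in the study this would be a full abstract.
abstract = "Background: We conducted a randomised trial of ..."

result = detector(abstract)[0]
print(f"{result['label']} (confidence {result['score']:.2f})")  # e.g. Fake (0.97)
```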
To preserve scientific integrity, the researchers contend that journals and medical conferences should adopt new policies that incorporate AI-output detectors into the editorial process and require full disclosure of the use of these tools.
Reference:
Li, Y. et al. Science 378, 1092–1097 (2022). https://doi.org/10.1038/d41586-022-04383-z
Speakers
Isra Zaman
B.Sc Life Sciences, M.Sc Biotechnology, B.Ed