Study finds ChatGPT can write medical research abstracts that can trick scientists

Written By: Isra Zaman
Medically Reviewed By: Dr. Kamal Kant Kohli
Published On 2023-01-17 03:30 GMT | Updated On 2023-01-17 03:30 GMT

In a recent study, researchers found that ChatGPT, a large language model based on neural networks, can write fake research abstracts convincing enough that scientists could not reliably distinguish them from genuine abstracts written by researchers.

ChatGPT was released by California-based OpenAI on November 30, 2022, and is currently freely available. The chatbot learns by processing existing human-generated text and produces responses based on user prompts.


Researchers have also noted that much of the labour that goes into a large software-engineering project, such as building a web browser, lies in understanding what users need. Such requirements are difficult to capture in the simple, machine-readable specifications that an AI can use to generate code.

The researchers prompted ChatGPT to produce 50 medical research abstracts based on a selection of articles published in JAMA, The New England Journal of Medicine, The BMJ, The Lancet and Nature Medicine, according to a Nature news article on the study. They then ran the generated abstracts through plagiarism and AI-output detection tools before asking a group of medical researchers to identify which abstracts were chatbot-generated.

The generated abstracts passed the plagiarism checker cleanly, with no plagiarism detected. An AI-output detector flagged 66 per cent of the fabricated abstracts. Human reviewers fared only slightly better, correctly identifying 68 per cent of the generated abstracts, while correctly recognising 86 per cent of the genuine ones.
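The human-reviewer figures are simple proportions: the share of generated abstracts correctly flagged as generated, and the share of genuine abstracts correctly recognised as genuine. The short Python sketch below illustrates that arithmetic; the reviewer labels, counts and function name are made up for demonstration only, not the study's actual code or data.

# Illustrative sketch only: the records below are hypothetical,
# NOT the study's data.

def detection_rates(records):
    """records: list of (true_origin, reviewer_call) pairs,
    where each value is either 'generated' or 'real'."""
    fake_total = sum(1 for origin, _ in records if origin == "generated")
    fake_caught = sum(1 for origin, call in records
                      if origin == "generated" and call == "generated")
    real_total = sum(1 for origin, _ in records if origin == "real")
    real_caught = sum(1 for origin, call in records
                      if origin == "real" and call == "real")
    return fake_caught / fake_total, real_caught / real_total

# Hypothetical reviewer calls over eight abstracts:
sample = [
    ("generated", "generated"), ("generated", "real"),
    ("generated", "generated"), ("generated", "generated"),
    ("real", "real"), ("real", "real"),
    ("real", "generated"), ("real", "real"),
]
fake_rate, real_rate = detection_rates(sample)
print(f"Generated abstracts correctly flagged: {fake_rate:.0%}")  # 75%
print(f"Real abstracts correctly identified: {real_rate:.0%}")    # 75%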

To preserve scientific integrity, the researchers contend that journals and medical conferences should adopt new policies that incorporate AI-output detectors into the editorial process and require full disclosure when these tools are used.

Reference:

Li, Y. et al. Science 378, 1092–1097 (2022), doi: https://doi.org/10.1038/d41586-022-04383-z

Article Source: Science
