Study finds ChatGPT can write medical research abstracts that can trick scientists

Written By: Isra Zaman
Medically Reviewed By: Dr. Kamal Kant Kohli
Published On 2023-01-17 03:30 GMT | Updated On 2023-01-17 03:30 GMT

In a recent study, researchers found that ChatGPT, a large language model based on neural networks, can write fake research abstracts so convincing that even scientists could not reliably distinguish them from real ones written by researchers.

ChatGPT was released by California-based OpenAI on November 30, 2022, and is currently freely available. The chatbot learns by processing existing human-generated text and produces responses based on user prompts.

Researchers have also noted that much of the labour in a major software-engineering project, such as building a web browser, goes into understanding what users need. Such requirements are difficult to pin down in the straightforward, machine-readable specifications an AI could use to generate code.

According to a news report in Nature, the researchers prompted ChatGPT to produce 50 medical research abstracts based on a selection of articles published in JAMA, The New England Journal of Medicine, The BMJ, The Lancet, and Nature Medicine. They then ran the output through plagiarism and AI-output detection tools before asking a group of medical researchers to pick out the chatbot-generated abstracts.
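For readers who want a concrete picture of that workflow, the Python sketch below mirrors the screening stages at a purely schematic level. It is not the authors' code: the function names, stub scores and 0.5 threshold are hypothetical placeholders standing in for the actual plagiarism and AI-output detection tools used in the study.

```python
# Schematic sketch of the study's screening pipeline, not the authors' code.
# plagiarism_score() and ai_detector_score() are hypothetical stubs standing
# in for real tools; wire in actual checkers to use this in practice.

from dataclasses import dataclass


@dataclass
class Abstract:
    text: str
    is_generated: bool  # ground truth: True if ChatGPT produced it


def plagiarism_score(text: str) -> float:
    """Stub: fraction of the text matched by a plagiarism checker."""
    return 0.0


def ai_detector_score(text: str) -> float:
    """Stub: probability an AI-output detector assigns to 'machine-written'."""
    return 0.5


def screen(abstracts: list[Abstract], threshold: float = 0.5) -> None:
    generated = [a for a in abstracts if a.is_generated]
    plagiarised = sum(plagiarism_score(a.text) > 0.0 for a in generated)
    flagged = sum(ai_detector_score(a.text) >= threshold for a in generated)
    print(f"Plagiarism hits:  {plagiarised}/{len(generated)} generated abstracts")
    print(f"Detector flagged: {flagged}/{len(generated)} generated abstracts")
    # The study's third stage was human review: blinded readers labelled each
    # abstract genuine or generated, scored against the is_generated flag.


screen([Abstract("Background: ...", True), Abstract("Objective: ...", False)])
```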

The generated abstracts sailed through the plagiarism checker: no plagiarism was detected. An AI-output detector fared better, flagging 66 per cent of the fabricated abstracts. Human reviewers, by comparison, correctly identified only 68 per cent of the generated abstracts and 86 per cent of the real ones.
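Read as a binary classification task, those rates imply the corresponding error rates. The snippet below is illustrative arithmetic only, using the percentages reported above rather than the study's underlying counts.

```python
# Illustrative arithmetic on the reported identification rates.
# Only the percentages quoted above are used; the study's raw counts are not.

detector_hit_rate = 0.66  # AI detector: share of fabricated abstracts flagged
human_hit_rate = 0.68     # reviewers: share of fabricated abstracts caught
human_true_rate = 0.86    # reviewers: share of genuine abstracts recognised

print(f"Fakes missed by the detector:        {1 - detector_hit_rate:.0%}")
print(f"Fakes reviewers accepted as genuine: {1 - human_hit_rate:.0%}")
print(f"Genuine abstracts wrongly flagged:   {1 - human_true_rate:.0%}")
```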

To preserve scientific integrity, the researchers contend that journals and medical conferences should adopt new policies, including the use of AI-output detectors in the editorial process and full disclosure whenever these tools have been used.

Reference:

Li, Y. et al. Science 378, 1092–1097 (2022). https://doi.org/10.1038/d41586-022-04383-z

Article Source: Science
