Can ChatGPT write health and medical content?
GPT-3 (Generative Pre-trained Transformer 3) is a state-of-the-art language-processing AI model developed by OpenAI. It can generate human-like text and has a wide range of applications, including language translation, language modeling, and text generation for applications such as chatbots. With 175 billion parameters, it is one of the largest and most powerful language-processing AI models to date. Its most prominent use so far has been powering ChatGPT, a highly capable chatbot [1].
The ChatGPT Buzz!
As per recent news reports, OpenAI's ChatGPT has successfully cleared all three parts of the United States Medical Licensing Examination (USMLE) in a single attempt, according to the results of a new experiment, and ChatGPT is fast emerging as the greatest buzz of the global town [2].
With its ability to generate text, ChatGPT may well have implications for writing in general, and for medical writing and medical journalism in particular, where information is vast and complex, and extracting crisp perspectives into content is difficult and time-consuming. It will therefore be interesting to see how it influences the work of medical communications and medico-marketing professionals, for whom content forms a significant part of the profession.
So, we ran a series of tasks with ChatGPT, and here are our key learnings, listed in the form of a Q&A:
Can ChatGPT write health content and medical content?
Yes, very certainly.
Can ChatGPT write health content and medical content with reference sources?
This is interesting. Some authors had earlier reported that ChatGPT could not do this.
Hence, we gave ChatGPT specific instructions to write an article with a reference source, and if you instruct it specifically to cite references, it does.
When you ask ChatGPT for a specific, small piece of medical information with a reference source, for example, "What are the diagnostic criteria of diabetes?", it gives you a correct response with a reference (refer Figure 1).
Figure 1
Can ChatGPT write longer health and medical content pieces with reference sources?
This is where, in our experience, ChatGPT at this point could not match a professional. The medical content it produces is very basic, with shallow technical detail.
We asked ChatGPT to "write 1000 words on salbutamol in bronchial asthma with source reference cited". For a molecule this old, with so much published information, ChatGPT gave us very basic content by medical writing standards, using only two references (refer Figure 2).
Figure 2
We probed further, specifically instructing ChatGPT "to summarise all clinical trial evidence on salbutamol with use of 10 references cited". The chatbot was overwhelmed and responded that it was unable to do so (refer Figure 3).
Figure 3
To further narrow the chatbot's scope, we nudged ChatGPT "to summarise all clinical evidence on sitagliptin with use of 5 references" (instead of 10). The chatbot responded with an article draft that was again shallow and lacked the technical depth expected by a medical content audience. The article contained 5 cited references, of which 3 (citations 3, 4, and 5) were identical and incorrect (refer Figure 4).
Figure 4
Can ChatGPT provide a content strategy and analyze medical information?
The chatbot can give very basic outlines of a molecule's scope, but it is certainly not in a position to strategize content at this point. Nor is that its basic role (in simple words, it is just an AI language tool).
We asked ChatGPT to compare all molecules of the antihypertensive class of angiotensin receptor blockers (ARBs): "What is the difference between different Angiotensin receptor Blockers?" The output could not go much beyond basic information (refer Figure 5).
Figure 5
What are the other limitations of ChatGPT in the medical writing, health, medical journalism, and medico-marketing context?
ChatGPT’s current training data has a cut-off date of 2021. Hence, it is not in a position to draft the latest information (refer Figure 6). This currently seems to be a major limitation for ChatGPT in this space.
Figure 6
It is also noteworthy, however, that despite being trained on data up to 2021, the chatbot could not give us significant depth in its content on sitagliptin and salbutamol (old molecules with a plethora of resources online). This implies the chatbot is still very much a work in progress on the technical side.
Can ChatGPT develop clinical trial protocols?
ChatGPT cannot develop a customized clinical trial protocol, but it can produce a very basic structural outline of one, drawing on information from earlier studies of the same molecule (refer Figure 7). Again, without any specific analysis or insight.
Figure 7
Can ChatGPT be credited with authorship for writing an article?
This has been a big debate over the last couple of months in the research and scientific publishing community. Most scientific publishing authorities at this time do not allow authorship for ChatGPT.
Several publishers agree that AIs such as ChatGPT do not fulfil the criteria for study authorship because they cannot take responsibility for the content and integrity of scientific papers. Some publishers, however, say that an AI's contribution to writing papers can be acknowledged in sections other than the author list [3].
Most recently, in the third week of January 2023, the World Association of Medical Editors (WAME) released its Recommendations on ChatGPT and Chatbots in Relation to Scholarly Publications [4], summarised below:
- Chatbots cannot be authors.
- Authors should be transparent when chatbots are used and provide information about how they were used.
- Authors are responsible for the work performed by a chatbot in their paper (including the accuracy of what is presented and the absence of plagiarism) and for appropriate attribution of all sources (including for material produced by the chatbot).
- Editors need appropriate tools to help them detect content generated or altered by AI, and these tools must be available regardless of their ability to pay.
Key Messages
- ChatGPT and such applications will likely ease certain facets of the medical content writing journey. ChatGPT can be a great ‘assistant’ if understood and utilized prudently.
- They cannot completely replace medical communications professionals at this point in time. ChatGPT could certainly complement and synergize with medical content writers, medical journalists, and medico-marketing professionals if they apply it smartly and appropriately.
- Medical editors and reviewers will be required to review and safeguard the sanctity of health and medical content developed through AI applications, as their use in content development will be inevitable in the coming times. Medical content editors and reviewers will need to be continuously updated in this direction.
- The application of ChatGPT will likely gain significant momentum in the medical content space only once it produces content based on the latest sources available online. (ChatGPT's biggest competition is Google, as they say!)
- Accept ChatGPT and similar applications in our professions and lives; study and analyze them, and understand their applications and limitations. Befriend them to achieve synergy and raise the bar by further stretching our boundaries of contribution.
Way Forward
It would be interesting to see how ChatGPT continues to evolve and upgrade in the coming times. Equally important, it will be exciting to see how professionals, concerned teams, and industry will adapt, make space, and differentially stand out amidst the emergence of ChatGPT. As they say, the only constant in Life is CHANGE!
References
1) Alex Hughes, ChatGPT: Everything you need to know about OpenAI's GPT-3 tool, news release, 16 January 2023.
2) Aparna Iyer, ChatGPT clears the United States Medical Licensing Examination (USMLE), 23 January 2023.
3) Chris Stokel-Walker, ChatGPT listed as author on research papers: many scientists disapprove, news, 18 January 2023.
4) Chris Zielinski et al, Chatbots, ChatGPT, and Scholarly Manuscripts: WAME Recommendations on ChatGPT and Chatbots in Relation to Scholarly Publications, January 2023.
Disclaimer: The views expressed in this article are of the author and not of Medical Dialogues. The Editorial/Content team of Medical Dialogues has not contributed to the writing/editing/packaging of this article.