From medical knowledge to accounting expertise, several researchers have put ChatGPT to the test. Now, another study has focused on assessing this chatbot's capabilities once more. This time, the chatbot is evaluated in the field of ophthalmology.
ChatGPT Takes Ophthalmology Study test
According to Science Daily, ChatGPT was tasked with answering a dataset of multiple-choice questions from the OphthoQuestions free trial, a study resource commonly used by board candidates to prepare for the ophthalmology certification exam.
To make sure that the bot's responses were not influenced by concurrent conversations, the chat history was cleared before each question was entered, and a new ChatGPT account was used.
Moreover, because the bot only accepts text input, questions that involved videos or images were excluded from the pool.
The study was published in the JAMA Ophthalmology journal.
ChatGPT Scores Nearly 50%, Shows Improved Results on Second Take
In total, the bot was given 125 multiple-choice questions to answer. On its first attempt in January 2023, it correctly answered 58 out of 125 questions (46%). In a repeat analysis in February 2023, however, its performance improved: ChatGPT scored 58%, 12 percentage points higher than its first attempt.
According to Medical Xpress, the bot's responses also closely aligned with how trainees answered these study questions. ChatGPT chose the same multiple-choice answer that ophthalmology trainees most commonly provided 44% of the time, the second most popular option 22% of the time, the second least popular option 18% of the time, and the least popular option 11% of the time.
Across specific ophthalmology fields, the bot answered most accurately on questions pertaining to general medicine, scoring 79% for that section. However, its accuracy was considerably lower on ophthalmology subspecialties: it answered 0% of retina questions and 20% of oculoplastics questions correctly.
Despite its poor performance on ophthalmology subspecialty questions, ChatGPT's accuracy on niche topics could still improve in the future.
Since the chatbot's release in November 2022, interest in its use for medicine and exam preparation has spiked. Dr. Rajeev H. Muni, the study's principal investigator and a researcher at St. Michael's Li Ka Shing Knowledge Institute, explains that over time the chatbot may have an increasing role in clinical practice and medical education.
However, based on the investigation, ChatGPT does not currently answer enough questions correctly to substantially aid ophthalmology board preparation. Nevertheless, the chatbot's body of knowledge may still evolve quickly over time.
RELATED ARTICLE: AI Doctor? ChatGPT Nearly Passes US Medical Licensing Exam
Check out more news and information on Artificial Intelligence in Science Times.