ChatGPT has become a massively popular AI chatbot and a fixture in the headlines. Now, Sam Altman, CEO of OpenAI, the company behind the bot, has shared his ideas about how AI could improve everyday life. Specifically, he says that AI could serve as a medical advisor for people who cannot afford to see a doctor or access healthcare at all.
AI Medical Advisor?
As per Futurism, he wrote that such tools can help individuals become more productive, healthy, smart, and entertained. Altman specifically said that AI can boost productivity by letting people spend less time on email; make people healthier by serving as a medical advisor for those who cannot afford a doctor; make people smarter by allowing students to use ChatGPT for learning; and make people more entertained through AI-generated memes.
There are, however, several reasons to be wary of taking serious medical advice from AI. Futurism reports that AI cannot yet be trusted for medical information and health tips, even when the output is reviewed by an editor. In fact, not long ago, an AI-written health article was found to be riddled with factual errors. Despite how far the technology has advanced, it still produces significant mistakes.
According to Futurism, some of those errors could stem from mixed-up details or grammatical slips. AI systems are also known to produce statements that sound convincing but have no factual basis.
Altman does acknowledge, however, that AI tools are still flawed to some extent and that institutions need time to figure out how to handle AI.
ChatGPT in the Medical Field
Nevertheless, it is hard to discredit how advanced AI is becoming. ChatGPT, for instance, has already passed business, law, medical, and science exams, demonstrating how capable the bot has become.
While it is premature to dub it a medical advisor, AI has also been seeping into the medical field. According to Gavi, early adopters have already used the bot to help with tasks such as writing patient letters or sick notes. In this way, the bot has served as a personal assistant of sorts, speeding up bureaucratic work and freeing more time for interacting with patients.
A number of ethical issues arise when the bot is incorporated into patient care, and it could lead to unintended consequences. These issues concern consent, confidentiality, reliability, inequity, and quality of care.
It is still too early to draw conclusions about the ethical implications of using ChatGPT in research and medicine. The more frequently the bot is used, the clearer those implications will become. Nevertheless, these issues will undoubtedly be the focus of future discussions.