AI language models such as OpenAI's chatbot ChatGPT are likely to transform the academic community. Conversational AI chatbots built into internet search engines, such as Google's Bard and Microsoft's Bing, are already on their way to changing the course of scientific search.
A Tool for Enhanced Academic Research
On August 1, the Dutch academic publishing company Elsevier released a ChatGPT-powered AI interface for some users of its Scopus database. Meanwhile, the technology company Digital Science revealed a closed trial of an AI large language model (LLM) assistant for its Dimensions database. Analytics company Clarivate also joined the trend, saying it is incorporating LLMs into its Web of Science database.
Using LLMs for scientific search is not a new trend. Start-up companies such as Scite, Elicit, and Consensus already offer AI systems that help researchers summarize the findings in a field or identify top studies. These companies rely on free science databases or gain access to paywalled research articles by collaborating with publishers. Now, the companies that own large proprietary databases of scientific abstracts and references have decided to join the AI rush.
Scopus AI, launched by Elsevier as a pilot chatbot, is intended as a light, playful tool to help researchers quickly get summaries of research topics they are unfamiliar with. In response to a natural-language question, the chatbot uses a version of the LLM GPT-3.5 to produce a fluent summary paragraph about the research topic, along with cited references and additional questions to explore.
The AI assistant that Digital Science introduced for its large Dimensions scientific database, by contrast, is available only to selected beta testers. What differentiates it from Scopus AI is that the search engine first retrieves relevant articles after a user types a question; an OpenAI GPT model then produces a summary paragraph from the top-ranked abstracts it retrieved.
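Neither company has published implementation details, but the retrieve-then-summarize workflow Digital Science describes resembles a standard retrieval-augmented generation pattern. Below is a minimal sketch of that pattern, assuming a hypothetical search_dimensions() helper standing in for the database's own ranking engine and using the OpenAI Python client; it is an illustration, not the vendors' actual code.

```python
# Hypothetical sketch of a retrieve-then-summarize (retrieval-augmented) pipeline.
# search_dimensions() is a placeholder, not a real Dimensions API call.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment


def search_dimensions(question: str, limit: int = 5) -> list[dict]:
    """Placeholder: return the top-ranked abstracts for a query."""
    raise NotImplementedError("Stand-in for the database's own search engine")


def summarize(question: str) -> str:
    # Step 1: the search engine, not the LLM, selects the relevant articles.
    abstracts = search_dimensions(question)
    context = "\n\n".join(
        f"[{i + 1}] {a['title']}: {a['abstract']}" for i, a in enumerate(abstracts)
    )
    # Step 2: a GPT model writes a summary grounded only in those abstracts.
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system",
             "content": "Summarize the research below and cite abstracts by number."},
            {"role": "user", "content": f"Question: {question}\n\n{context}"},
        ],
    )
    return response.choices[0].message.content
```

The key design point in such a setup is that the LLM only paraphrases what the search engine has already ranked highly, which is also why these tools inherit the limits of that retrieval step.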
Clarivate, meanwhile, is working to add LLM-powered search to the Web of Science through a strategic partnership with AI21 Labs. However, Bar Weinstein, the company's president for academia and government, has not yet given a timeline for releasing the LLM-based tool.
Is LLM-Powered Search Reliable?
Skeptics raise concerns about the reliability of LLMs for scientific research. LLMs do not understand the text they generate; they work by stringing together words that seem plausible. As a result, their output can contain factual errors and biases, and the models can even make up non-existent references. The tools are also limited to summarizing whatever relevant information the underlying search engine retrieves.
Adding to the problems of reliability and accuracy is a lack of transparency. In typical online research, search engines give users a list of links, and the users decide which sources to trust. With an LLM, by contrast, the data the model was trained on is rarely disclosed. This lack of transparency could have significant consequences if the language model fails, spreads misinformation, or hallucinates.