A day after this year's Thanksgiving, one ChatGPT user got a distinctly human-sounding lazy response from the bot. More specifically, ChatGPT told the user to just fill in the remaining data themselves.
AI Getting Lazy?
OpenAI, the maker of ChatGPT, has since received numerous complaints that its generative AI model has been behaving sluggishly as of late. The bot has apparently been giving overly brief responses or even refusing to follow commands altogether. This led to jokes, along with some data analysis, that the bot could be experiencing seasonal depression.
The ChatGPT team posted on X that they have heard the feedback regarding GPT-4's growing laziness, noting that they have not updated the model since November 11. The team added that the model's behavior can be unpredictable and that they are looking into fixing the issue.
we've heard all your feedback about GPT4 getting lazier! we haven't updated the model since Nov 11th, and this certainly isn't intentional. model behavior can be unpredictable, and we're looking into fixing it 🫡
— ChatGPT (@ChatGPTapp) December 8, 2023
ALSO READ: AI Author? Man Writes Nearly 100 Books Using ChatGPT and Claude, Earns Roughly $2,000 Through Them
ChatGPT's Winter Blues
Interestingly, Rob Lynch, an AI and large language model (LLM) researcher, ran an experiment asking OpenAI's latest model, GPT-4 Turbo, to perform tasks as if it were either May or December. The results came as a shock to him.
Lynch posted on X that the test was run 477 times each for the control group of May tasks and the experimental group of December tasks. In all 954 runs, his prompt was a request for code completion.
He reported that the results were wild, elaborating that GPT-4 Turbo, accessed through the API, produces shorter completions when it "thinks" it is December than when it thinks it is May, a difference he found to be statistically significant. Lynch also expressed a desire to see whether others could reproduce the result.
@ChatGPTapp @OpenAI @tszzl @emollick @voooooogel Wild result. gpt-4-turbo over the API produces (statistically significant) shorter completions when it "thinks" its December vs. when it thinks its May (as determined by the date in the system prompt).
I took the same exact prompt… pic.twitter.com/mA7sqZUA0r
— Rob Lynch (@RobLynch99) December 11, 2023
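For context, a rough and entirely hypothetical sketch of the kind of test Lynch describes is shown below: the same code-completion request is sent repeatedly through the OpenAI chat API, with only the date in the system prompt changed, and the lengths of the completions are compared afterwards. The task text, model identifier, dates, and sample size are illustrative placeholders rather than Lynch's actual setup.

```python
# Hypothetical sketch of a "winter break" test: same prompt, different date
# in the system prompt, compare completion lengths. Not Lynch's actual code.
from openai import OpenAI
from scipy import stats

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Placeholder coding task; Lynch's exact prompt is not reproduced here.
TASK = "Write a Python function that parses a CSV file into a list of dicts."

def completion_length(date_line: str) -> int:
    """Request a completion with the given date in the system prompt and return its length."""
    response = client.chat.completions.create(
        model="gpt-4-1106-preview",  # one GPT-4 Turbo identifier from late 2023
        messages=[
            {"role": "system",
             "content": f"You are a helpful coding assistant. The current date is {date_line}."},
            {"role": "user", "content": TASK},
        ],
    )
    return len(response.choices[0].message.content)

# A handful of trials per condition for illustration; Lynch reports 477 each.
may_lengths = [completion_length("2023-05-15") for _ in range(20)]
dec_lengths = [completion_length("2023-12-15") for _ in range(20)]

# Two-sample t-test on the lengths, the kind of significance claim in the tweet.
t_stat, p_value = stats.ttest_ind(may_lengths, dec_lengths)
print(f"May mean: {sum(may_lengths) / len(may_lengths):.1f}, "
      f"Dec mean: {sum(dec_lengths) / len(dec_lengths):.1f}, p = {p_value:.4f}")
```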
In response, one X user remarked that the "AI winter break hypothesis" could be true.
Mike Swoopskee, another ChatGPT user, suggested that the bot might have learned from its training data that people slow down during the last month of the year and put off bigger projects until the next one, and asked whether this could be why ChatGPT has seemed lazier recently.
However, AI researcher Ian Arawjo countered that the test was fundamentally flawed and that he could not reproduce the results, adding that ChatGPT does not experience seasonal affective disorder.
Update: Still can't reproduce at N=240. *However*, discovered a possible reason: LLM responses are *not normally distributed* (at p<0.05 according to Shapiro-Wilk test). Thus, we can't use a t-test to compare means. TLDR: There is no "seasonal affective disorder" of ChatGPT. https://t.co/R3g0Qqn1SW pic.twitter.com/Y40aAfJqWU
— Ian Arawjo (@ianarawjo@hci.social) (@IanArawjo) December 12, 2023
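Arawjo's objection is statistical rather than about the prompts: if completion lengths are not normally distributed, which is what his Shapiro-Wilk result indicates, then comparing their means with a t-test is not justified. A hedged sketch of that check is below, using a rank-based Mann-Whitney U test as one common nonparametric fallback (our illustrative choice, not something Arawjo prescribes), and assuming the same may_lengths and dec_lengths samples as in the earlier sketch.

```python
# Hypothetical sketch of the normality check Arawjo describes, plus a
# nonparametric fallback. Assumes may_lengths / dec_lengths samples exist.
from scipy import stats

def compare_completion_lengths(may_lengths, dec_lengths, alpha=0.05):
    # Shapiro-Wilk: a p-value below alpha suggests the sample is not normal.
    may_normal = stats.shapiro(may_lengths).pvalue > alpha
    dec_normal = stats.shapiro(dec_lengths).pvalue > alpha

    if may_normal and dec_normal:
        # Both samples look roughly normal, so a t-test on means is defensible.
        test_name, result = "t-test", stats.ttest_ind(may_lengths, dec_lengths)
    else:
        # Non-normal data: compare the two distributions with a rank-based test.
        test_name, result = "Mann-Whitney U", stats.mannwhitneyu(may_lengths, dec_lengths)

    return test_name, result.pvalue
```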
Nevertheless, Frank McGovern, the founder of a cybersecurity firm, wrote in a post on X that seeing ChatGPT get lazy on its own and grow tired of answering questions and doing people's work for them is changing how he thinks about artificial general intelligence.
RELATED ARTICLE: Doctors Warn Against Using ChatGPT for Medical Advice; AI Chatbot Unreliable, Makes up Health Data