Why ChatGPT is not working and getting worse

AI should – as the name suggests – be intelligent, learn new things, and get better and better over time. However, as a study now reveals, ChatGPT's performance did not increase between March and June. On the contrary: ChatGPT is not working properly and has even gotten worse.

ChatGPT getting worse

According to a survey conducted around half a year ago, one in four people in the USA already knows ChatGPT or even uses AI. By now, there are probably even more people working with it. Having the AI write an essay for university, draft an application for a new job, or compose a letter to a customer – the possible uses for the chatbot are diverse.

But if you have been working with ChatGPT lately and were not completely satisfied with the AI's performance, you can now be sure that you did not imagine it.

The ChatGPT AI chatbot system released by OpenAI in November 2022 can do much more than just generate answers to complex questions. Despite its immense potential, the bot is not free from weaknesses – there is also a need for legal clarification.

ChatGPT isn’t getting better, it’s getting worse

As researchers from Stanford and Berkeley universities have revealed in a new paper, ChatGPT has not improved over time. On the contrary: the new study shows that the current GPT-4 model is not working properly and performed worse and worse over time on the tested tasks.

In their research, the scientists analyzed in particular how the nature of ChatGPT's responses changed, and found that the performance of the underlying AI models GPT-3.5 and GPT-4 actually "varies greatly".

They developed rigorous benchmark tests to assess ChatGPT's proficiency in math, coding, and visual reasoning tasks. The frightening result: the current GPT-4 model even shows a drop in performance.

ChatGPT a math genius? Not anymore

An example: In March, ChatGPT was able to correctly solve 488 of 500 questions in a mathematical challenge to determine prime numbers, which corresponds to an accuracy of 97.6 per cent. In June, on the other hand, ChatGPT was only able to correctly answer 12 questions, which corresponds to an accuracy level of just 2.4 per cent. The decline was particularly noticeable in the chatbot’s software coding capabilities.

The study revealed that the share of GPT-4's directly executable code dropped from 52 per cent in March to 10 per cent in June. These results were obtained using the pure version of the models, which means no plugins were used.
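The "directly executable" metric can be pictured with a small harness. A minimal sketch, assuming (the paper's exact setup is not described here) that the benchmark simply ran each raw code snippet the model produced and counted the runs that finished without errors:

```python
# Hedged sketch of a "directly executable" check: write the model's raw
# output to a temporary file, run it as a script, and treat a clean exit
# (return code 0, no timeout) as "directly executable".
import os
import subprocess
import sys
import tempfile


def runs_cleanly(code: str, timeout: float = 5.0) -> bool:
    """Return True if the code string executes without raising an error."""
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(code)
        path = f.name
    try:
        result = subprocess.run(
            [sys.executable, path],
            capture_output=True,
            timeout=timeout,
        )
        return result.returncode == 0
    except subprocess.TimeoutExpired:
        return False
    finally:
        os.remove(path)
```

Counting `runs_cleanly` results over 500 generated snippets would give exactly the kind of percentage the study reports.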

The researchers also asked ChatGPT whether 17,077 is a prime number. Although the answer is yes, GPT-4's accuracy on this question dropped by an extreme 95.2 percentage points. The hit rate of GPT-3.5, the model behind the free version of ChatGPT, on the same question, on the other hand, increased from 7.4 to 86.8 per cent.
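Questions like this one are trivially verifiable by conventional means, which is what makes the drop so striking. A short sketch using classic trial division (not the study's method, just a standard check):

```python
# Trial division: a number n > 2 is prime if no odd divisor up to
# sqrt(n) divides it evenly. Deterministic, unlike a chatbot's answer.
def is_prime(n: int) -> bool:
    """Check primality by trial division up to the square root of n."""
    if n < 2:
        return False
    if n % 2 == 0:
        return n == 2
    d = 3
    while d * d <= n:
        if n % d == 0:
            return False
        d += 2
    return True


print(is_prime(17077))  # prints True – the answer the models struggled with
```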

How did ChatGPT drop in performance?

Researchers suspect that this could be a side effect of optimizations made by OpenAI, the creator of the model. One possible cause is changes introduced to prevent ChatGPT from answering dangerous questions.

However, these security measures could affect ChatGPT’s usefulness for other tasks. The researchers observed the model’s inclination toward providing wordy and evasive responses rather than direct ones.

Experts’ views on ChatGPT’s performance

“GPT-4 gets worse over time, not better,” AI expert Santiago Valderrama wrote on Twitter. Valderrama also raised the possibility that a “cheaper and faster” mix of models could have replaced the original ChatGPT architecture.

“Rumor has it that they use several smaller and specialized GPT-4 models that function similarly to one large model but are cheaper to run,” he speculated. This could, he believes, speed up response times for users but reduce proficiency.

Another expert, Dr. Jim Fan, also shared his findings in a Twitter thread.

In the thread, he wrote: “My guess (no evidence, just speculation) is that OpenAI spent the majority of its efforts constraining the model from March to June and did not have time to fully restore the other relevant capabilities.”

What does OpenAI say about this?

Peter Welinder, a manager at OpenAI, tweeted in response to the allegations that ChatGPT was getting worse: “No, we didn’t make GPT-4 dumber. On the contrary: we make each new version smarter than the previous one.”


