I've been using ChatGPT for programming for months. In the past 48 hours, GPT-4 has gone from being mostly right to almost obnoxiously wrong, like it can't remember context even a few messages deep. Is anyone else getting this?
“I appreciate your feedback. The performance of the model can vary based on various factors, including the complexity of the task, the context, and the quality of the training data. If you’ve noticed a decline in performance, it’s possible that the model’s training data or fine-tuning has been updated since my last knowledge update in January 2022. OpenAI continually works to improve its models, but there can be occasional fluctuations in performance. If you’re encountering issues, it can be helpful to provide clear and specific context to get more accurate responses. I recommend giving it some time, as updates and improvements are ongoing. If you have any specific questions or issues you’d like assistance with, please feel free to ask.”
I don’t know about recent events, or GPT-4 lately, but I really think it has gotten far worse in the latest months, around the time when concerns about protecting jobs from AI and ethical issues were arising.
I started web development in the last year and was desperately looking for a job. I learned React and Django and was really bad at those as well.
I just got a chance to land a fullstack job near my home. It required Node and Angular; they liked my interview and wanted to test me in 2 weeks. I really put in a lot of effort and hours myself, followed documentation and tutorials, but I kid you not, ChatGPT got me out of a loooot of stupid issues I could not figure out, either syntax, or a complete explanation of what I was wrong about. (I really don’t copy-paste its code, just because I needed to learn, not just complete the project.)
Landed that job, with my boss even congratulating me on my assignment, and got through my tasks better than I used to with my own code.
Lately I feel it is waaay worse. Maybe it’s me using it just to find out something I can’t already find anywhere else, but it’s not uncommon that I paste in some code, and it spits out word for word the same thing I’m telling it is not working.
Weirdly, bard seems to have gotten significantly better around the same time. Are we just getting used to the tools and there’s homogenisation of experience going on or what?!
One very concrete change I’ve noticed in the past 48 hours: when I would prompt DALL-E 3 via ChatGPT (GPT-4), I used to always get 4 images in response, but in the last day or two, it will only give 2 images max.
Obviously this isn’t an example of the model getting qualitatively worse, but it is evidence that they are definitely making changes to the live service without change notes/communication.
Yup, not just you. 10 messages deep and it loses everything.
Bing Chat’s been getting slammed since Dall-E 3’s release. I feel your pain.
For me it’s distinctly as if it cannot remember its previous states.
Maybe they reduced how long its long-term memory is?
Yep. I will prompt DALL-E something, and by the third iteration it’s forgotten the main point of the image. I asked for a cat lying in a bread bin. First it did the cat sitting up, so I asked for it lying down, then I had to ask for the cat inside the bin, then when I asked for the word “bread” on the bin it just switched to an illustration of a bread bin. It’s getting really hard to work with.