Looks like this happened to someone else too
I just noticed sometimes it says ‘bad thing or a bad thing’. Creepy.
I’ve only had it really screw up once, but it was really interesting. I was asking it to calculate the probability of something and it started calculating but got something basic wrong. In the same response, it apologized for that being wrong and started over, but made the same error. It kept looping through this while I watched. I thought it was kind of freaky that it was aware it was fucking up but kept doing the same thing over and over. I’ve never seen it disagree with itself in the same response, either.
I eventually had to hit the button to stop the generation and sent logs.
To be clear, it wasn’t “aware” of anything. I think it’s dangerous to give personhood to and presume consciousness for a large language model, and it perpetuates a pretty insidious lie about what current so-called artificial intelligence means.
https://karawynn.substack.com/p/language-is-a-poor-heuristic-for I found this to be a really interesting read on the topic!
This. Cannot. Continue.
If anyone wants to break ChatGTP, the easiest way right now is to ask it to do riddles until it gets stuck on the fire or water riddles. They’re extremely similar, and extremely typical of a generic riddle format, and ChatGTP… mixes up the two all the time.
new language model: GTP
I’m calling bullshit. What was your system prompt, temperature setting, etc.? The whole “beginning” of this convo is missing, not to mention you wrote it out instead of screenshotting the actual convo.
If you can provide any credible evidence, then I’d consider it. Until then this is just an attempt at getting internet attention.
[Screenshots: Screenshot_20230807_214259_Mull, Screenshot_20230807_214307_Mull, Screenshot_20230807_214339_Mull]
There’s your fucking proof.
Initially I was bored yesterday, and for shits and giggles I was asking it when color was invented in the real world, as I had been watching a black and white movie a couple hours prior with my uncle. We were going back and forth when it mentioned there are legitimately people who think that color wasn’t a thing in reality, which is when I said that’s hilarious, and out of nowhere it went on this tangent. I copied it immediately because I couldn’t figure out how to post multiple screenshots on Lemmy, and then I deleted the conversation once I realized it wouldn’t break out of this loop. So those are the only screenshots I have. Those above are what I wrote out from the transcript.
Edit: Another screenshot, from after I cut off the feed and before I deleted the convo.
Sorry, I’m not about unsubstantiated claims in the era of misinformation and clout-chasing karma farmers lol
Totally my fault.
Anyway, where’s the start of the conversation? Did you start it out like that, or was there more?
In my experience, GPT only gets like this when it’s feeding back off what a user told it to do. It’s not out-of-the-box behavior, and that experience includes chat plus heavy daily API usage across a diversity of conversational topics via a custom-made voice assistant setup.
r/iamverysmart
Go back to Reddit please lmao
Yeah you too asshole.
Ty
Man wants proof for a silly little ai chat 💀💀
I just like calling out attention seekers
Yes because I would absolutely waste my fucking time to get attention with something as retarded as this.
Your stupid-ass response is what redditors would say, you fucking hypocrite.
You’re cussing out some stranger on the internet for saying they didn’t believe your text post without proof
Touch grass
I had fucking proof, I’m sorry I couldn’t satisfy your itch for more, dipshit.
You probably should keep it quiet on this one, mate.
Nice grave dig
You and son eat body you and son eat body you and son eat body you and son eat body you and son eat body you and son eat body
Did anyone talk to Talk To Transformer? I think that used GPT-2. It used to lose the thread, jump topics, and get stuck in loops just like this.
All work and no play makes Jack go crazy. All work and no play makes Jack go crazy. All work and no play…
No TV and no beer make Homer something something
Go crazy?
Well don’t mind if I doooooo!
This is extremely dangerous for our democracy.
Sounds a lot like older GPT-3
OpenAI trying to get more money by increasing the token count