I’ve only had it really screw up once, but it was really interesting. I was asking it to calculate the probability of something, and it started calculating but got something basic wrong. In the same response, it apologized for that being wrong and started over, but made the same error. It kept looping through this while I watched. I thought it was kind of freaky that it was aware it was fucking up but kept doing the same thing over and over. I’ve never seen it disagree with itself in the same response, either.
I eventually had to hit the button to stop the generation and sent logs.
To be clear, it wasn’t “aware” of anything. I think it’s dangerous to give personhood to, and presume consciousness for, a large language model; it perpetuates a pretty insidious lie about what current so-called artificial intelligence actually is.
I found this to be a really interesting read on the topic: https://karawynn.substack.com/p/language-is-a-poor-heuristic-for