I’m curious what the warnings were for. I’ve never gotten a warning, and I didn’t know there was such a thing.
I don’t remember what the first one was for (it may have been something to do with children, but I’m really not sure; this happened about two months ago), but the second one may have been because I implied the death of a person in a house fire. I think that’s a bit unfair, given that it was a fictional scenario being discussed.
How is the AI supposed to know if the person asking questions has good intentions?
If it answers “hypothetically, how would I get away with killing my (fill in the blank)?”, then it has told you how to do it.
Now every criminal can add “hypothetically” to any criminal question.