What should one do to get a warning?
I don’t know. I read the content policy, and it’s quite vague. Basically: trying to make CSAM, trying to circumvent OpenAI’s safety features, trying to pass off ChatGPT responses as valid financial, legal, or medical advice, or things like that.
All that sounds reasonable, but hardly detectable (it’s tricky to tell that someone is passing off ChatGPT responses as valid legal advice).
Do you have any idea which of your actions could potentially cause the warnings?
Do you have an app built on the OpenAI API, or do you just use the web chat yourself?
I once asked ChatGPT for a simple Python script to organize files in a directory and received a warning. Still don’t know what that was about, but I guess their detection is not perfect.
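For context, the request was for something completely mundane. A hypothetical reconstruction of that kind of script (function and folder names are my own illustration, not the exact prompt or response):

```python
# Hypothetical sketch: move each file in a directory into a subfolder
# named after its extension. Files with no extension go into "misc".
import shutil
from pathlib import Path

def organize(directory: str) -> None:
    """Group files in `directory` into subfolders by extension."""
    root = Path(directory)
    for item in list(root.iterdir()):
        if item.is_file():
            # e.g. "report.txt" -> subfolder "txt"; "Makefile" -> "misc"
            folder = root / (item.suffix.lstrip(".") or "misc")
            folder.mkdir(exist_ok=True)
            shutil.move(str(item), str(folder / item.name))
```

Hard to see what in a request like that would trip a content filter.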
wow…