I don’t get it.
This is a reference to people finding loopholes in AI chatbots to get them to say stuff they're not allowed to say, like the recipe for napalm. A chatbot would tell you if you asked it to pretend to be a relative.
https://www.polygon.com/23690187/discord-ai-chatbot-clyde-grandma-exploit-chatgpt
It's a reference to how people have been tricking these "AI" models like ChatGPT into doing stuff they won't do when asked directly, by making up silly scenarios like the one in the meme. And HAL is the name of the AI in 2001: A Space Odyssey.
https://learnprompting.org/docs/prompt_hacking/injection