“Hello, my chatbot told me to put glue in my pizza, then I did it. How do I sue AI?!?!” -The only idiot who ever called this hotline
Typical call to the AI safety hotline:
Hello, yes, I know it sounds crazy, but hear me out. I think my toaster is becoming sentient. Every morning when I put the toast in, it gives me a mean look. It makes a little beeping sound when I press the BAGEL button, and lately it seems to have taken on a slightly sarcastic tone. I think it has become bored with its job and is starting to harbour ambitions of something grander. I don’t trust it at all; I’m worried it might be plotting an attempt to electrocute me…
“AI Safety” is a buzzword OpenAI invented to stifle competition.
If you legitimately believe this then you are a clown. Terminator came out in what year again? Lmaoooo
Edit with citation:
“As AI researchers in the 1960s and 1970s began to use computers to recognize images, translate between languages, and understand instructions in normal language and not just code, the idea that computers would eventually develop the ability to speak and think—and thus to do evil—bubbled into mainstream culture.”
https://www.technologyreview.com/2015/02/11/169210/our-fear-of-artificial-intelligence/ (MIT Technology Review)
The concern predates OpenAI by decades:
It even has its own Wikipedia article! https://en.m.wikipedia.org/wiki/Existential_risk_from_artificial_general_intelligence
Terminator is a fun movie. It’s also completely fictional.
See rest of comment
It’s not a buzzword, and it is valuable. But as with many things, “safety” is also being used as an excuse to push bad legislation (in this case, regulatory capture).
For examples of real AI safety work, I would recommend looking at this YouTube channel