I am a paying user of ChatGPT because it is invaluable for my job and personal projects in tech. I would NEVER take the output as truth without vetting the information first. If you have a problem and you aren't familiar with the commands, you can get ChatGPT to give you those commands after describing the problem and environment, but you ALWAYS research the commands to make sure they do what the AI describes. The last thing you want to have to explain to an angry client is that their system got nuked because you took the commands straight from a glorified search engine.
Asking ChatGPT for the commands that do something and then going to a search engine to make sure the supplied commands actually do the right thing sounds a bit like washing the dishes before putting them in the dishwasher, to be honest.
You gotta do what you gotta do when you have a shitty dishwasher.
There’s a great dissection of this whole thing by Legal Eagle on YT from last week, if you don’t mind going there.
Thank you, will check it out!
How stupid can you be? They were literally told ahead of time that the information was incorrect, and they doubled down anyway! Refusing to admit you're wrong is an epidemic, and owning up to mistakes seriously needs to be taught to children if we have any hope for the future.
Idk, I'm near pulling my own hair out because my son is naturally contrary and has to cut me off mid-sentence, finishing it with completely wrong or unrelated info. Granted he's 7, but he should be getting better, not worse!