Researchers found that ChatGPT’s performance varied significantly over time, showing “wild fluctuations” in its ability to solve math problems, answer questions, generate code, and do visual reasoning between March and June 2023. In particular, ChatGPT’s accuracy on one math test (identifying prime numbers) dropped drastically, from over 97% in March to just 2.4% in June. ChatGPT also stopped explaining its reasoning for answers over time, making it less transparent. While ChatGPT became “safer” by declining to engage with sensitive questions, the researchers note that providing less rationale limits understanding of how the AI works. The study highlights the need to continuously monitor large language models to catch performance drift over time.
As someone getting an MBA who hates the idea of labor being displaced by AI: if I were an unethical business owner who treated labor as a cost to minimize, I’d use AI to generate content that’s “good enough,” then keep a handful of people around to polish it to my exact specification.
And that’s exactly how it will be used.
I think that’s part of what the Hollywood writers’ strike is about: AI generating “good enough” scripts, with studios shelling out peanuts for a few writers to finalize them.
You know, I wouldn’t care about being replaced by a machine, as long as I get UBI. Then I could just do what I like to do and wouldn’t need to care whether I actually make money with it.
That’s not how UBI is supposed to work. You would certainly have enough time to do what you like, just not the resources. Any money you’d get would only cover the absolute necessities like shelter and food.
According to who? Who defines what a “basic necessity” is? It could easily be argued that hobbies are a necessity.
You uh… you might have chosen the wrong field if you hate displacing labour
Or the right one if I want to “be the change I want to see in the world”.