There is a machine learning bubble, but the technology is here to stay. Once the bubble pops, the world will be changed by machine learning. But it will probably be crappier, not better.
What will happen to AI is boring old capitalism. Its staying power will come in the form of replacing competent, expensive humans with crappy, cheap robots.
AI is defined by aggressive capitalism. The hype bubble has been engineered by investors and capitalists dumping money into it, and the returns they expect on that investment are going to come out of your pocket. The singularity is not coming, but the most realistic promises of AI are going to make the world worse. The AI revolution is here, and I don’t really like it.
You could have said the same for factories in the 18th century. But instead of the reactionary impulse to simply reject the new, we should be pushing for ways to make it work for everyone.
Everyone who died as a result of their introduction probably would say the same, yes. If corpses could speak, anyway.
Well if you can find anyone who’s died because an AI wrote an article then I’ll concede you have a point.
Did you read the whole article including the “flame bait”? The author gives an example there of someone committing suicide because an AI encouraged them…
Is that the AI’s fault, or the depressed and suicidal human’s fault?
Do you not think that the person would have committed suicide whether they asked the AI or not? The AI might have sped up the decision, but it is the human who made it.
It is not like the AI is out there trying to convince non-depressed humans to become depressed so that they go and kill themselves…
Well, in the linked article the wife of this person said that they wouldn’t have committed suicide without the AI facilitating it. So yes, I would say it is at least in part the AI’s fault. And no, I didn’t say it was the intention of the AI to do so. But that doesn’t mean it won’t do it at all.
You seem to really wanna push AI and lose your empathy over this…
If the technology actually existed to replace human workers, the human workers could chip in and buy the means of production and replace the company owners as well.
If the humans are replaced, how will they afford to buy what the company owners aren’t selling?
I don’t see how rejecting 18th century-style factories or exploitative neural networks is a bad thing. We should have the option of saying “no” to the ideas of capitalists looking for a quick buck. There was an insightful blog post that I can’t find right now…
Let's not forget all the exploitation that happened in that period too. People, even children, working endless hours for nearly no pay, losing limbs to machinery and simply getting discarded for it. Just as there is a history of technology, there is a history of it being used inequitably and even sociopathically, through greed that has no consideration for human well-being. It took a lot of fighting, often literally, to get to the point where we have some dignity, and even that is being eroded.
I get your point, it’s not the tech, it’s the system, and while I’ve lost all excitement for AI, I don’t think that genie can be put back in the bottle. But if the whole system isn’t changing, we should at least regulate the tech.
But AI will eliminate so many jobs that it will affect a lot of people, and strain the whole system even more. There isn’t a “just become a programmer” solution to AI, because even intellectually-oriented jobs are now on the line for elimination. This won’t create more jobs than it takes away.
Which shows why people are so fearful of this tech. Freeing people from manual labor to go to intellectual work was overall good, though in retrospect even then it came at a cost of passionate artisans. But now people might be “freed” from being artists to having to become sweatshop workers, who can’t outperform machines so their only option is to undercut them. Who is being helped by this?
Yes, I know about the exploitation that happened during early industrialization, and it was horrible. But if people had just rejected and banned factories back then, we’d still be living in feudalism.
I know that I don’t want to work a job that can be easily automated, but intentionally isn’t just so I can “have a purpose”.
What would happen if AI were to automate all jobs? In the most extreme case, where literally everyone lost their job, nobody would be able to buy stuff, but also, no company would be able to sell products and make a profit. Then either capitalism would collapse - or, more likely, it would adapt by implementing some mechanism such as UBI. Of course, the real effect of AI will not be quite that extreme, but it may well destabilize things.
That said, if you want to change the system, it’s exactly in periods of instability that it can be done. So I’m not going to try to stop progress and cling to the status quo out of fear of what those changes might be - instead I'll join a movement that tries to shape them.
Maybe. But generally on Lemmy I see sooo many articles about “Oh, no, AI bad”. But no good suggestions on exactly what regulations we should want.
Movements that shape changes can also happen by resisting or by popular pressure. There is no lack of well-reasoned articles about the issues with AI and how they should be addressed, or even how they should have been addressed before AI engineers charged ahead not even asking for forgiveness after also not asking for permission. The thing is that AI proponents and the companies embracing them don’t care to listen, and governments are infamously slow to act.
For all that is said of “progress”, a word with a misleading connotation, once again this technology puts wealthy people, who can build data centers for it, at an advantage over regular people, who at best can only use lesser versions of it, if even that; they might instead just receive the end result of whatever the technology owners want to offer. Like the article itself mentions, it has immense potential for advertising, scams and political propaganda. I haven’t seen AI proponents offering meaningful rebuttals to that.
At this point I’m bracing for the dystopian horrors that will come before it all comes to a head, and who knows how it might turn out this time around.
You won’t get a direct rebuttal because, obviously, an AI can be used to write ads, scams and political propaganda.
But every day millions of people are cut by knives. It hurts. A lot. Sometimes the injuries are fatal. Does that mean knives are evil and ruining the world? I’d argue not. I love my kitchen knives and couldn’t imagine doing without them.
I’d also argue LLMs can be used to fact-check and uncover scams/political propaganda/etc and can lower the cost of content production to the point where you don’t need awful advertisements to cover the production costs.
This knife argument is overused as an excuse to take no precautions about anything whatsoever. The tech industry could stand to be more responsible about what it makes rather than shrugging it off until aging politicians realize this needs to be addressed.
Using LLMs to fact-check is a flawed proposition, because ultimately what they provide are language patterns, not verified information. Never mind their many documented mistakes, it’s very easy for them to give incorrect answers that are widely repeated misconceptions. You may not blame the LLM for that, you can chalk it up to generalized ignorance, but it still falls short for this use case.
But as much as I dislike ads, that last one is part of the problem. Humans losing their livelihood. So, going back to a previous point, how does the lowered ad budget help anyone but executives and investors? The former ad workers get freed to do what? Because the ones focused on art or writing would only have a harder time making a career out of that now.
Accelerationists?