Throughout history many traditions have believed that some fatal flaw in human nature tempts us to pursue powers we don’t know how to handle. The Greek myth of Phaethon told of a boy who discovers that he is the son of Helios, the sun god. Wishing to prove his divine origin, Phaethon demands the privilege of driving the chariot of the sun. Helios warns Phaethon that no human can control the celestial horses that pull the solar chariot. But Phaethon insists, until the sun god relents. After rising proudly in the sky, Phaethon indeed loses control of the chariot. The sun veers off course, scorching all vegetation, killing numerous beings and threatening to burn the Earth itself. Zeus intervenes and strikes Phaethon with a thunderbolt. The conceited human drops from the sky like a falling star, himself on fire. The gods reassert control of the sky and save the world.
Two thousand years later, when the Industrial Revolution was taking its first steps and machines began replacing humans in numerous tasks, Johann Wolfgang von Goethe published a similar cautionary tale titled The Sorcerer’s Apprentice. Goethe’s poem (later popularised as a Walt Disney animation starring Mickey Mouse) tells of an old sorcerer who leaves a young apprentice in charge of his workshop and gives him some chores to tend to while he is gone, such as fetching water from the river. The apprentice decides to make things easier for himself and, using one of the sorcerer’s spells, enchants a broom to fetch the water for him. But the apprentice doesn’t know how to stop the broom, which relentlessly fetches more and more water, threatening to flood the workshop. In panic, the apprentice cuts the enchanted broom in two with an axe, only to see each half become another broom. Now two enchanted brooms are inundating the workshop with water. When the old sorcerer returns, the apprentice pleads for help: “The spirits that I summoned, I now cannot rid myself of again.” The sorcerer immediately breaks the spell and stops the flood. The lesson to the apprentice – and to humanity – is clear: never summon powers you cannot control.
Luckily the only “AI” we have are LLMs, which seem to have hit their peak and will probably start corrupting themselves with their own training data now that they’ve scoured the web clean.
LLMs on their own aren’t much of a concern. What is a concern is strapping weapons to one of those Boston Dynamics robots, loading an LLM, and training it to kill.
Governments already kill based on metadata — analyzed by statistical models — so the above isn’t far from reality.
“Turn it on, let us kill our enemies”
immediately starts quoting Shakespeare
I am uncertain why you think an LLM would be well suited to this task - it’s an inappropriate model for that function…
An LLM = machine learning. The language part is largely irrelevant. It finds patterns in 1s and 0s, and produces results based on statistical probability. This can be applied to literally anything that can be represented in 1s and 0s (e.g. everything in the known universe).
Do you not understand how that could be used to target “terrorists”, or how it could be utilized by a killbot? They can fine-tune what metadata = “terrorist”, but (most importantly) false positives are a mathematical certainty with statistical models, meaning innocent people are guaranteed to be classified as “terrorists”. Then there’s the more pressing concern of who gets to define what a “terrorist” is.
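The false-positive point can be made concrete with a back-of-the-envelope calculation (all numbers here are hypothetical, purely for illustration): even a classifier that is right 99% of the time, applied to a large population in which real targets are rare, will flag far more innocent people than actual targets. This is the classic base-rate problem.

```python
# Toy illustration (hypothetical numbers) of why false positives dominate
# when a statistical model screens a large population for a rare class.

population = 1_000_000        # people whose metadata gets scanned
actual_positives = 100        # true "targets" in that population (rare class)

sensitivity = 0.99            # model flags 99% of true targets
false_positive_rate = 0.01    # model wrongly flags 1% of innocents

true_flags = actual_positives * sensitivity                          # 99
false_flags = (population - actual_positives) * false_positive_rate  # 9999

# Precision: of everyone flagged, what fraction is actually a target?
precision = true_flags / (true_flags + false_flags)
print(f"Innocent people flagged: {false_flags:.0f}")
print(f"Chance a flagged person is actually a target: {precision:.1%}")
```

Under these made-up numbers, roughly 10,000 innocent people get flagged for every ~100 real targets, so a flagged person is a real target only about 1% of the time. Tightening the model shifts the numbers but never gets rid of the false positives.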
LLM (Large Language Model) != ML (Machine Learning)
LLM is a subset of ML, but they are not the same
That’s quite frankly dumb as fuck. It doesn’t change anything else I wrote. Do you also go around commenting Biology ≠ Chemistry? Algebra ≠ Math? I am very smart! Give me a break, internet “smart” loser.
OMG you are a fucking top tier wanker.
https://lemm.ee/comment/14428355
I think there’s still a lot of room to grow with LLMs, but nothing will ever be 100% trustworthy. Especially the human brain.
Speek four yurselve. I’m gud.
I am speaking for myself.
Whoosh.
🤷‍♂️
The human brain has curiosity and asks questions, which is the best way to learn. The LLM has no curiosity and is just fed data, which is the worst way to learn.
The human brain is only as good as the data it has ingested. And I would argue humans are wrong more often than LLMs.
Can you provide evidence to that effect? And can you prove that what they get wrong is on the same level of error as LLMs?
Can you provide evidence to the contrary?
I’m just going to ask ChatGPT to answer you, and unless you can come up with some kind of scientific study, you’ll lose. 🤪
That’s not the way it works. It’s not my job to prove your claims are wrong.
https://en.wikipedia.org/wiki/Burden_of_proof_(philosophy)
Removed by mod
Removed by mod
Removed by mod
They do this all the time, it’s really gross, take it as a badge of honor.