• 0 Posts
  • 25 Comments
Joined 11 months ago
Cake day: August 4th, 2023


  • so do some folks use opp as “opponent”? Sure, that’s believable. But I feel fairly confident…

    Bro, it doesn’t even have the right number of P’s for your reasoning to make any sense.

It comes from “opponent,” that’s why there are two P’s. It comes from video games/chess/card games/etc., where you refer to the person or persons you’re playing against as the “opponent.” It’s been happening for many years but has made its way into Gen Z slang.







  • I didn’t say any researcher or anything had named it intelligence. Nor am I trying to be semantically correct.

Read the guy’s comments. He’s trying to push the idea that we can “change” its “understanding” about the things it’s discussing. He is one of the people who has fallen for the tech bros convincing everyone it is intelligent. I’m not fighting semantics, I’m trying to explain to him that it’s not intelligent. Because he himself clearly doesn’t understand that.


  • I don’t see any reason these kinds of relationships can’t be integrated into generative AI, they just HAVEN’T yet

    No, it’s just fucking pointless. You’re talking about adding sand to a beach. These things are way more complicated and trying to shovel these things in just makes a mess. See literally the OP.

    each time you increase how the relationships interact, you’re also drastically increasing the size and complexity of the algorithm and model.

No, you’re not. Not even fucking close. You clearly don’t understand this at all.

    The ALGORITHM will always be the same. Except for new generations of these bots. Claiming adding things like racial bias is going to alter the algorithm is just nonsensical.

    The MODEL is the huge fucking corpus of internet data. Anything you tack onto it is a drop in an ocean. It’s not steering anything.

What’s changing is that they’re editing inputs, because that’s all you can really do to shift where these things go. Other changes would turn this into a very different beast, and can’t be done at a fine-grained level like “race”.

Claiming this has any significant impact on the size or complexity of any of this is just total hogwash, and you must not understand how these work or how big they are.
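To make the “editing inputs” point concrete: the weights and the generation algorithm stay fixed, and steering text is simply prepended to what the user typed before the model ever sees it. This is a minimal illustrative sketch, not any vendor’s actual API; every name in it is made up.

```python
# Hypothetical sketch of input editing: the model is untouched; only the
# text handed to it changes. Function and variable names are illustrative.

def build_input(user_prompt: str, steering_instructions: list[str]) -> str:
    """Prepend steering text to the user's prompt before generation."""
    preamble = "\n".join(steering_instructions)
    return f"{preamble}\n\nUser: {user_prompt}\nAssistant:"

final = build_input(
    "Draw a picture of a medieval knight",
    ["Depict people with a range of appearances where plausible."],
)
print(final)
```

The algorithm and the model never change here; the only lever is the string that gets fed in, which is why this kind of steering is coarse and easy to overdo.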


  • Ottomateeverything@lemmy.world to Lemmy Shitpost@lemmy.world · AI or DEI?
    edited 4 months ago

    You’re just rephrasing the same approach, over, and over, and over. It’s like you’re not even reading what I’m saying.

The answer is no. This is not a feasible approach. LLMs are just parrots and they don’t understand anything. They were essentially a “shortcut” that gets something that acts intelligent without actually having to build something intelligent. You’re not going to convince it to be intelligent. You’re not going to solve all its shortcomings by shoehorning something in. It’s just more work than building actual intelligence.

It’s like if a coastal town got overrun by flooding from a hurricane. And some guy shows up and is like “hey, I’ve got a bucket, I’ll just haul all the water back to the sea”. And I’m like “that’s infeasible, we need a different solution, your bucket even has fucking holes in it”. And you’re over here saying “well, what if we got some duct tape? And then we can patch the holes. And then we can call our friends, and we can all bucket the water”.

    It’s just not happening.

    Eh I really need to learn more about AI to understand the limits

    Yeah. This. You just keep repeating the same approach over and over without understanding or listening to the basic failings of these chat bots. It’s just not happening. You’re just perpetuating nonsense.

These things are basically slightly more complicated versions of the autocomplete in your phone keyboard. Except that they’re fed huge amounts of the internet. They get really good at parroting sentences, but they have no sense of “intelligence” or what they’re actually doing. You’re better off trying to convince your autocorrect to sound like Shakespeare than you are trying to remove failings like racial bias from things like Gemini and ChatGPT. You can chip at small corners here and there, but this is just not the path forward.
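The “amped-up autocomplete” framing can be sketched with a toy bigram model: a deliberately tiny stand-in for the next-token prediction that LLMs scale up. This is an illustration of the mechanism being described, not how any production LLM is implemented.

```python
import random
from collections import defaultdict

# Toy bigram "autocomplete": records which word tends to follow which,
# then samples continuations. There is no understanding anywhere, just
# frequency statistics over the training text.

def train(corpus: str) -> dict:
    follows = defaultdict(list)
    words = corpus.split()
    for prev, nxt in zip(words, words[1:]):
        follows[prev].append(nxt)
    return follows

def complete(follows: dict, start: str, length: int = 5, seed: int = 0) -> str:
    rng = random.Random(seed)
    out = [start]
    for _ in range(length):
        options = follows.get(out[-1])
        if not options:
            break
        out.append(rng.choice(options))
    return " ".join(out)

model = train("the cat sat on the mat and the cat ran")
print(complete(model, "the"))
```

Everything it emits is stitched together from patterns in its corpus; an LLM does the same thing with vastly more parameters and data, which is why it sounds fluent without “knowing” anything.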


  • I don’t know, maybe that would work, for this one particular problem. My point is it’s more than that. Even if you go through the trouble of fixing this one particular issue with LLMs, there are literally thousands of other problems to solve before it’s all “fixed”. At some point, when you’ve built and maintained thousands of workarounds, they start conflicting with each other and making a giant spider web of issues to juggle.

    And so you’re right back at the problem that you were trying to solve by building the LLM in the first place. This approach is just futile and nonsensical.




  • Apparently without any correction there is significant racist bias.

This doesn’t make it any less ridiculous. This is a central pillar of this kind of AI tech, and they’re trying to shove a band-aid over the most obvious example of it. Clearly, that doesn’t work. It’s also only even attempting to fix one of the “problems” - they’re never going to be able to “band-aid” every single place where the AI exhibits this problem, so it’s going to leave thousands of others unfixed. Even if their band-aid works, it only continues to mask the shortcomings of this tech and makes it less obvious to people that it’s horrendously inaccurate with the other things it does.

    Basically the AI reflects the long term racial bias in the training data. According to this BBC article it was an attempt to correct this bias but went a bit overboard.

Exactly. This is a core failing of LLM tech. It’s just going to repeat all the shit that was fed to it. You’re never going to fix that. You can attempt to steer it in different directions, but the reason this tech was used was because it is otherwise impossible for us to trudge through all the info that was fed to it. This was the only way to get it to “understand” everything. But all of its “understandings” are going to have these biases, and it’s going to be just as impossible to run through and fix all of those. It’s like you didn’t have enough metal to build the Titanic, so you built it out of Swiss cheese and are trying to duct tape one hole closed so it doesn’t sink. It’s just never going to work.

    This being pushed as some artificial INTELLIGENCE is the problem here. This shit doesn’t understand what it’s doing, it’s just regurgitating the things it’s consumed. It’s going to be exactly as flawed as whatever was put into it, and you can’t change that. The internet media it was trained on is racist, biased, full of undeniably false information, and massively swayed by propaganda on all sides of the fence. You can’t expect LLMs to do anything different when trained on that data. They’re going to have all the same problems. Asking these things to give you any information is like asking the average internet user what the answer is. And the average internet user is not very intelligent.

These are just amped-up chat bots with data being sourced from random bits of the internet. Calling them artificial INTELLIGENCE misleads people into thinking these bots are smart or have some sort of understanding of what they’re doing. They don’t. They’re just fucking internet parrots, and they don’t have the architecture to be “fixed” from having these problems. Trying to patch these problems out is a fool’s errand and only masks their underlying failings.
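The claim that the output is “exactly as flawed as whatever was put into it” boils down to a simple property of frequency models: their output distribution mirrors the corpus distribution. A contrived sketch, with deliberately skewed made-up data, shows there is nothing in the mechanism to correct the skew.

```python
from collections import Counter

# Sketch: a frequency model can only echo its corpus. If the training
# text pairs a word with one continuation 90% of the time, the model's
# output distribution reproduces that 90% verbatim. The data below is
# invented purely to illustrate the point.

corpus = ["nurse she"] * 9 + ["nurse he"] * 1  # artificially skewed
counts = Counter(pair.split()[1] for pair in corpus)
total = sum(counts.values())
dist = {word: n / total for word, n in counts.items()}
print(dist)  # {'she': 0.9, 'he': 0.1} — the skew passes straight through
```

Any attempt to “fix” this has to happen outside the model, by editing inputs or filtering outputs, which is exactly the band-aid approach criticized above.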


It seemed to me like Reddit basically drove off half their content creators AND Lemmy didn’t really steal too much of Reddit’s userbase (around 1%)

    Yeah, this has been my experience as well. Reddit likes to say “all the shitty people left and went to Lemmy”, and they seem half right - a bunch of people did leave. But most of the content of reddit is extremely stale now. And most of the shitty people that left reddit didn’t go to Lemmy, they just fucking left.

    Sure, some Lemmy instances have attracted garbage, but most of the threads here are like 1% of the amount of posts but 80% of the content I care about so it just feels more efficient. Less garbage to trudge through, especially in the comments.





  • Google Assistant is definitely getting worse and worse all the time. When the Google Homes first released they were actually pretty useful and handy. I was willing to pick a few up and they served a good purpose. They ran CIRCLES around Alexa and all those.

Now, many years later, the devices don’t hear questions correctly, I have to ask four different times, they can’t even pick up my wife’s prompt words anymore, and they don’t give reasonable answers even when they do get the question right… It’s made hundreds of dollars worth of devices infuriating and useless.

    I bought a product that worked. It no longer works because it’s been “updated”.


  • They have stability issues that manifest themselves as bad connections, poor rendering, slow responses, and dropped connections.

    Yeah, you know what’s funny? The built in systems have these problems even worse, which is exactly what drives people to use the phone based alternative instead.

    And when CarPlay and Android Auto have issues, drivers pick up their phones again, taking their eyes off the road and totally defeating the purpose of these phone-mirroring programs.

    Okay, and somehow using the built in ones, which always get cheaper hardware and less development resources, which always perform much much worse, doesn’t cause people to pick up their phones?