I think AI is neat.

  • KeenFlame@feddit.nu · +24/−5 · 5 months ago

    Been destroyed for this opinion here. Not many practitioners here, just laymen and mostly techbros in this field… But maybe I haven't found the right node?

    I'm into local diffusion models and open source LLMs only, not into the megacorp stuff.

    • webghost0101@sopuli.xyz · +15/−2 · 5 months ago (edited)

      If anything, people really need to start experimenting beyond talking to it like it's human, or in a few years we will end up with a huge AI-illiterate population.

      I've had someone stubbornly fight me, calling local LLMs "an overhyped downloadable chatbot app" and saying the people on fossai are just a bunch of AI-worshipping fools.

      I was like: tell me you know absolutely nothing about what you're talking about while pretending to know everything.

      • KeenFlame@feddit.nu · +9/−2 · 5 months ago

        But the thing is, it's really fun and exciting to work with, and the open source community is extremely nice and helpful: one of the least toxic fields I have dabbled in! It's very fun to test parameters and tools and to write code chains to try different stuff, and it's come a long way. It's rewarding too, because you get really fun responses.

        • Fudoshin ️🏳️‍🌈@feddit.uk · +3 · 5 months ago

          Aren't the open source LLMs still censored, though? I read an off-hand comment that one of the big ones (Ollama or something?) was censored past version 1, so you couldn't ask it to tell you how to make meth.

          I don't wanna make meth, but if OSS LLMs are already being censored, it makes having a local one pretty fucking pointless, no? You may as well just use ChatGPT. Pray tell me your thoughts?

          • Kittenstix@lemmy.world · +3 · 5 months ago

            Could be legal issues: if an LLM tells you how to make meth but gets a step or two wrong, and following it results in your death, there might be a case for the family to sue.

            But I also don't know what all you mean when you say censorship.

            • Fudoshin ️🏳️‍🌈@feddit.uk · +4/−1 · 5 months ago

              But i also don’t know what all you mean when you say censorship.

              It was literally just that. The commenter I saw said something like "it's censored after ver 1, so don't expect it to tell you how to cook meth."

              But when I hear the word "censored" I think of all the stuff ChatGPT refuses to talk about. It won't write jokes about protected groups, plus VAST swathes of stuff around them. Even asking it to define "fag-got" can make it cough and refuse, even though it's a British foodstuff.

              Blocking anything sexual - so no romantic/erotica novel writing.

              The latest complaint about ChatGPT is its laziness, which I can't help feeling is due to over-zealous censorship. Censorship doesn't just block the specific things but entirely innocent things too (see "fag-got" above).

              Want help writing a book about Hitler being seduced by a Jewish woman, with BDSM scenes? No chance. No talking about Hitler, sex, Jewish people or BDSM. That's censorship.

              I'm using these as examples. I've no real interest in them, but I am affected by the annoyances of having to reword requests because they've been misinterpreted as touching on censored subjects.

              Just take a look at r/ChatGPT and you'll see endless posts by people complaining they triggered its censorship with asinine prompts.

              • Kittenstix@lemmy.world · +1/−1 · 5 months ago

                Oh OK, then yeah, that's a problem. Any censorship that's not directly related to liability issues should be nipped in the bud.

          • webghost0101@sopuli.xyz · +5 · 5 months ago

            Depends on who made the model and how. Llama is a Meta product and it's genuinely really powerful (I wonder where Zuckerberg gets all the data for it).

            Because it's powerful, you see many people use it as a starting point to develop their own AI ideas and systems. But it's not the only decent open source model, and innovations that work for one model often work for all the others, so it doesn't matter in the end.

            Every single model used now will be completely outdated and forgotten in a year or two. Even GPT-4 and Gemini.

    • Redacted@lemmy.world · +8/−1 · 5 months ago

      Have you ever considered you might be, you know, wrong?

      No, sorry, you're definitely 100% correct. You hold a well-reasoned, evidenced scientific opinion; you just haven't found the right node yet.

      Perhaps a mental gymnastics node would suit sir better? One without all us laymen and tech bros clogging up the place.

      Or you could create your own instance populated by AIs where you can debate them about the origins of consciousness until androids dream of electric sheep?

      • KeenFlame@feddit.nu · +1/−1 · 4 months ago

        Do you even understand my viewpoint?

        Why only personal attacks and nothing else?

        You obviously have hate issues, which is exactly why I have a problem with techbros explaining why llms suck.

        They haven’t researched them or understood how they work.

        It’s a fucking incredibly fast developing new science.

        Nobody understands how it works.

        It's so silly to pretend to know how badly it works when people working with these models daily keep discovering new ways the technology surprises us. Idiotic to be pessimistic about such a field.

        • Redacted@lemmy.world · +1/−1 · 4 months ago (edited)

          You obviously have hate issues

          Says the person who starts chucking out insults the second they get downvoted.

          From what I gather, anyone who disagrees with you is a tech bro with issues, which is pathetic to the point that it barely warrants a response, but here goes…

          I think I understand your viewpoint. You like playing around with AI models and have bought into the hype so much that you’ve completely failed to consider their limitations.

          People do understand how they work; it’s clever mathematics. The tech is amazing and will no doubt bring numerous positive applications for humanity, but there’s no need to go around making outlandish claims like they understand or reason in the same way living beings do.

          You consider intelligence to be nothing more than parroting which is, quite frankly, dangerous thinking and says a lot about your reductionist worldview.

          You may redefine the word “understanding” and attribute it to an algorithm if you wish, but myself and others are allowed to disagree. No rigorous evidence currently exists that we can replicate any aspect of consciousness using a neural network alone.

          You say pessimistic, I say realistic.

          • KeenFlame@feddit.nu · +1/−1 · 4 months ago

            Haha, it's pure nonsense. Just do a little digging instead of doing exactly the guesstimation I am talking about. You obviously don't understand the field.

            • Redacted@lemmy.world · +1/−1 · 4 months ago

              Once again not offering any sort of valid retort, just claiming anyone that disagrees with you doesn’t understand the field.

              I suggest you take a cursory look at how to argue in good faith, learn some maths and maybe look into how neural networks are developed. Then study some neuroscience and how much we comprehend the brain and maybe then we can resume the discussion.

              • KeenFlame@feddit.nu · +1/−1 · 4 months ago

                You attacked my viewpoint but misunderstood it. I corrected you. Now you tell me my viewpoint is wrong (it's not, btw) and start going down the idiotic path of bad-faith conversation, strawmanning your own bad-faith accusation, only because you're butthurt that you didn't understand. Childish approach.

                You don’t understand, because no expert currently understands these things completely. It’s pure nonsense defecation coming out of your mouth

                • Redacted@lemmy.world · +2/−1 · 4 months ago (edited)

                  You don’t really have one lol. You’ve read too many pop-sci articles from AI proponents and haven’t understood any of the underlying tech.

                  All your retorts boil down to copying my arguments because you seem to be incapable of original thought. Therefore it’s not surprising you believe neural networks are approaching sentience and consider imitation to be the same as intelligence.

                  You seem to think there’s something mystical about neural networks but there is not, just layers of complexity that are difficult for humans to unpick.

                  You argue like a religious zealot or Trump supporter because at this point it seems you don’t understand basic logic or how the scientific method works.

    • LainTrain@lemmy.dbzer0.com · +3/−2 · 5 months ago

      Everything on the dbzer0 instance is pro open source and pro piracy, so fairly anti-corpo and not tech illiterate.

  • Starkstruck@lemmy.world · +33/−2 · 5 months ago

    I feel like our current “AIs” are like the Virtual Intelligences in Mass Effect. They can perform some tasks and hold a conversation, but they aren’t actually “aware”. We’re still far off from a true AI like the Geth or EDI.

    • R0cket_M00se@lemmy.world · +6 · 5 months ago

      I wish we called them VIs. It was a good distinction of their abilities.

      Though honestly I think our AI is more advanced at conversation than a VI in ME.

      • banneryear1868@lemmy.world · +7 · 5 months ago

        "AI" is always reserved for the latest tech in this space; the previous gens get called what they are. LLMs will be what these are called once a new iteration is out.

    • Nom Nom@lemmy.world · +6 · 5 months ago

      This was the first thing that came to my mind as well and VI is such an apt term too. But since we live in the shittiest timeline Electronic Arts would probably have taken the Blizzard/Nintendo route too and patented the term.

  • Honytawk@lemmy.zip · +3/−10 · 5 months ago (edited)

    For General AI to work, we first need the computer to be able to communicate properly with humans, to understand them and to convey themselves in an understandable way.

    LLM is just that. It is the first step towards General AI.

    It is already a great tool for programmers, which means programming anything, including new AI, will only get exponentially faster.

    • OhNoMoreLemmy@lemmy.ml · +3 · 5 months ago

      Why is this the first step and not any of the other things that have been around for years?

      We have logic reasoning in the form of prolog, bots that are fun to play against in computer games, computers that can win in chess and go against the best players in the world, and computer vision is starting to be useful.

    • Holzkohlen@feddit.de · +4 · 5 months ago

      it is already a great tool for programmers. Which means programming anything, including new AI, will only go exponentially faster.

      Yes to it being a tool. But right now all it can really do is bog-standard stuff. Also, have you read that the use of GitHub Copilot seems to reduce the quality of code? That means we cannot yet rely on this type of technology. Again, it's a limited tool and that is it. At least for now.

  • H4rdStyl3z@lemmy.ml · +1 · 5 months ago

    How do you know for sure your brain is not doing exactly the same thing? Hell, being autistic, many social interactions are just me trying to guess what will get me approval without any understanding lol.

    Also really fitting that Photon chose this for a placeholder right now:

  • AwkwardLookMonkeyPuppet@lemmy.world · +5/−8 · 5 months ago

    I think AI is the single most powerful tool we've ever invented, and it is completely changing the world now and will continue to. But you'll get nothing but hate and "iTs Not aCtuaLly AI" replies here on Lemmy.

    • naevaTheRat@lemmy.dbzer0.com · +13 · 5 months ago

      Umm penicillin? anaesthetic? the Haber process? the transistor? the microscope? steel?

      I get it, the models are new and a bit exciting, but GPT won't make it so you can survive surgery, or make rocks take the jobs of computers.

      • GeneralVincent@lemmy.world · +2/−3 · 5 months ago

        Very true and valid. Though, devil's advocate for a moment: AI is great at discovering new ways to survive surgery and other cool stuff. Of course it uses existing scientific discoveries to do that, but still. It could be the tool that finds the next big thing on the penicillin, anaesthesia, Haber process, transistor, microscope, steel list, which is pretty cool.

        • naevaTheRat@lemmy.dbzer0.com · +4/−1 · 5 months ago

          Is it? This seems like a big citation needed moment.

          Have LLMs been used to make big strides? I know some trials are going on aiding doctors in diagnosis and such, but computer vision algorithms have been doing that for ages (shit, contrast dyes, PCR, and blood analysis also do that, really), and they come with their own risks. We haven't seen widespread unknown illnesses being discovered or anything. Is the tech actually doing anything useful at the moment, or is it all still hype?

          We've had algorithms help find new drugs, or plot out synthetic routes for novel compounds; we can run DFT simulations to help determine whether we should try to make a material. These things have been helpful but not revolutionary, so I'm not sure why LLMs would be. I actually worry they'll hamper scientific progress by aiding fraud (unreproducible results are already a fucking massive problem), or by extremely convincingly lying or omitting something when used to help with a literature review.

          Why do you think LLMs will revolutionise science?

            • naevaTheRat@lemmy.dbzer0.com · +2/−1 · 5 months ago

              This seems like splitting hairs. AGI doesn't exist, so that can't be what they mean. AI applies to everything from pathing algorithms for library robots to computer vision, and none of those seem to apply.

              The context of this post is LLMs and their applications

              • A_Very_Big_Fan@lemmy.world · +1 · 5 months ago

                The comment you replied to first said “AI”, not “LLMs”. And he even told you himself that he didn’t mean LLMs.

                I'm not saying he's right, though, because afaik AI hasn't made any noteworthy progress in medical science. (Although a quick skim through Google suggests there has been.) I'm just saying that's clearly not what he said.

                • naevaTheRat@lemmy.dbzer0.com · +1 · 5 months ago

                  I thought they were saying they didn't mean that LLMs will aid science, not that LLMs weren't the topic. It's ambiguous on reread.

                  AI isn't well defined, which is what I was highlighting with the mentions of computer vision etc.; that falls into AI, and it isn't really meaningfully different from other diagnostic tools. If people mean AGI then they should say that, but it hasn't even been established that it's likely possible, let alone that we're close.

                  There are already many other intelligences on the planet, and not many are very useful outside of niches. Even if we make a general intelligence, it's entirely possible we won't be able to surpass fish level, let alone human, for example. And even then it's not clear that intelligence is the primary barrier in anything, which was what I was trying to point out in my science-held-back post.

                  There are so many ifs. AGI is a "Venus is cloudy -> dinosaurs" discussion: you can project anything you like onto it, but it's all just fantasy.

          • GeneralVincent@lemmy.world · +2/−1 · 5 months ago

            why do you think LLMs will revolutionise science

            Idk, it probably won't. That wasn't exactly what I was saying, but I'm also not an expert in any scientific field, so that's my bad for unintentionally contributing to the hype by implying AI is more capable than it currently is or has the potential to be.

            • naevaTheRat@lemmy.dbzer0.com · +3 · 5 months ago

              Fair enough. I used to be a scientist (a very bad one that never amounted to anything), and my perspective has been that the major barriers to progress are:

              • We've just got all the low-hanging fruit
              • Science education isn't available to many people, so perspectives are quite limited
              • Power structures are exploitative and ossified, driving away many people
              • Industry has too much influence; there isn't much appetite to fund blue-sky projects without obvious short-term money-earning applications
              • Patents slow progress
              • Publish-or-perish incentivises excessive volumes of publication, fraud, and splitting discoveries into multiple papers, which increases the burden on researchers to stay current
              • Nobody wants to pay scientists, so bright people end up elsewhere

  • antidote101@lemmy.world · +66/−5 · 5 months ago

    I think LLMs are neat, and Teslas are neat, and HHO generators are neat, and aliens are neat…

    …but none of them live up to all of the claims made about them.

    • ALostInquirer@lemm.ee · +7 · 5 months ago

      HHO generators

      …What are these? Something to do with hydrogen? Despite it not making sense for you to write it that way if you meant H2O, I really enjoy the silly idea of a water generator (as in, making water, not running off water).

      • antidote101@lemmy.world · +10 · 5 months ago (edited)

        HHO generators are a car mod that some backyard scientists got into, but they didn't actually work. They involve cracking hydrogen from water to make explosive gasses that some claimed could make your car run faster. There are lots of YouTube videos of people playing around with them. Kinda dangerous-seeming… Still neat.

  • Saledovil@sh.itjust.works · +8 · 5 months ago

    I once ran an LLM locally using KoboldAI. Said thing has an option to show the alternative tokens for each token it puts out, and what their probability of being chosen was. Seeing this shattered the illusion for me that these things are really intelligent. There's at least one more thing we need to figure out before we can build an AI that is actually intelligent.

    It’s cool what statistics can do, though.
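
To make the "statistics" point concrete: under the hood, the model scores every candidate next token and a softmax turns those scores into the probabilities a viewer like KoboldAI's displays. A minimal sketch, with made-up logits for a hypothetical prompt (not output from any real model):

```python
import math

def softmax(logits):
    """Convert raw model scores (logits) into a probability distribution."""
    m = max(logits.values())  # subtract the max for numerical stability
    exps = {tok: math.exp(v - m) for tok, v in logits.items()}
    total = sum(exps.values())
    return {tok: e / total for tok, e in exps.items()}

# Hypothetical logits for candidate next tokens after "The capital of France is"
logits = {" Paris": 9.1, " Lyon": 4.2, " the": 3.0, " a": 2.5}
probs = softmax(logits)

# Show candidates ranked by probability, like the alternative-token view
for tok, p in sorted(probs.items(), key=lambda kv: -kv[1]):
    print(f"{tok!r}: {p:.3f}")
```

The model then samples one token from this distribution and repeats; that loop is the whole generation process.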

    • AlolanYoda@mander.xyz · +1 · 5 months ago

      That's actually pretty neat. I tried KoboldAI a few months ago but the novelty wore off quickly. You made me curious; I'm going to check out that option once I get home. Is it just a toggleable option, or do you have to mess with some hidden settings?

  • ComradeChairmanKGB@lemmygrad.ml · +3/−1 · 5 months ago

    Alternatively we could call things what they are. You know, cause if we ever have actual AI we kind of need the term to be intact and not watered down by years of marketing bullshit or whatever else.

    • TimeSquirrel@kbin.social · +2/−1 · 5 months ago

      There are specific terms for what you're talking about already. AI covers all the ML algorithms we are integrating into daily life, and AGI is human-level AI able to create its own subjective experience.

    • Exocrinous@lemm.ee · +2/−2 · 5 months ago

      Alexa is AI. She's artificially intelligent. More so than an ant or a pigeon, and I'd call those animals pretty smart.

    • Poik@pawb.social · +9/−6 · 5 months ago

      … Alexa literally is AI? You mean to say that Alexa isn't AGI. AI is the taking of inputs and outputting something rational. The first AIs were just large if-else constructions, called first-order logic. Later AI utilized approximate or brute-force state calculations such as probabilistic trees or minimax search. AI controls how people's lines are drawn in popular art programs such as Clip Studio when they use the assist functions. But none of these AIs could tell me something new, only what they're designed to compute.

      The term AI is a lot more broad than you think.
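
The minimax search mentioned above fits in a few lines, which is the point: it counts as classic AI despite being a simple exhaustive procedure. A toy sketch over a made-up game tree (nested lists, leaves are payoffs for the maximizing player):

```python
def minimax(node, maximizing):
    """Exhaustive minimax over a toy game tree given as nested lists.
    Leaves (numbers) are payoffs for the maximizing player."""
    if isinstance(node, (int, float)):  # leaf node: return its payoff
        return node
    # Recurse into children, alternating between max and min players
    values = [minimax(child, not maximizing) for child in node]
    return max(values) if maximizing else min(values)

# Depth-2 toy tree: the maximizer moves first, then the minimizer replies.
tree = [[3, 5], [2, 9], [0, 7]]
print(minimax(tree, maximizing=True))  # → 3
```

The maximizer picks the branch whose worst-case reply is best, which here is the first branch (the minimizer would answer 3 there, versus 2 or 0 elsewhere).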

      • BellyPurpledGerbil@sh.itjust.works · +12/−1 · 5 months ago

        The term AI being used by corporations isn’t some protected and explicit categorization. Any software company alive today, selling what they call AI, isn’t being honest about it. It’s a marketing gimmick. The same shit we fall for all the time. “Grass fed” meat products aren’t actually 100% grass fed at all. “Healthy: Fat Free!” foods just replace the fat with sugar and/or corn syrup. Women’s dress sizes are universally inconsistent across all clothing brands in existence.

        If you trust a corporation to tell you that their product is exactly what they market it as, you're just gullible. It's forgivable. But calling something AI when it's clearly not, as if the term were so broad it could apply to any old if-else chain of logic, is proof that their marketing worked exactly as intended.

        • Poik@pawb.social · +1 · 5 months ago

          The term AI is older than the idea of machine learning. AI is a rectangle where machine learning is a square. And deep learning is a unit square.

          Please, don’t muddy the waters. That’s what caused the AI winter of 1960. But do go after the liars. I’m all for that.

        • QuaternionsRock@lemmy.world · +5/−4 · 5 months ago

          I still don't follow your logic. You say that GPT has no ability to problem-solve, yet it clearly has the ability to solve problems. Of course it isn't infallible, but neither is anything else with the ability to solve problems. Can you explain what you mean here in a little more detail?

          One of the most difficult problems that AI attempts to solve in the Alexa pipeline is, "What is the desired intent of the received command?" To give an example of the purpose of this question, as well as how Alexa may fail to answer it correctly: I have a smart bulb in a fixture, and I gave it a human name. When I say, "Alexa, make Mr. Smith white," one of two things will happen, depending on the current context (probably including previous commands, tone, etc.):

          1. It will change the color of the smart bulb to white
          2. It will refuse to answer, assuming that I’m asking it to make a person named Josh… white.

          It’s an amusing situation, but also a necessary one: there will always exist contexts in which always selecting one response over the other would be incorrect.
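
The shape of that disambiguation can be sketched with an entirely hypothetical rule-based resolver. Real assistants use trained models and richer context rather than string matching; the function and device names here are made up for illustration:

```python
def resolve_intent(command, device_names):
    """Toy intent resolver: decide whether 'make X white' targets a known
    smart device, or should be refused as possibly referring to a person."""
    words = command.lower().split()
    if words[:1] == ["make"] and words[-1:] == ["white"]:
        target = " ".join(words[1:-1])  # everything between "make" and "white"
        if target in device_names:
            return ("set_color", target, "white")
        return ("refuse", target, None)  # unknown target: assume it's a person
    return ("unknown", None, None)

devices = {"mr. smith"}  # hypothetical: the bulb was given a human name
print(resolve_intent("make Mr. Smith white", devices))  # targets the bulb
print(resolve_intent("make Josh white", devices))       # refused
```

A fixed rule like this always resolves the same way, which is exactly the limitation described above: there will always be contexts where the rule picks the wrong reading.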

          • ☭ SaltyIceteaMaker ☭@iusearchlinux.fyi · +5/−2 · 5 months ago (edited)

            See, that's hard to define. What I mean is things like reasoning and understanding. Let's take your example as an… example. Obviously you can't turn a person white, so they probably mean the LED. Now, you could ask if they meant the LED, but it's not critical, so just do it and the person will complain if it's wrong. Thing is, yes, you can train an AI to act like this, but in the end it doesn't understand what it's doing, only (maybe) whether it did it right or wrong. ChatGPT doesn't understand what it's saying. It cannot grasp concepts; it can only try to emulate understanding, although it doesn't know how, or even what understanding is. In the end it's just a question of the complexity of the algorithm (cause we are just algorithms too), and I wouldn't consider current "AI" complex enough to be called intelligent.

            (Sorry if this is a bit on the low-quality side in terms of readability and grammar, but it was hastily written under a bit of time pressure.)

            • QuaternionsRock@lemmy.world · +4/−2 · 5 months ago (edited)

              Obviously you can’t turn a person white so they probably mean the led.

              This is true, but it still has to distinguish between facetious remarks and genuine commands. If you say, “Alexa, go fuck yourself,” it needs to be able to discern that it should not attempt to act on the input.

              Intelligence is a spectrum, not a binary classification. It is roughly proportional to the complexity of the task and the accuracy with which the solution completes the task correctly. It is difficult to quantify these metrics with respect to the task of useful language generation, but at the very least we can say that the complexity is remarkable. It also feels prudent to point out that humans do not know why they do what they do unless they consciously decide to record their decision-making process and act according to the result. In other words, when given the prompt “solve x^2-1=0 for x”, I can instinctively answer “x = {+1, -1}”, but I cannot tell you why I answered this way, as I did not use the quadratic formula in my head. Any attempt to explain my decision process later would be no more than an educated guess, susceptible to similar false justifications and hallucinations that GPT experiences. I haven’t watched it yet, but I think this video may explain what I mean.

              Edit: this is the video I was thinking of, from CGP Grey.

              • ☭ SaltyIceteaMaker ☭@iusearchlinux.fyi · +5/−1 · 5 months ago (edited)

                Hmm, it seems like we have different perspectives. For example, I cannot do something I don't understand, meaning if I do a calculation in my head I can tell you exactly how I got there, because I have to think through every step of the process. This starts with something as simple as 9 + 3, where I have to actively think about the calculation. It goes like this in my head: 9 + 3… take 1 from 3, add it to 9 = 10 + 2 = 12. This also applies to more complex things, which on one hand means I am regularly slower than my peers, but I understand more stuff than they do.

                So I think, because of our different… thinking (?), we both lack a critical part needed to understand each other's viewpoint.

                Anyhow, back to AI.

                Intelligence is a spectrum, not a binary classification

                Yeah, that's the problem: where does the spectrum start? Like, I wouldn't call a virus, a bacterium or a single cell intelligent, yet somehow a bunch of them are arguing about what intelligence is. I think this is just a case of how you define intelligence, which varies from person to person. Also, I agree that LLMs are unfathomably complex. However, I wouldn't classify them as intelligent, yet. In any case, it was an interesting and fun conversation to have, but I will end it here and go to sleep. Thanks for having an actual formal disagreement and not just immediately going for insults. Have a great day/night.

                • Poik@pawb.social · +1 · 5 months ago (edited)

                  And I wouldn't call a human intelligent if TV were anything to go by. Unfortunately, humans do things they don't understand constantly and confidently. It's commonplace, and you could call it "fake it until you make it", but a lot of the time it's people thinking they understand something.

                  LLMs do things confident that they will satisfy their fitness function, but they do not have the ability to see farther than that at this time. Just sounds like politics to me.

                  I’m being a touch facetious, of course, but the idea that the line has to be drawn at that term, intelligence, is a bit too narrow for me. I prefer the terms Artificial Narrow Intelligence and Artificial General Intelligence, as they are better defined. Narrow refers to AI designed for one task and one task only, such as LLMs, which are designed to minimize a loss function of people accepting the output as “acceptable” language, a highly volatile target. AGI, or Strong AI, is AI that can generalize outside of its targeted fitness function, and do so continuously. By that I don’t mean a computer vision neural network that can classify anomalies as something the car should stop for. That’s out-of-distribution reasoning, sure, but if the network can reasonably determine what is in bounds as part of its loss function, then anything that falls significantly outside can easily be flagged. That’s not true generalization, more domain recognition, but it is important in a lot of safety-critical applications.
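                  The ‘minimize a loss function’ idea can be made concrete with a toy sketch: gradient descent nudging one parameter toward whatever the loss rewards, and nothing beyond that (all numbers and names here are illustrative, not from any real model):

```python
# Toy example: fit y = w * x by gradient descent on a squared-error loss.
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # generated by y = 2x

def loss(w):
    # mean squared error over the data
    return sum((w * x - y) ** 2 for x, y in data) / len(data)

def grad(w):
    # derivative of the loss with respect to w
    return sum(2 * x * (w * x - y) for x, y in data) / len(data)

w = 0.0
for _ in range(200):
    w -= 0.05 * grad(w)  # step downhill on the loss

print(round(w, 3))  # converges toward 2.0
```

Nothing in this loop ‘sees’ anything outside the loss; the parameter only moves where the gradient points, which is the narrowness being described.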

                  This is an important conversation to have though. The way we use language is highly personal, based upon our experiences, and that makes coming to an understanding in natural languages hard. Constructed languages aren’t the answer, because any language in use undergoes change. If the term AI is to change, people will have to understand that the scientific term will not, and pop-sci magazines WILL get harder to understand. That’s why I propose splitting the ideas in a way that allows for more nuanced discussions, instead of redefining a term that appears in thousands of groundbreaking research papers spanning a century, which would make research a matter of historical linguistics as well as one of mathematical understanding. Jargon is already hard enough as it is.

      • Holzkohlen@feddit.de
        link
        fedilink
        arrow-up
        8
        arrow-down
        2
        ·
        5 months ago

        The term AI is a lot more broad than you think.

        That is precisely what I dislike. It’s kinda like calling those crappy scooter thingies “hoverboards”. It’s just a marketing term. I simply oppose the use of “AI” for the weak kinds of AI we have right now and I’d prefer “AI” to only refer to strong AI. Though that is of course not within my power to force upon people and most people seem to not care one bit, so eh 🤷🏼‍♂️

        • Poik@pawb.social
          link
          fedilink
          arrow-up
          1
          ·
          5 months ago

          The term AI is older than the idea of machine learning. AI is a rectangle where machine learning is a square. And deep learning is a unit square.
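          The rectangle/square analogy is just set containment, which can be stated directly (the example labels below are illustrative, not an exhaustive taxonomy):

```python
ai = {"symbolic planning", "expert systems", "machine learning", "deep learning"}
ml = {"machine learning", "deep learning"}
dl = {"deep learning"}

# every deep learning method is ML, every ML method is AI; the reverse fails
assert dl < ml < ai  # strict subsets
print(ai - ml)  # the AI that isn't machine learning
```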

          Please, don’t muddy the waters. That’s what caused the first AI winter in the 1970s. But do go after the liars. I’m all for that.

    • A_Very_Big_Fan@lemmy.world
      link
      fedilink
      English
      arrow-up
      3
      arrow-down
      4
      ·
      5 months ago

      Nobody is claiming there is problem solving in LLMs, and you don’t need problem solving skills to be artificially intelligent. The same way a knife doesn’t have to be a Swiss army knife to be called a “knife.”

      • Cringe2793@lemmy.world
        link
        fedilink
        arrow-up
        3
        ·
        5 months ago

        I mean, people generally don’t have problem solving skills, yet we call them “intelligent” and “sentient” so…

        • A_Very_Big_Fan@lemmy.world
          link
          fedilink
          English
          arrow-up
          2
          ·
          5 months ago

          I just realized I interpreted your comment backwards the first time lol. When I wrote that I had “people don’t have issues with problem solving” in my head

        • A_Very_Big_Fan@lemmy.world
          link
          fedilink
          English
          arrow-up
          3
          ·
          5 months ago

          There’s a lot more to intelligence and sentience than just problem solving. One of them is recalling data and effectively communicating it.

        • Obi@sopuli.xyz
          link
          fedilink
          arrow-up
          3
          ·
          5 months ago

          Dude you gave me a heart attack, I was like NO WAY that came out in 2004. It didn’t, it was 2014, which is still like probably twice as old as I would’ve guessed but not as bad.

          And yes it is a fantastic movie, go watch it if you haven’t seen it.

          • Landless2029@lemmy.world
            link
            fedilink
            arrow-up
            1
            ·
            5 months ago

            Hahaha I’ll fix it

            Unfortunately, they made it seem like a horror movie in the trailers when it’s more of a dramatic thriller. Interesting watch, and it even holds up well today TEN YEARS LATER.

    • Kedly@lemm.ee
      link
      fedilink
      arrow-up
      3
      ·
      5 months ago

      Next you’ll tell me that the enemies I face in video games aren’t real AI either!

    • Gabu@lemmy.ml
      link
      fedilink
      arrow-up
      12
      arrow-down
      4
      ·
      5 months ago

      That was never the goal… You might as well say that a bowling ball will never be effectively used to play golf.

        • Cringe2793@lemmy.world
          link
          fedilink
          arrow-up
          1
          ·
          5 months ago

          They did, I think. It’s most of the general public that don’t know. CEOs just take advantage of this to sell shit.

      • Jack@slrpnk.net
        link
        fedilink
        arrow-up
        10
        arrow-down
        1
        ·
        5 months ago

        I agree, but it’s so annoying when you work in IT and your non-IT boss thinks AI is the solution to every problem.

        At my previous work I had to explain to my boss at least once a month why we can’t have AI diagnosing patients (at a dental clinic) or reading scans or proposing dental plans… It was maddening.

        • Daefsdeda@sh.itjust.works
          link
          fedilink
          arrow-up
          4
          arrow-down
          1
          ·
          5 months ago

          I find that these LLMs are great tools for a professional. So no, you still need the professional, but it is handy if an AI could say, “please check these places.” A tool, not a replacement.

  • BaumGeist@lemmy.ml
    link
    fedilink
    arrow-up
    9
    arrow-down
    2
    ·
    5 months ago

    I’ve known people in my life who put less mental effort into living their lives than LLMs put into crafting the most convincing lies you’ve ever read.