• PenguinTD@lemmy.ca · 15 points · 1 year ago

    I didn’t watch; someone please give a TLDW, ’cause I highly suspect this is just gonna be a waste of time.

    • Melody Fwygon@lemmy.one · 21 points · 1 year ago

      TL;DR: It suggests several methods and makes a few mistakes, which he has to point out, and to which it then responds with even more absurd solutions.

      The AI recommends doing things in long, difficult ways and doesn’t conceive of any new or novel technology; it just mashes together existing ones, even where their implementation would be difficult or impossible, waving those issues away with lines like “Much research and development would be needed, but…”

      • PenguinTD@lemmy.ca · 29 points · 1 year ago

        So, similar to, say, a redditor trying to sound smart by googling and debating another one while neither has any qualifications on the topic. Got it.

        • Laneus@beehaw.org · 8 points · 1 year ago

          I wonder how much of that is just an inherent part of how neural networks behave, and how much LLMs only do because they learned it from humans.

          • Kata1yst@kbin.social · 5 points · 1 year ago

            More the latter. Neural networks have been used fairly successfully in biomed for about a decade now. Look into the use of genetic algorithms there, where we are effectively harnessing the power of evolution to discover new therapies, and in many cases new uses for existing (approved) drugs (rough sketch of that loop below).

            But ChatGPT has no way to test or improve any “designs”; it simply uses existing indexed data to infer what you want to hear as best it can. The goal is to sound smart, not to be smart.
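
            Not how ChatGPT works, and not a real biomed pipeline, but a toy sketch of the genetic-algorithm loop described above; the bit-string “genome” and fitness function here are made up purely for illustration:

            ```python
            # Illustrative only: a toy genetic algorithm showing the select/crossover/mutate
            # loop. Real biomed pipelines score candidate molecules or therapies instead of
            # this made-up bit-string target.
            import random

            TARGET = [1] * 20          # hypothetical "ideal" genome for this toy problem
            POP_SIZE = 50
            MUTATION_RATE = 0.05
            GENERATIONS = 100

            def fitness(genome):
                # Count how many positions match the target (higher is better).
                return sum(g == t for g, t in zip(genome, TARGET))

            def crossover(a, b):
                # Single-point crossover: splice two parents into one child.
                point = random.randint(1, len(a) - 1)
                return a[:point] + b[point:]

            def mutate(genome):
                # Flip each bit with a small probability.
                return [1 - g if random.random() < MUTATION_RATE else g for g in genome]

            def evolve():
                population = [[random.randint(0, 1) for _ in TARGET] for _ in range(POP_SIZE)]
                for gen in range(GENERATIONS):
                    # Selection: keep the fitter half of the population as parents.
                    population.sort(key=fitness, reverse=True)
                    parents = population[: POP_SIZE // 2]
                    # Reproduction: refill the population with mutated crossovers.
                    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                                for _ in range(POP_SIZE - len(parents))]
                    population = parents + children
                    if fitness(population[0]) == len(TARGET):
                        return gen, population[0]
                return GENERATIONS, max(population, key=fitness)

            if __name__ == "__main__":
                generation, best = evolve()
                print(f"Best genome after {generation} generations: {best}")
            ```

            The point being: every candidate gets tested against a fitness function and the feedback actually improves the next generation, which is exactly the loop an LLM chat session doesn’t have.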

        • Hexorg@beehaw.org · 2 points · 1 year ago

          That’s actually a decently good analogy, though a random redditor is still smarter than ChatGPT because they can actually analyze Google results, not just pattern-match situations and stitch them together.

  • Jordan Lund@lemmy.one · 5 points · 1 year ago

    tl;dr - If AI doesn’t directly try to kill us, it may try to trick us into killing ourselves by building its ideas.

    • manitcor@lemmy.intai.tech (OP) · 3 points · 1 year ago

      Aside from misinfo, this is more of why they want to moderate some responses; someone is going to blow themselves up using a recipe it gives them.

  • don@lemm.ee · 2 points · 1 year ago

    Coulda skipped right past fission and told it to design a fusion reactor, but since it can’t get the nuclear reactor right, might as well never mind.