• groet@feddit.de · 10 months ago

    Repeat after me:

    “Current AI is not a knowledge tool. It MUST NOT be used to get information about any topic!”

    If your child is learning Scottish history from an AI, you have failed as a teacher/parent. This isn’t even about bias, just about what an AI model is. It isn’t supposed to be correct; that’s not what it’s for. It’s for appearing as correct as the things it was trained on. And as long as there are two opinions in the training data, the AI will gladly make up a third.
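
    To make that concrete, here is a toy sketch of a single next-token step. Nothing below is a real model; the candidate “year” tokens and their logit scores are invented for illustration. The point is that sampling picks continuations by probability mass learned from training data, not by truth.

    ```python
    import math
    import random

    # Invented logits for three candidate "year" tokens following the
    # prompt "The Battle of Bannockburn was fought in". The model only
    # knows which continuations looked likely in its training data,
    # not which one is historically true.
    logits = {"1314": 2.0, "1297": 1.7, "1066": 0.4}

    def softmax(scores):
        # Turn raw scores into a probability distribution.
        total = sum(math.exp(s) for s in scores.values())
        return {tok: math.exp(s) / total for tok, s in scores.items()}

    probs = softmax(logits)
    token = random.choices(list(probs), weights=list(probs.values()))[0]
    print(probs)   # ~{'1314': 0.51, '1297': 0.38, '1066': 0.10}
    print(token)   # usually '1314', but sometimes a confident wrong answer
    ```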

    • GregorGizeh@lemmy.zip · 10 months ago

      That doesn’t matter, though. People will use it to acquire knowledge regardless; they’re already doing it now. Which is why it’s so dangerous to let these “moderate” inaccuracies slide.

      You even summed up perfectly why that is: LLMs are built to give a possibly-correct answer in the most convincing way possible.

    • Skull giver@popplesburger.hilciferous.nl · 10 months ago

      Fancy autocomplete may be nothing more than fancy autocomplete at the moment, but that hasn’t stopped lawyers, college students, and kids everywhere from taking anything it shits out as factual.

      The “most of what this says is complete nonsense” disclaimers are small (except at OpenAI, where they take the form of popups that I suspect most users dismiss without reading), and the AI query box sits right next to the actual web search box.

      Most of the world has no idea how AI works, what a training set looks like, or what sampling temperature implies. People do believe everything AI says, and from what I can tell it’s only getting worse.
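
      For anyone curious, temperature is just a knob on the sampling step, not magic. A rough sketch with invented scores (real models do this over tens of thousands of candidate tokens):

      ```python
      import math

      def softmax_t(logits, temperature):
          # Divide logits by the temperature before normalizing:
          # low T sharpens the distribution (near-greedy, repetitive),
          # high T flattens it (more varied, more prone to drift).
          scaled = [l / temperature for l in logits]
          m = max(scaled)  # subtract the max for numerical stability
          exps = [math.exp(s - m) for s in scaled]
          total = sum(exps)
          return [e / total for e in exps]

      logits = [2.0, 1.0, 0.1]  # invented scores for three candidate tokens
      for t in (0.2, 1.0, 2.0):
          print(t, [round(p, 3) for p in softmax_t(logits, t)])
      # 0.2 [0.993, 0.007, 0.0]    almost deterministic
      # 1.0 [0.659, 0.242, 0.099]  the raw distribution
      # 2.0 [0.502, 0.304, 0.194]  flatter, chancier
      ```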