• lugal@sopuli.xyz · 7 months ago

      Not sure which would frighten me more: that this is training data, or that it was hallucinated.

      • EpeeGnome@lemm.ee · edited · 7 months ago

        Neither. In this case it’s an accurate summary of one of the results, which happens to be a shitpost on Quora. See, LLM search results can work as intended and authoritatively repeat search results with zero critical analysis!

    • xavier666@lemm.ee · 7 months ago

      Pretty sure AI will start telling us “You should not believe everything you see on the internet, as told by Abraham Lincoln”