• Stefen Auris@pawb.social · 1 year ago

      I guess shoving an encyclopedia into it. I’m not sure really, it is a good point. Perhaps AI bias is as inevitable as human bias…

      • interolivary@beehaw.org · 1 year ago

        Despite what you might assume, an encyclopedia wouldn’t be free from bias. It might not be as biased as, say, getting your training data from a dump of 4chan, but it’d absolutely still have bias. As an on-the-nose example, think about how homosexuality is defined: an older encyclopedia might describe it as a crime, and an AI trained on that encyclopedia would repeat it as fact.

        • RickRussell_CA@beehaw.org · 1 year ago

          And imagine how badly most encyclopedias would reflect on languages and cultures other than the one that made them.

    • RickRussell_CA@beehaw.org · 1 year ago

      Well, you could focus on rule-based/expert-system-style AI, à la WolframAlpha: actually build algorithms that answer questions from scientific fact and theory, rather than from an approximated consensus of many sources of dubious origin.
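      To make the contrast concrete, here is a minimal sketch of that expert-system idea (the rule names and structure are hypothetical, not WolframAlpha’s actual design): answers come from explicit, auditable formulas, and the system refuses to answer rather than guess.

      ```python
      # Hypothetical sketch of a rule-based "expert system" answerer.
      # Each rule encodes an established formula; there is no statistical
      # consensus involved, so the answer is reproducible and auditable.

      RULES = {
          # Kinetic energy: E = 1/2 * m * v^2  (joules)
          "kinetic_energy": lambda m, v: 0.5 * m * v ** 2,
          # Ideal gas pressure: P = nRT / V, with R = 8.314 J/(mol*K)
          "ideal_gas_pressure": lambda n, T, V: n * 8.314 * T / V,
      }

      def answer(query: str, **params) -> float:
          if query not in RULES:
              # Unlike an LLM, the system declines instead of confabulating.
              raise KeyError(f"no rule for {query!r}")
          return RULES[query](**params)

      print(answer("kinetic_energy", m=2.0, v=3.0))  # 0.5 * 2 * 3^2 = 9.0
      ```

      The trade-off, of course, is coverage: every rule has to be written and vetted by hand, which is exactly why this approach fell out of favor.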

      • parlaptie@feddit.de · 1 year ago

        Ooo, old school AI 😍

        In our current cultural consciousness, I’m not sure that even qualifies as AI anymore. It’s all about neural networks and machine learning nowadays.

    • FaceDeer@kbin.social · 1 year ago

      Does there have to be an alternative? It’d be nice if there were one, of course, but this is currently the only way we know of to build these AIs.

    • radix@lemm.ee · 1 year ago

      The alternative is being extremely careful about what data you allow the LLM to learn from. It would still end up with your bias, but hopefully a less flagrantly racist one.