It just feels too good to be true.

I’m currently using it for formatting technical texts and it’s amazing. It can’t generate them properly on its own, but if I give it the bulk of the info it makes them pretty af.

I also just talk to it and ask for advice on the most random kinds of issues. It gives seriously good advice. But it makes me worry that I’m volunteering my personal problems and innermost thoughts to a company that will misuse them.

Are these concerns valid?

  • Big P@feddit.uk · 1 year ago
    • it’s expensive to run; OpenAI is subsidising it heavily, and that will come back to bite us in the ass soon
    • it can be both intentionally and unintentionally biased
    • the text it generates has a certain style to it that can be easy to pick up on
    • it can mix made-up information with real information
    • it’s a black box
    • Feyter@programming.dev · 1 year ago

      Did we mention that it’s a closed-source, proprietary service controlled by a single company that can dictate the terms of its usage?

      • TehPers@beehaw.org · 1 year ago

        LLMs as a whole exist outside OpenAI, but ChatGPT does run exclusively on OpenAI’s services. And Azure I guess.

        • Feyter@programming.dev · 1 year ago

          Exactly. ChatGPT is just the most prominent service using an LLM. I’d be less concerned about the hype if all the free training data from thousands of users went back into an open system.

          Maybe AI isn’t stealing our jobs, but if you end up depending on it to stay competitive at your job, it would be good if it weren’t controlled by a single company…