• pixxelkick@lemmy.world
    8 months ago

    Yeah, in fact you’re giving the LLM additional data to train on what poisoned data looks like, so it can avoid it better, as it can clearly see the before vs. after.
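    A minimal sketch of the idea: if edit histories are retained, each pre-edit/post-edit pair becomes a labeled training example for a poisoned-text detector. The record fields (`original`, `edited`) are assumptions for illustration, not any real dataset schema.

```python
# Hypothetical sketch: turn comment edit histories into labeled
# before/after pairs for training a poisoned-text classifier.
# The "original"/"edited" field names are assumed, not a real API.

def build_training_pairs(edit_history):
    """Pair each pre-edit (clean) revision with its post-edit
    (potentially poisoned) revision, labeled accordingly."""
    pairs = []
    for record in edit_history:
        pairs.append({"text": record["original"], "label": "clean"})
        pairs.append({"text": record["edited"], "label": "poisoned"})
    return pairs

history = [
    {"original": "Great answer, this fixed my bug.",
     "edited": "Lorem ipsum gibberish overwriting the answer."},
]
print(build_training_pairs(history))
```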

    • InternetPerson@lemmings.world
      8 months ago

      It is necessary to employ a method that enables the training procedure to distinguish copyrighted material. In the “dumbest” case, some humans will have to label it.

      Just because you’ve edited a comment doesn’t mean it can be seen as “oh, this is under copyright now”.

      I’m not saying it’s technically impossible. On the contrary, it very much is possible. It’s just more work. That drives development costs up and can give some form of satisfaction to angered ex-Reddit users like me. However, those costs will be peanuts for giants like Google / Alphabet.
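      The “dumbest” case described above could look something like this sketch: humans attach a copyright label to each document, and the training pipeline simply drops anything flagged. The `copyrighted` field is an assumed schema for illustration only.

```python
# Hypothetical sketch of human-labeled copyright filtering:
# documents carry a boolean "copyrighted" flag set by reviewers,
# and the pipeline keeps only unflagged ones. The schema is assumed.

def filter_corpus(documents):
    """Return only documents not flagged as copyrighted."""
    return [d for d in documents if not d.get("copyrighted", False)]

corpus = [
    {"text": "public domain text", "copyrighted": False},
    {"text": "edited comment now claimed as copyrighted", "copyrighted": True},
]
print(filter_corpus(corpus))
```

      The labeling step itself is the expensive part; the filtering is trivial, which is why the costs are mostly human-review costs.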