• Ragdoll X@lemmy.world · 9 months ago (edited)

    Please tell me how an AI model can distinguish between “inspiration” and plagiarism then.

    […] they just spit out something they “think” is the best match for the prompt based on their training data, and thus could not make this distinction in order to actively avoid plagiarism.

    I’m not entirely sure what the argument is here. Artists don’t scour the internet for images that resemble their own drawings in order to avoid plagiarism, and they often use photos or other people’s artwork as reference, but that doesn’t mean they’re plagiarizing.

    Plagiarism is about passing off someone else’s work as your own. Image-generation models, by contrast, are trained with the intent to generalize - that is, to generate things they’ve never seen before, not just to copy. That’s why we’re able to create an image of an astronaut riding a horse, even though the model obviously would never have seen one, and why we’re able to teach models new concepts with methods like textual inversion or Dreambooth.
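    To make the “teaching new concepts” point concrete, here is a toy sketch of the idea behind textual inversion. This is not the real method or any real library’s API - the frozen linear map `W`, the learning rate, and the iteration count are all invented for illustration. The key property it demonstrates is that the model’s weights stay frozen and only a single new token embedding is optimized, so the new concept is expressed through what the model already generalizes over:

    ```python
    import numpy as np

    # Toy stand-in for a frozen generative model: a fixed linear map W that
    # turns a token embedding into an "image feature". (Hypothetical; a real
    # diffusion model is vastly more complex.)
    rng = np.random.default_rng(0)
    W = rng.normal(size=(8, 4))            # frozen model weights - never updated

    # Feature of a new concept we want the model to reproduce. Made reachable
    # on purpose: it is W applied to some unknown embedding.
    target = W @ rng.normal(size=4)

    # Textual-inversion-style training: optimize ONLY the new token embedding v.
    v = np.zeros(4)
    lr = 0.05
    for _ in range(2000):
        err = W @ v - target               # frozen model's prediction error
        v -= lr * (W.T @ err)              # gradient step on the embedding alone

    final_loss = float(np.sum((W @ v - target) ** 2))
    print(final_loss)                      # converges toward zero
    ```

    The point of the sketch: nothing about `W` changed, yet the frozen model now renders a concept it was never trained on, because the new embedding locates that concept inside the space the model already learned to generalize over.
    
    
    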

    • irmoz@reddthat.com · 9 months ago

      Both the astronaut and the horse are plagiarised from different sources; it’s definitely “seen” both before.

    • Sylvartas@lemmy.world · 9 months ago

      I get your point, but as soon as you ask them to draw something that has been drawn before, all the AI models I’ve fiddled with tend to effectively plagiarize the hell out of their training data unless you jump through hoops to tell them not to.