Three raccoons in a trench coat. I talk politics and furries.

https://www.youtube.com/@ragdoll_x

https://ragdollx.substack.com

https://twitter.com/x_ragdoll

  • 21 Posts
  • 50 Comments
Joined 1 year ago
cake
Cake day: June 20th, 2023

  • Ȉ̶̢̠̳͉̹̫͎̻͔̫̈́͊̑͐̃̄̓̊͘ ̶̨͈̟̤͈̫̖̪̋̾̓̀̓͊̀̈̓̀̕̚̕͘͝Ạ̶̢̻͉̙̤̫̖̦̼̜̙̳̐́̍̉́͒̓̀̆̎̔͋̏̕͝͝M̶̛̛͇̔̀̈̄̀́̃̅̆̈́͑̑͆̇ ̵̢̨͈̭͇̙̲͎͉̝͙̻̌͝I̷̡͓͖̙̩̟̫̝̼̝̪̟̔͑͒͊͑̈́̀̿̋͂̓̋̔͌̚ͅN̸̮̞̟̰̣͙̦̲̥̠͑̔̎͑̇͜͝ ̷̢̛̛͍̞̖̹̮͈͕̠̟̽̔̋̎͋͑̍̿̅̈́̋̕̚̚͜͝Y̴̧̨̨͙̗̩̻̹̦̻͎͇͈͎͓̩̐̓Ö̸͈̭̒̌̀̇͂̃͠ͅŨ̷̢̞̗͛̌͌͒̀̇́̽̓͑͝Ŕ̷͇͌ ̸̛̮̋̏̋̋̔͝W̶͔̄̐͋͑A̷̧̖̗͕̻̳͙̼͖͒L̴̩̰͙̾͑͑͑̒̏Ḻ̸̡̦̭͚̱̝̟̣̤͗̊́͐̋̈́̒͠͠͠͠͝S̸̯͚͈̠͍̆̉̑͗͊̄̒̏͆̔͊

  • Please tell me how an AI model can distinguish between “inspiration” and plagiarism, then.

    […] they just spit out something that it “thinks” is the best match for the prompt based on its training data and thus could not make this distinction in order to actively avoid plagiarism.

    I’m not entirely sure what the argument is here. Artists don’t scour the internet for any image that looks like their own drawings to avoid plagiarism, and often use photos or the artwork of others as reference, but that doesn’t mean they’re plagiarizing.

    Plagiarism is about passing off someone else’s work as your own, and image-generation models are trained with the intent to generalize - that is, to generate things they’ve never seen before, not just copy. That’s why we can create an image of an astronaut riding a horse even though the model obviously never saw one during training, and why we can teach models new concepts with methods like textual inversion or Dreambooth.
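
    The core idea behind textual inversion can be sketched in a few lines: the model’s weights stay frozen, and only a new embedding vector for a placeholder token is optimized until the model’s output matches the new concept. Here’s a toy illustration of that idea (this is not the real Stable Diffusion training loop - the “model” is just a fixed linear map standing in for the frozen network, and the target vector stands in for images of the new concept):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Frozen "model": a fixed linear map from embedding space to output space.
    # In real textual inversion this would be the frozen diffusion model.
    W = rng.normal(size=(3, 4))

    # Stand-in for "what the new concept should look like".
    target = rng.normal(size=3)

    # The ONLY trainable parameter: the embedding for the new placeholder token.
    embedding = np.zeros(4)

    lr = 0.05
    for _ in range(10_000):
        pred = W @ embedding
        grad = W.T @ (pred - target)  # gradient of 0.5 * ||W e - target||^2
        embedding -= lr * grad        # update the embedding only; W never changes

    residual = np.linalg.norm(W @ embedding - target)
    print(residual)  # the frozen "model" now reproduces the new concept
    ```

    The point of the exercise: nothing stored in the model changes - the new concept lives entirely in the learned embedding, which is why the technique works even for concepts the model never saw in training.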