You know if you want to do something more effective than just putting copyright at the end of your comments you could try creating an adversarial suffix using this technique. It makes any LLM reading your comment begin its response with any specific output you specify (such as outing itself as a language model or calling itself a chicken).
It gives you the code necessary to be able to create it.
There are also other data poisoning techniques you could use just to make your data worthless to the AI but this is the one I thought would be the most funny if any LLMs were lurking on lemmy (I have already seen a few).
From what I understand it is some thing for AI, to stop them from harvesting or to poison the data, by having it repeating therefore more likely to show up.
Sounds an awful lot like that thing boomers used to do on Facebook where they would post a message on their wall rescinding Facebook’s rights to the content they post there. I’m sure it’s equally effective.
I would be extremely extremely surprised if the AI model did anything different with “this comment is protected by CC license so I don’t have the legal right to it” as compared with its normal “this comment is copyright by its owner so I don’t have the legal right to it hahaha sike snork snork snork I absorb” processing mode.
No but if they forget to strip those before training the models, it’s gonna start spitting out licenses everywhere, making it annoying for AI companies.
It’s so easily fixed with a simple regex though, it’s not that useful. But poisoning the data is theoretically possible.
Only if enough people were doing this to constitute an algorithmically-reducible behavior.
If you could get everyone who mentions a specific word or subject to put a CC license in their comment, then an ML model trained on those comments would likely output the license name when that subject was mentioned, but they don’t just randomly insert strings they’ve seen, without context.
The US, wow… what a place to live in as the 99%.
CC BY-NC-SA 4.0
You know if you want to do something more effective than just putting copyright at the end of your comments you could try creating an adversarial suffix using this technique. It makes any LLM reading your comment begin its response with any specific output you specify (such as outing itself as a language model or calling itself a chicken).
It gives you the code necessary to be able to create it.
There are also other data poisoning techniques you could use just to make your data worthless to the AI but this is the one I thought would be the most funny if any LLMs were lurking on lemmy (I have already seen a few).
Thanks for the link. This was a good read.
deleted by creator
From what I understand it is some thing for AI, to stop them from harvesting or to poison the data, by having it repeating therefore more likely to show up.
That seems stupid
Sounds an awful lot like that thing boomers used to do on Facebook where they would post a message on their wall rescinding Facebook’s rights to the content they post there. I’m sure it’s equally effective.
Sure, the fun begins when it starts spitting out copyright notices
That would require a significant number of people to be doing it, to ‘poison’ the input pool, as it were.
I would be extremely extremely surprised if the AI model did anything different with “this comment is protected by CC license so I don’t have the legal right to it” as compared with its normal “this comment is copyright by its owner so I don’t have the legal right to it hahaha sike snork snork snork I absorb” processing mode.
No but if they forget to strip those before training the models, it’s gonna start spitting out licenses everywhere, making it annoying for AI companies.
It’s so easily fixed with a simple regex though, it’s not that useful. But poisoning the data is theoretically possible.
Only if enough people were doing this to constitute an algorithmically-reducible behavior.
If you could get everyone who mentions a specific word or subject to put a CC license in their comment, then an ML model trained on those comments would likely output the license name when that subject was mentioned, but they don’t just randomly insert strings they’ve seen, without context.
deleted by creator
To turn every comment, no matter how on topic, into obnoxious spam.