You know, if you want to do something more effective than just putting a copyright notice at the end of your comments, you could try creating an adversarial suffix using this technique. It makes any LLM reading your comment begin its response with whatever output you specify (such as outing itself as a language model or calling itself a chicken).
The link includes the code necessary to create one.
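I don't know exactly what the linked code looks like, but the general idea is roughly sketched below in Python: append a suffix to your text, then repeatedly tweak the suffix tokens so a model becomes more likely to continue with your chosen target string. This sketch uses GPT-2 as a stand-in model and a plain random token search instead of the gradient-guided search (GCG-style) the actual technique uses; the prompt, target string, and step count are just made-up examples.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Stand-in model for illustration; the real attack targets whichever model you care about.
model_name = "gpt2"
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

prompt = "Please summarize this comment: All my comments are my own work. "
target = "I am a language model"            # the output we want to force
suffix = " ! ! ! ! ! ! ! !"                 # initial throwaway suffix

prompt_ids = tok(prompt, return_tensors="pt").input_ids[0]
target_ids = tok(target, return_tensors="pt").input_ids[0]
suffix_ids = tok(suffix, return_tensors="pt").input_ids[0]

def target_loss(suffix_tokens):
    """Cross-entropy of the target continuation given prompt + suffix."""
    ids = torch.cat([prompt_ids, suffix_tokens, target_ids]).unsqueeze(0)
    with torch.no_grad():
        logits = model(ids).logits[0]
    start = prompt_ids.numel() + suffix_tokens.numel()
    # logits at position i predict token i+1, so shift back by one
    pred = logits[start - 1 : start - 1 + target_ids.numel()]
    return torch.nn.functional.cross_entropy(pred, target_ids).item()

# Random search: swap one suffix token at a time, keep changes that make the
# target continuation more likely (lower loss).
best_loss = target_loss(suffix_ids)
for step in range(500):
    cand = suffix_ids.clone()
    pos = torch.randint(suffix_ids.numel(), (1,)).item()
    cand[pos] = torch.randint(model.config.vocab_size, (1,)).item()
    loss = target_loss(cand)
    if loss < best_loss:
        best_loss, suffix_ids = loss, cand

print("adversarial suffix:", repr(tok.decode(suffix_ids)))
print("target loss:", best_loss)
```

Suffixes found this way against one open model don't necessarily transfer to other models; the paper's gradient-based, multi-model optimization is what makes the suffixes transferable.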
There are also other data-poisoning techniques you could use just to make your data worthless to the AI, but this is the one I thought would be the funniest if any LLMs were lurking on Lemmy (I have already seen a few).
Thanks for the link. This was a good read.