AI companies could probably fly under the radar if they just didn’t do stupid stuff like this, but they just have to push the boundaries. If they had made any number of fake voices it would have been fine, but no, they had to do a celebrity. I hope they lose. Stupid, stupid, stupid marketing department.
Actually, I think this is a legally very interesting area.
At the end of the day, AIs are just fancy imitations. Nobody would sue someone for imitating a voice, as long as it’s not impersonation (in the legal sense).
I think you misunderstand something. The same thing many AI enthusiasts and critics often choose not to understand. Generative AIs aren’t just born from plain code, and they don’t just imitate. They use a ton of data as reference points. It’s literally in the name of the technology.
You could claim “well, maybe they used different voices and mixed them together,” but that is highly unlikely given how much of a wild-west approach most generative AI services take. It’s more likely they used protected property here in a way it was not intended to be used, in which case SJ does indeed have a legal case here.
> They use a ton of data as reference points. It’s literally in the name of the technology.

Reference is the wrong word.
They learn the patterns that exist in data and are able to predict future patterns.
They don’t actually reference the source material during generation (barring overfitting, which can happen and is roughly akin to a human memorizing something and reproducing it).
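To make the "patterns, not references" point concrete, here's a toy sketch (a character-level bigram model, not how real generative models work, but the same principle): the trained "model" is just a table of transition counts, and the source text is nowhere inside it. Yet when the training corpus is tiny, the overfitting case, its output is stitched together from recognizable fragments of the source.

```python
from collections import defaultdict
import random

def train_bigram(text):
    """Count character-to-character transitions; the 'model' is only these counts."""
    counts = defaultdict(lambda: defaultdict(int))
    for a, b in zip(text, text[1:]):
        counts[a][b] += 1
    return counts

def generate(model, start, n, seed=0):
    """Sample n characters by following the learned transition statistics."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(n):
        nxt = model.get(out[-1])
        if not nxt:
            break
        chars, weights = zip(*nxt.items())
        out.append(rng.choices(chars, weights=weights)[0])
    return "".join(out)

corpus = "the cat sat on the mat and the dog sat on the log"
model = train_bigram(corpus)

# The model stores only pairwise counts, never the corpus string itself...
assert corpus not in repr(dict(model))
# ...but with such a tiny corpus, samples are stitched from source fragments:
print(generate(model, "t", 20))
```

The scale of the corpus relative to the model's capacity is what separates "learned the statistics" from "memorized the text."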
Whether or not the copyrighted data shows up in the final model is utterly irrelevant, though. It is illegal to use copyrighted material outside of fair use, period, and this is most certainly not fair use. This is civil law, not criminal: the standard is “more likely than not” rather than “beyond a reasonable doubt.” If a company cannot provide reasonable evidence that they created the model entirely with material they own the rights to use for that purpose, then it is a violation of the law.
Math isn’t a person, doesn’t learn in anything approaching the same way beyond some unrelated terminology, and has none of the legal rights that we afford to people. If it did, then this would by definition be a kidnapping and child abuse case, not a copyright case.
> It is illegal to use copyrighted material outside of fair use, period, and this is most certainly not.

Yeah, it is. Even assuming fair use applies, fair use is largely a question of how much a work is transformed, and (a billion images) -> AI model is just about the most transformative use case out there.
And this assumes it matters when they’re literally not copying the original work (barring overfitting). It’s a public internet download; the “copy” is made by Facebook or whoever you uploaded the image to.
The model doesn’t contain the original artwork or parts of it. Stable Diffusion literally has about one byte per image of training data.
The number of bytes per image doesn’t necessarily mean there’s no copying of the original data. There are examples of some images being “compressed” (lossily) by Stable Diffusion; in that case the images were specifically sought out, but I think it does show that overfitting is an issue, even if the model is small enough to ensure it doesn’t overfit for every image.
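The "one byte per image" figure is back-of-the-envelope capacity arithmetic. Both numbers below are approximations: Stable Diffusion v1's U-Net has on the order of 860 million parameters, and the LAION set it was trained on holds roughly 2 billion images.

```python
# Rough capacity arithmetic behind the "about one byte per image" claim.
# Both figures are approximate, not exact release numbers.
params = 860_000_000           # ~Stable Diffusion v1 U-Net parameter count
bytes_per_param = 2            # fp16 storage
images = 2_000_000_000         # ~LAION-2B training-set size

model_bytes = params * bytes_per_param
bytes_per_image = model_bytes / images
print(f"{bytes_per_image:.2f} bytes of model capacity per training image")
# Under one byte per image: far too little to store every image, though,
# as noted above, that alone doesn't rule out memorizing some of them.
```

The point of the arithmetic is average capacity; it says nothing about whether a few frequently duplicated images got memorized anyway.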
It’s a hard one. You train a general AI and ask for a story idea, that’s not a huge deal, IMO. You ask it to write in the style of George R.R. Martin or something, that’s something different. Yes, you can do it by hand too, but these tools make it easier than ever.
Then the sub-questions… Is it okay to do it for free? What if you distribute it? What if you charge for it? All questions that these AI companies are just ignoring when they potentially have massive ramifications.
Making a random avatar is fine. Using ScarJo is iffy even if you’re doing it for free. What if you’re streaming on Twitch with her? What if you’re charging to use her likeness on Twitch, where the users will make money? I don’t know the answers to any of those.
But why would any of that amount to fraud? Where exactly does it become “too much” AI?
I find it very iffy to argue that writing in the style of someone else is illegal. That’s a perfectly normal thing to do. Maybe AI makes it easier, but if an action is not illegal, why would doing the same thing tool-assisted be illegal? That doesn’t make sense.
I think that writing in someone else’s style to an extent that it becomes very obvious is indeed something that raises copyright concerns.
No, how would it? You can’t copyright a style.
Well, there have been several music lawsuits over how similar certain songs are to others. If you were to write something so close to another author that you are imitating something like trademark mannerisms, there may be a case for that.
Identity and style are two completely different things, though.
In literature, there are no trademark mannerisms.
I mean, it depends on where they are from. If they are from the US or Europe, they would be fucking idiots, but if they are Chinese, Russian, etc., they are basically untouchable and it will merely be a game of whack-a-mole.
Edit: welp, did a Whois on their website and it seems it’s from Arizona. So yeah, never mind my top comment; if this is truly a company stationed in Arizona, they really fucked up.
The website is hosted out of Arizona, but that is about it.
You can get the full company name from their privacy policy here: https://lisaai.app/privacy.html
If you Google the name, the company is registered in Istanbul: https://www.firmabulucu.com/isletme/convert-yazilim-limited-sirketi/
I think it’s inevitable.
The bad actors stealing data to train their apps don’t seem to have an adequate understanding of the implications of their actions. They’re just looking to make a quick buck and run.
Bring on the lawsuits.
For sure, they’ve been just scraping everything for so long without a care. It’s about time they started facing the consequences.