• 1 Post
  • 14 Comments
Joined 2 years ago
Cake day: June 11th, 2023




  • Imo, the true fallacy of using AI for journalism or general text lies not so much in generative AI’s fundamental unreliability, but rather in its existence as an affordable service.

    Why would I want to parse through AI-generated text on times.com when, for free, I could speak to some of the most advanced AI on bing.com, OpenAI’s ChatGPT, Google Bard, or a Meta product? These, after all, are the back ends that most journalistic or general written-content websites are using to generate text.

    To be clear, I’m asking: why not cut out the middleman if they’re just serving me AI content?

    I use AI products frequently, and I think they have quite a bit of value. However, when I want new, accurate information on current developments, or really anything more reliable or deeper than a Wikipedia article, I turn exclusively to human sources.

    The only justification a service has for serving me AI-generated text is perhaps the promise that it has a custom-trained model with highly specific training data. I can imagine, for example, weather.com developing highly specialized AI models that tie into an in-house LLM and provide me with up-to-date, accurate weather information. The question I would have in that case is: why am I reading an article rather than just being given access to the LLM for a nominal fee? At some point, they are no longer a regular website; they are a vendor for an in-house AI.









  • This is perhaps the most significant indicator of bad faith decisions by conservatives.

    It’s like gun regulation. A functioning, pro-gun political party would propose gun-control regulations that address concerns while maintaining and satisfying the fundamentals of gun ownership. Advocacy groups, like the NRA, would then have involvement and assurance. They shouldn’t instead advocate for no solution whatsoever: the only possible result of that is an eventual critical anti-gun majority followed by blanket firearm bans, or occasional, disruptive bans on specific weapons.




  • I’ve been using LLMs a lot. I use GPT-4 to help edit articles, answer nagging questions I can’t be bothered to research, and handle other random things, such as cooking advice.

    It’s fair to say, I believe, that all general-purpose LLMs like this are plagiarizing all of the time. Much in the way my friend Patrick doesn’t give me sources for all of his opinions, GPT-4 doesn’t tell me where it got its info on baked corn. The disadvantage is that I can’t trust it any more than I can trust Patrick. When it’s important, I ALWAYS double-check. The advantage is I don’t have to take the time to compare, contrast, and discover sources. It’s a trade-off.

    From my perspective, the theoretical advantage of Bing’s or Google’s implementation is ONLY that they provide you with sources. I actually use Bing’s implementation of GPT when I want a quick, real-world reference for an answer.

    Google will be making a big mistake by sidelining its sources when open-source LLMs are already overtaking Google Bard in quality. Why get questionable advice from Google when I can get slightly less questionable advice from GPT or my phone assistant, or actual inline citations from Bing?