I had a friend whose niece, an American, was able to travel to use these. It was a difficult and expensive path to research and obtain these services, but it definitely helped them a lot.
Man, reading the Hacker News comments is grim. A deeply cynical and shallow series of takes on an interesting subject.
Imo, the true fallacy of using AI for journalism or general text lies not so much in generative AI’s fundamental unreliability, but rather in its existence as an affordable service.
Why would I want to parse through AI-generated text on times.com when, for free, I could speak to some of the most advanced AI on bing.com, OpenAI’s ChatGPT, Google Bard, or a Meta product? These, after all, are the back ends that most journalistic or general written-content websites are using to generate text.
To be clear, I ask why not cut out the middleman if they’re just serving me AI content.
I use AI products frequently, and I think they have quite a bit of value. However, when I want new accurate information on current developments, or really anything more reliable or deeper than a Wikipedia article, I turn exclusively to human sources.
The only justification a service has for serving me AI-generated text is perhaps the promise that they have a custom-trained model with highly specific training data. I can imagine, for example, weather.com developing specialized models which tie into an in-house LLM and provide me with up-to-date and accurate weather information. The question I would have in that case is: why am I reading an article rather than just being given access to the LLM for a nominal fee? At some point, they are no longer a regular website; they are a vendor for an in-house AI.
This article is actually an interesting critique of the original study’s analysis, suggesting that such an effect perhaps doesn’t actually exist, or at least was not demonstrated scientifically.
I keep reading his name as David D Pepe in my head. Far too fitting.
My hope is that the mechanization of the written word / artistry will result in such a deluge of low-tier nonsense that the people of earth will just stop using the Internet.
Then it can just be me and you ❤️
Interesting! I didn’t follow this case, but I do remember Kevin Spacey posting a very strange video a ways back in which he acted… very creepy about the situation.
Anyone following the case have any thoughts?
The best thing about Ben Shapiro is that each day I share on this planet with him is one less day I need to coexist with Ben Shapiro.
Fuck Republicans, but just for a sanity check, is it normal to say “people of color?” As in, “The judicial system is biased against people of color.” That’s in my verbal lexicon, and I’m suddenly questioning it.
Slurs are so interesting, existing on a broad, shifting scale based on contextual usage. I think it’s interesting, for example, that “handicapped” has become a slur in my lifetime through its general misuse.
In a world where arguably the second most advanced LLM on the planet (either GPT-3.5 or Bing’s OpenAI implementation) is completely free to use, why would I want to read anything on your website that wasn’t researched by a human?
I wish I could sear this question into every CEO’s brain.
This is perhaps the most significant indicator of bad faith decisions by conservatives.
It’s like gun regulation. A functioning, pro-gun political party would propose gun control regulations which address concerns while maintaining and satisfying the fundamentals of gun ownership. Advocacy groups, like the NRA, would then have involvement and assurance. They shouldn’t instead advocate for no solution whatsoever, the only possible result of which will be an eventual critical anti-gun majority followed by blanket firearm bans, or occasional, disruptive bans on specific weapons.
This is an interesting take. I suppose in hindsight it was naive of us to think the government wouldn’t catch on and track / tax it.
I’ve been using LLMs a lot. I use GPT-4 to help edit articles, answer nagging questions I can’t be bothered to research, and other random things, such as cooking advice.
It’s fair to say, I believe, that all general-purpose LLMs like this are plagiarizing all of the time. Much in the way my friend Patrick doesn’t give me sources for all of his opinions, GPT-4 doesn’t tell me where it got its info on baked corn. The disadvantage of this is that I can’t trust it any more than I can trust Patrick. When it’s important, I ALWAYS double-check. The advantage is I don’t have to take the time to compare, contrast, and discover sources. It’s a trade-off.
From my perspective, the theoretical advantage of Bing’s or Google’s implementation is ONLY that they provide you with sources. I actually use Bing’s implementation of GPT when I want a quick, real-world reference for an answer.
Google will be making a big mistake by sidelining its sources when open-source LLMs are already overtaking Google Bard in quality. Why get questionable advice from Google when I can get slightly less questionable advice from GPT, my phone assistant, or actual inline citations from Bing?
Fun fact: be careful around the exposed roots of fallen trees, especially if people are messing around nearby. There can be a lot of tension stored in the roots (or in gravity) trying to stand even a long-dead stump back up, and if something gives, you can become trapped under the tree.