What I don’t miss from Reddit is the bot comments. Not the novelty bots that reply if your comment is in alphabetical order or something like that, but actual ChatGPT responses to regular posts or comments.
I have no idea what the point of them is, but they’re awful.
u/OP: “What’s a good mouse for a mix of productivity and gaming”
u/DefinitelyNotABot: “A good mouse for a mix of productivity and gaming is something you should be looking for if you need a mouse that is good for work and play. A good mouse for productivity and gaming will have a good balance of performance and features. Fortunately, finding a good mouse for a mix of productivity and gaming is not difficult due there being plenty of mouse options available to you[…]”
What’s the point of those types of bots, though? I guess they’re just trying to build vaguely genuine-looking accounts to use for some future purpose, maybe political or advertising related.
My best guess is karma farming so the accounts can be sold off. They probably create hundreds of them. If some brand buys a reputable-looking account for $20-$100 to use for astroturfing, that’s a pretty good deal for both sides. Shitty deal for everyone else.
That and building convincing personas for propaganda and PR. An account agreeing with pro-corporate policies is more convincing if it has a long post history.
Running such a bot with an intentionally underpowered language model that has been trained to mimic a specific Reddit subculture is good clean absurdist parody comedy fun if done up-front and in the open on a sub that allows it, such as r/subsimgpt2interactive, the version of r/subsimulatorgpt2 that is open to user participation.
But yeah, fuck those ChatGPT bots. I recently posted on r/AITAH and the only response I got was obviously from a large language model… it was infuriating.
Those were getting so out of hand.
They’ll appear here sooner or later too; sadly, that’s a reality all social media will have to live with from now on. No filter can detect AI replies for long.
First rule of AITAH:
If the story isn’t completely made up, no one will believe it’s true.