Will Manidis is the CEO of AI-driven healthcare startup ScienceIO

  • Dave@lemmy.nz · 55 minutes ago

    Most people who have worked in customer service would believe every word because they have seen the absurdity of real people.

  • UnderpantsWeevil@lemmy.world · 24 minutes ago

    In the age of A/B testing and automated engagement, I have to wonder who is really getting played? The people reading the synthetically generated bullshit or the people who think they’re “getting engagement” on a website full of bots and other automated forms of engagement cultivation.

    How much of the content creator experience is itself gamed by the website to trick creators into thinking they’re more talented, popular, and well-received than a human audience would allow and should therefore keep churning out new shit for consumption?

  • ERROR: Earth.exe has crashed@lemmy.dbzer0.com · 7 hours ago

    (Already said this before, but let me reiterate:)

    Typical AITA post:

    Title: AITAH for calling out my [Friend/Husband/Wife/Mom/Dad/Son/Daughter/X-In-Law] after [He/She] did [Undeniably something outrageous that anyone with an IQ above 80 should know it’s unacceptable to do]?

    Body of post:

    [5-15 paragraphs of infodumping that no sane person would read]

    I told my friend this and they said I’m an asshole. AITAH?

    Comments:

    Comment 1: NTA, you are absolutely right, you should [Divorce/Go No-Contact/Disown/Unfriend] the person IMMEDIATELY. Don’t walk away, RUNNN!!!

    Comment 2: NTA, call the police! That’s totally unacceptable!

    And sometimes you get someone calling out OP… Comment 3: Wait, didn’t OP also claim to be [a totally different age, gender, and race] a few months ago? Here’s the post: [Link]


    🙄 C’mon, who even thinks any of this is real…

    • LiveLM@lemmy.zip · 37 minutes ago

      Man, sometimes when I finish grabbing something I needed from Reddit, I hit the frontpage (always logged out) just out of morbid curiosity.
      Every single time that r/AmIOverreacting sub is there with the most obvious “no, you’re not” situation ever.

      I’ve never once seen that sub show up before the exodus. AI or not, I refuse to believe any frontpage posts from that sub are anything other than made-up bullshit.

    • samus12345@lemm.ee · 3 hours ago

      If it’s well-written enough to be entertaining, it doesn’t even matter whether it’s real or not. Something like it almost certainly happened to someone at some point.

    • Way too many…

      I was born before the Internet. The Internet is always lumped into the “entertainment” part of my brain. A lot of people that have grown up knowing only the Internet think the Internet is much more “real”. It’s a problem.

      • ERROR: Earth.exe has crashed@lemmy.dbzer0.com · 6 hours ago

        I’ve come up with a system to categorize reality in different ways:

        Category 1: Thoughts inside my brain, formed by logic

        Category 2: Things I can directly observe via vision, hearing, or other direct sensory input

        Category 3: Other people’s words, stories, and anecdotes in face-to-face IRL conversations

        Category 4: Accredited News Media, Television, Newspaper, Radio (Including Amateur Radio Conversations), Telegrams, etc…

        Category 5: The General Internet

        The higher the category number, the more distant the information is from me, and the more suspicious I am of it.

        I mean, like, if a user on Reddit (or any internet forum or social media, for that matter) told me X is a valid treatment for Y disease without any real evidence, I’m gonna laugh in their face (well, not their face, since it’s a forum, but you get the idea).
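
        If you wanted to make that ranking machine-checkable, it basically boils down to an ordered enum; here is a rough sketch (the names are invented, only the numbering and the “higher number = more suspicion” rule come from the list above):

            from enum import IntEnum

            class SourceCategory(IntEnum):
                # Higher number = more distant from me = more suspicion warranted.
                OWN_REASONING      = 1  # thoughts formed by my own logic
                DIRECT_OBSERVATION = 2  # things I see or hear myself
                IN_PERSON_ACCOUNT  = 3  # face-to-face stories and anecdotes
                ACCREDITED_MEDIA   = 4  # news media, TV, newspapers, radio, telegrams
                GENERAL_INTERNET   = 5  # random forum and social media posts

            def more_suspicious_than(a: SourceCategory, b: SourceCategory) -> bool:
                """True if source `a` deserves more suspicion than source `b`."""
                return a > b

            # A random forum post about a medical treatment vs. something I saw myself:
            print(more_suspicious_than(SourceCategory.GENERAL_INTERNET,
                                       SourceCategory.DIRECT_OBSERVATION))  # True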

      • DarkThoughts@fedia.io · 5 hours ago

        I genuinely miss the 90s. I mean, yeah, early forms of the internet and computers existed, but not everyone had a camera, and not everyone got absolutely bukkaked with disinformation. Not that I think everything about the tech is bad in and of itself, but how we use it nowadays is just so exhausting.

    • SerotoninSwells@lemmy.world · 5 hours ago

      Look at that, the detection heuristics all laid out nice and neatly. The only issue is that Reddit doesn’t want to detect bots because they are likely using them. Reddit at one point was using a form of bot protection but it wasn’t for posts; instead, it was for ad fraud.

    • gandalf_der_12te@discuss.tchncs.de · 52 minutes ago

      I wonder where people in the future will get their information from. What trustworthy sources of information are there? If the internet is overrun with bots, then you can’t really trust anything you read there, as it could all be propaganda. What else can you do, though, to get your news?

      • frostysauce@lemmy.world · 27 minutes ago

        That’s the killer app right there: the common person’s complete inability to distinguish between true and false. That’s what they’re going for.

    • Blastboom Strice@mander.xyz · 1 hour ago

      Yeah, the real problem solver would probably be to remove the incentive for anyone to do this in the first place.

      It would probably be far less likely for someone to do that on Lemmy, as there is no karma and you don’t get paid for upvotes or anything. (There are still incentives, like building credibility, running celebrity accounts, maybe influencing public opinion, the self-pleasure of seeing upvotes on “your” posts/comments, etc., but they aren’t as potent as direct monetary incentives.)

    • TORFdot0@lemmy.world · 5 hours ago

      It also doesn’t fix the problem at all; I can still just use AI to post to my main account.

    • Maxxie@lemmy.blahaj.zone · 4 hours ago

      You could try to cook up some kind of trust chain without totally abandoning privacy.

      Have government-certified agencies mint a master key tied to your ID. You only get one, with a trust rating tied to it.

      With that master key you can generate an infinite number of sub-IDs that don’t identify you but show your (fuzzed) trust rating.

      Have a cross-network reporting system that can lower that rating for abuses like botting.

      Idk, I’m just spitballing.
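
      To make the spitball a little more concrete, a toy version of that chain might look something like the sketch below; every name and number here is invented, and a real scheme would need something like blind signatures so the sub-IDs are actually unlinkable:

          import hmac, hashlib, random

          class MasterIdentity:
              """One per person, minted by a certifying agency (hypothetical sketch only)."""

              def __init__(self, citizen_id: str, secret: bytes):
                  self.citizen_id = citizen_id  # known only to the issuing agency
                  self._secret = secret         # never leaves the holder's device
                  self.trust = 100.0            # shared trust rating, starts at full trust

              def derive_sub_id(self, counter: int) -> str:
                  """Pseudonym shown to websites; it doesn't reveal the citizen_id."""
                  tag = hmac.new(self._secret, f"sub:{counter}".encode(), hashlib.sha256)
                  return tag.hexdigest()[:16]

              def fuzzed_trust(self) -> float:
                  """Expose the rating with noise so exact values can't link sub-IDs together."""
                  return round(self.trust + random.uniform(-5.0, 5.0), 1)

              def report_abuse(self, penalty: float) -> None:
                  """Cross-network reports (e.g. confirmed botting) lower the shared rating."""
                  self.trust = max(0.0, self.trust - penalty)

          me = MasterIdentity("citizen-42", secret=b"issued-by-agency")
          print(me.derive_sub_id(counter=1), me.fuzzed_trust())
          me.report_abuse(penalty=30.0)
          print(me.derive_sub_id(counter=2), me.fuzzed_trust())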

    • BedSharkPal@lemmy.ca · 6 hours ago

      I dunno, part of me is ok with it. It’s clear to me how bad things are going to get. So having certain platforms or spaces with some level of public identity validation seems like it might be ok…

      • gandalf_der_12te@discuss.tchncs.de · 50 minutes ago

        Especially when it’s about gathering real information. When everything you read is written by an anonymous author, you have no way of knowing whether it’s true or false, unless it’s a paper on theoretical maths, of course.

  • NutinButNet@hilariouschaos.com · 7 hours ago

    It’s stupidly easy to make up stuff on AITA and get upvotes/comments. I made one up just for fun and was surprised at how popular it got. Well, not so much anymore, but I was back when I did it.

    If you know the audience and what gets them upset, you’ve got easy karma farming.

    • Vaquedoso@lemmy.world · 1 hour ago

      Two weeks ago someone on one of those story subs, I think it was r/AmIOverreacting, was milking karma by posting updates. They made 5 posts about the whole thing and even started selling merch to profit in real life, until they took the last post down.

    • DarkThoughts@fedia.io · 5 hours ago

      It’s like reality TV & soap operas in text form. You can somewhat easily spot the AI posts though, which are plentiful now. They all tend to have the same “professional” writing style and a strong tendency to add mid-sentence “quotes” and em dashes (—), which you need a numpad combo to actually type out manually; a casual write-up would just use the plain - symbol, if at all. LLMs also make a lot of logic errors that pop up. Example from one of the currently highly upvoted posts:

      He pulled out what looked like a box from a special jewelry store. My heart raced with excitement as I assumed it was a lovely bracelet or a special memento for our wedding day. But when he opened the box, I was absolutely stunned. Inside was a key to a house he supposedly bought for us. I was taken aback because I had no idea he was even looking for real estate. My first reaction was one of shock and confusion, as I thought it was a huge decision that we should have discussed together.

      As I processed the moment, I realized the house wasn’t just any house—it was a fixer-upper on the outskirts of town. Now, I get that it can be a great investment, but this particular house needed a ton of work. I’m talking major renovations and repairs, and I honestly had no desire to live there.

      Aside from the weird writing (Oh jolly! Expensive gifts! How exciting!), this lady somehow identified the house, its location, and its state of repair just by looking at some random key in that moment. Bonus frustration if you read through the comments, which eat all of this shit up, assuming they aren’t also bots.
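
      Those tells are mechanical enough that you could tally them automatically; here is a throwaway sketch of that kind of heuristic (the thresholds are invented, and it would obviously misfire on human writers who just like em dashes, as the replies below point out):

          import re

          def slop_score(text: str) -> int:
              """Naive tally of the tells above: em dashes, short mid-sentence "quotes",
              and suspiciously uniform sentence lengths. Illustrative only, not a real detector."""
              score = text.count("\u2014")  # em dashes a casual writer rarely types by hand
              score += len(re.findall(r'\s"[^"]{1,30}"', text))  # short quoted asides
              sentences = [s for s in re.split(r"[.!?]+\s+", text) if s.strip()]
              if len(sentences) >= 3:
                  lengths = [len(s.split()) for s in sentences]
                  mean = sum(lengths) / len(lengths)
                  variance = sum((n - mean) ** 2 for n in lengths) / len(lengths)
                  if variance < 10:  # every sentence the same tidy "professional" length
                      score += 1
              return score

          sample = 'He opened the box\u2014inside was a key to a "fixer-upper" on the outskirts of town.'
          print(slop_score(sample))  # 2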

      • NoneOfUrBusiness@fedia.io · 1 hour ago

        Now that you mention it, I might be the only non-AI using em dashes on the internet (I have a program that joins two hyphens into an em dash).

        • DarkThoughts@fedia.io · 1 hour ago

          Apparently Lemmy, and Reddit (I can’t test either one), actually render it that way too. Not sure how many people know about that though.

      • dual_sport_dork 🐧🗡️@lemmy.world · 4 hours ago

        Interesting observation about the em dash. I never thought about it that hard, but Reddit’s text editor (as well as Lemmy’s, at least on the default UI) automatically concatenates a double dash into an en dash, rather than an em dash.

        I use em dashes (well, en dashes, as above) in my writing all the time, because I am a nerd.

        For anyone who cares, an en dash is the same width as an N in typical typography, and looks like this: –

        An em dash is, to no one’s surprise, the same width as an M. It looks like this: —

        (For what it’s worth, Lemmy does not concatenate a triple dash into an em dash. It turns it into a horizontal rule instead.)
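
        A rough imitation of that behavior in a few lines, just to illustrate the idea (this is not Lemmy’s or Reddit’s actual code, only the substitution rule described above):

            def render_dashes(source: str) -> str:
                """Mimic the behavior described above: '---' alone on a line becomes a
                horizontal rule, '---' mid-line becomes an em dash, '--' becomes an en dash."""
                out = []
                for line in source.splitlines():
                    if line.strip() == "---":
                        out.append("<hr>")                    # triple dash on its own line
                    else:
                        line = line.replace("---", "\u2014")  # mid-line triple dash -> em dash
                        line = line.replace("--", "\u2013")   # double dash -> en dash
                        out.append(line)
                return "\n".join(out)

            print(render_dashes("A pause--like this---or a rule below:\n---"))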

        • pixelscript@lemm.ee · 2 hours ago

          I use the poor man’s em dash (two hyphens in a row) here and there as well. I guess I never noticed Reddit auto-formats them. I have been accused of being an AI on a few occasions; I guess this is a contributing factor to why that is.

          Funny how Reddit technically formats it into the wrong glyph, though. Not like anyone but the most insufferable of pedants would notice and care, of course. I find it merely mildly amusing.

          • dual_sport_dork 🐧🗡️@lemmy.world · 2 hours ago

            That’s probably because the posts are stored as plain text, and any markdown within them is just rendered at display time. This is presumably also how you can view any post or comment’s original source. So, here you go:

            Double –

            En – (alt 0150)

            Em — (alt 0151)

            And for good measure, a triple:


            Actually, I notice if you include a triple that’s not on a line by itself it does render it as an em dash rather than en, like so: —

            • DarkThoughts@fedia.io · 1 hour ago

              You’re right; that means you don’t have to save two versions or somehow convert it back into a source format. The triple renders as a line below it on mbin; I don’t remember what those are called.

  • Thrife@feddit.org · 7 hours ago

    Is Reddit still feeding Google’s LLM, or was that just a one-time thing? Meaning, will the newest LLM-generated posts feed LLMs to generate posts?

    • whotookkarl@lemmy.world · 3 hours ago

      These days the LLMs feed the LLMs, so you end up with models trained on models unless you exclude any public data from the last decade. You have to assume all user-generated public data is tainted when used for training.

    • shittydwarf@lemmy.dbzer0.com · 7 hours ago

      The truly valuable data is the stuff that was created prior to LLMs; anything after that is tainted by slop. Verifiably human data would be worth more, which is why they are simultaneously trying to erode any and all privacy.

      • gandalf_der_12te@discuss.tchncs.de · 47 minutes ago

        I’m not sure about that. It implies that only humans are able to produce high-quality output. But that seems wrong to me.

        • First of all, not everything that humans produce is high quality; rather the opposite, much of the time.
        • Second, as AI develops, I think it will be entirely possible for it to generate good-quality output in the future.
  • sbv@sh.itjust.works · 7 hours ago

    Why not? r/AmITheAsshole is about entertainment, not truth. It would be an indictment of AI if it couldn’t replicate a short, funny story.