Employees say they weren’t adequately warned about the brutality of some of the text and images they would be tasked with reviewing, and that they were offered inadequate psychological support, or none at all. Workers were paid between $1.46 and $3.74 an hour, according to a Sama spokesperson.

  • AutoTL;DR@lemmings.world [bot] · 18 points · 1 year ago

    This is the best summary I could come up with:


    The 51 moderators in Nairobi working on Sama’s OpenAI account were tasked with reviewing texts, and some images, many depicting graphic scenes of violence, self-harm, murder, rape, necrophilia, child abuse, bestiality and incest, the petitioners say.

    “We are in agreement with those who call for fair and just employment, as it aligns with our mission – that providing meaningful, dignified, living wage work is the best way to permanently lift people out of poverty – and believe that we would already be compliant with any legislation or requirements that may be enacted in this space,” the Sama spokesperson said.

    Sample passages read by the Guardian, with text that appeared to have been lifted from chat forums, included descriptions of suicide attempts, mass-shooting fantasies and racial slurs.

    The announcement coincided with an investigation by Time, detailing how nearly 200 young Africans in Sama’s Nairobi datacenter had been confronted with videos of murders, rapes, suicides and child sexual abuse as part of their work, earning as little as $1.50 an hour while doing so.

    She wants to see an investigation into the pay, mental health support and working conditions of all content moderation and data labeling offices in Kenya, plus greater protections for what she considers to be an “essential workforce”.


    I’m a bot and I’m open source!

    • prd@beehaw.org · 7 points · 1 year ago

      A bot giving a summary of an article about people doing the work to train AI bots is some real snake-eating-its-own-tail shit.

  • The Cuuuuube@beehaw.org · 58 points · 1 year ago

    Cool. Using slave labor to train tools to strip the best parts of humanity away from us so that AI can do creative activities like poetry and art while we’re more and more stuck in a gig economy.

    Cool cool cool cool.

    • Halosheep@lemm.ee · 1 point · 1 year ago

      The average cost of living is somewhere around $650 for one person; those wages are a bit low, but the higher end is above average for the country.

      (I don’t know how accurate this data is, but based on https://livingcost.org/cost/kenya/United-States)

      I’m not saying they shouldn’t be paid better or more, but those wages aren’t as outlandish as they sound for the country.
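
      A rough back-of-the-envelope check of how those figures compare, assuming a conventional 40-hour week (the hours assumption is mine, not from the article):

      ```python
      # Compare the reported hourly wages against the ~$650/month
      # cost-of-living figure cited above, assuming a 40-hour week.
      LOW_WAGE, HIGH_WAGE = 1.46, 3.74   # USD/hour, per the Sama spokesperson
      HOURS_PER_MONTH = 40 * 52 / 12     # ~173 hours on a 40-hour week (assumption)
      COST_OF_LIVING = 650               # USD/month, per the livingcost.org estimate

      for wage in (LOW_WAGE, HIGH_WAGE):
          monthly = wage * HOURS_PER_MONTH
          print(f"${wage:.2f}/h -> ${monthly:,.0f}/month "
                f"({monthly / COST_OF_LIVING:.0%} of the cost-of-living figure)")
      ```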

      • Hot Saucerman@lemmy.ml · 4 points · 1 year ago

        And here I must be crazy for thinking that if it’s a US company paying them, maybe they deserve the equivalent of US employees, no matter what the fucking local pay is.

        That “local pay” bullshit is just an excuse to exploit. Pay them what you would have to pay a US citizen for the same job or fuck right off. They don’t deserve less because of geographic location.

      • argv_minus_one@beehaw.org · 2 points · 1 year ago

        Only rich people can afford new clothes every month, and if you think computer makers are passing on the savings of their exploitative practices to their customers, I’ve got a bridge to sell you.

      • The Cuuuuube@beehaw.org · 23 points · 1 year ago

        Starvation wages are slavery. And yes, our technocolonial society engages in it at many levels. No, we should not be okay with it. We should do what we can to disengage from businesses that engage in it, and we should be forgiving of ourselves when we can’t. We should always be advocating for the workers, even if sometimes that means the people we’re advocating for are us.

      • TwilightVulpine@kbin.social · 9 points · 1 year ago

        They can leave the job but they will still carry the psychological scars from it. They were not receiving adequate mental health support for the severity of the content they had to deal with.

        • fred-kowalski@kbin.sh · 0 points · 1 year ago

          Your comment inspired this thought: the older I get, the less faith I have in psychological support making us whole. I still think it should be part of work like this, but the damage can be as permanent as losing a limb. What is that worth in money? (Hypothetical.)

          • TwilightVulpine@kbin.social · 1 point · 1 year ago

            That’s definitely something to consider. Psychological support helps people with coping but it doesn’t remove the trauma. Anyone willing to do this sort of work deserves to be very well compensated.

            But all that said, it isn’t even unique to AI that there is a need for people to sift through the worst stuff imaginable to prevent everyone else from being exposed. All user-generated internet content has that problem.

      • 0x815@feddit.de [OP] · 9 points · 1 year ago

        Whether or not we call it slavery, it is certainly a gross exploitation of labour. The article reminds us that we in the so-called ‘western world’ can only afford our luxuries because there’s someone elsewhere who pays the price.

        • query@beehaw.org · 3 points · 1 year ago

          People elsewhere pay the price, but it’s not in any way necessary for Western quality of life, because all it “affords” us is massive wealth inequality in our own countries, which leads to other problems like rising housing costs and severely skewed influence in politics.

          Cut down on stupidly high profits for a very small group of people, by not stealing labor, and huge sections of people in other countries can have decent pay and work conditions.

      • HarkMahlberg@kbin.social · 6 points · 1 year ago

        Cheap labour isn’t the same as slavery… it’s still just a job employees can leave…

        You’re gonna get a lot of flak for that take, even though I agree with most of the rest of your post. Let’s reframe the problem: you’re currently paid a low wage. It’s barely enough to pay for food, rent, and getting to and from your job.

        You want to leave that job for one you know pays better, but it’s farther away. Even if you get the job and have the better income, you would be spending the net gains on the extra costs of commuting: it ends up being a wash.

        You would move closer, but because your current wage doesn’t allow you to save money, you can’t afford the costs of moving, let alone a down payment on a house or a deposit on an apartment.

        You would get better educated so you qualify for better paying jobs, but again, you have no savings from your current job to pay for schooling, and you have no/bad credit to afford a student loan.

        All the problems arrayed against you require money to solve, and because you’re “cheap labor” you’re never able to gather enough money to solve them. You’re forcibly stuck with your current job. They pay you, yes, but you can’t leave. You’re “free” to leave, but that just means you’re free to lose your home and starve. None of these problems are unique to Kenya; I could be describing any country with poor economic mobility, any job or industry. Globalization was important for many reasons, but it has allowed companies to identify the parts of the world where labor is cheapest and pay them… exactly what they’re “worth.”

    • ReCursing@kbin.social · 4 points · 1 year ago

      tools to strip the best parts of humanity away from us so that AI can do creative activities like poetry and art

      Yeah that’s a bullshit take on AI

      • davehtaylor@beehaw.org · 7 points · 1 year ago

        But that’s exactly what’s happening. Bloodsucking capitalists have decided that AI is a cheaper option than paying people a living wage, so creatives are losing their jobs.

        Instead of actually learning how to create art, shitbag grifters claim they’re “taking the power back from creatives” while doing nothing but stealing from actual creatives to make some sort of soulless synthesis, leaving actual creatives high and dry. For just one example, look at how many publishing outlets have stopped taking submissions because of the overwhelming flood of AI spam.

        All the while people are out here trying to make ends meet and are being forced into shitty, low paid jobs or gig work

        • ReCursing@kbin.social · 0 points · 1 year ago

          Your complaint here is entirely with capitalism and has nothing to do with AI. It’s a legitimate concern, but it’s a far larger problem than new ways of making pictures. Aim your opprobrium at the correct targets.

        • Ringmasterincestuous@aussie.zone · 1 point · 1 year ago

          I think they were referring to the sole purpose of AI development being to create yet another subscription model that serves both ads and bullshit, locked to the infrastructure of Google’s or Meta’s choice.

    • QHC@kbin.social · 10 points · 1 year ago

      so that AI can do creative activities

      Let me stop you right there. The current concept of “AI” (otherwise known as large language models, because that is really what people are referring to) is not capable of creativity. ChatGPT and things like it just regurgitate stuff they find. They can’t create something new and original.

      • Thevenin@beehaw.org · 3 points · 1 year ago

        It is true that LLMs and DPMs do not create; they interpolate. That’s why training data, and curation of that data, is so critical to begin with. Nevertheless, it is correct to say they are being used for “creative activities” as cheap and (in my opinion) unsustainable substitutes for human minds.

      • lloram239@feddit.de · 7 points · 1 year ago

        ChatGPT and things like it just regurgitate stuff they find.

        It utterly baffles me that people keep repeating this nonsense. Have you ever even bothered to try ChatGPT? At all? If you managed to make ChatGPT actually repeat existing text, congratulations, tell me how, since I have never been able to do that in months of using it. ChatGPT has no access to, and no way to reproduce, the texts it was trained on; the only things it can successfully reproduce are short quotes or popular phrases (“May the force be with you”) that are repeated all through pop culture. Everything beyond that it will give you as a vague retelling at best. Or simply put: it literally can’t “find” text, since there is nothing for it to search.

        Same for creativity: you can complain that the stories it writes aren’t the most interesting ones, or that they still suffer from a lack of coherence when they get too long, but you can’t complain that it doesn’t get creative. You can throw literally any topic, item, person or whatever at it and it can weave it into a story. You can make it rhyme while doing so, or turn it into haiku, or have it talk like a pirate. And you can do so incrementally: ask it to change characters and locations in the story and it will rewrite it. And when you are out of ideas, you can ask it to come up with some.

        AI discussion is starting to feel like talking to moon landing deniers, just repeating the same nonsense that has already been debunked a million times.

        • Kwakigra@beehaw.org · 3 points · 1 year ago

          What a human mind does in transforming its nature and experiences through artistic expression is very different from what the machine does in referencing values and expressing them in human language without any kind of understanding. You are right that LLMs don’t literally copy word for word what they find, and they certainly are sophisticated pieces of technology, but what they express is processed language or images rather than an act of artistic creation. Less culinary experience and more industrial sausage. They do not have intelligence and are incapable of producing art of any kind. This isn’t to say they aren’t a threat to commodified art in the marketplace, because they very much are, but in terms of enrichment or even entertainment the machine is not capable of producing anything worthwhile unless the viewer is looking for something they don’t have to look at for more than a moment or read with any serious interest in the contents. I’m interested in people using LLMs as a tool in their own artistic pursuits, but they have their own limitations, as any tool does.

          • Scrithwire@lemmy.one · 2 points · 1 year ago

            Give the AI a body with sense inputs, and allow those sense inputs to transform the “decider” value. That’s a step in the direction of true creativity

            • Kwakigra@beehaw.org · 2 points · 1 year ago

              A step closer to approximating the intelligence of a worm, perhaps. I once looked into where the line falls for which animalia are capable of operant conditioning, which I hypothesize may be the first purpose of a brain, and on our present taxonomic hierarchy that line falls among worms (jellyfish do not have sufficient faculties for operant conditioning and are on the other side of the line). Sensory input being associated with decider values is still not as sophisticated as learning to be attracted to beneficial things and to avoid dangerous things, because the machine does not have needs or desires on which to base its reactions; those would have to be trained into it by those with intelligence. I’m not saying it’s impossible to artificially create a being like this, but in my estimation we are very far from it, considering that we barely grasp how any brain works beyond being aware of their extreme complexity. Considering the degree of difference between a worm and a sentient human, we are much further still from what we would consider a human level of intelligence.

              Edit: Re-reading this it seems much more snippy than I intended and I’m not sure how to frame it to sound more neutral. I meant this as a neutral continuation of a discussion of an idea.

        • QHC@kbin.social · 1 point · 1 year ago

          Yes, I use the tools every day and understand how they work. Failing to fully explain the mechanics of LLMs does not materially change the meaning of my original statement.

          There’s a reason that fan fiction is not regarded as true creative art that should be respected and discussed like other mediums: it’s not trying to do something new and original. The whole point is to re-combine and shuffle things around to sound and feel and look like the original work, just more of it, but not different in any way real enough to matter.

          • lloram239@feddit.de · 2 points · 1 year ago

            There’s a reason that fan fiction is not regarded as true creative art

            No true Scotsman

            it’s not trying to do something new and original

            Neither are humans. Everything is a remix.

            The only real advantage that humans have in this is that there are 7 billion of us and there is only one of ChatGPT. Everything ChatGPT produces ends up sounding very similar, since all of it is based on the same training data. With humans you get a lot more variety as each of them had their own unique slice of training data.

      • TheBurlapBandit@beehaw.org · 1 point · 1 year ago

        AI is about as creative as Adobe Photoshop is, or a pencil for that matter. A human operating it (no, not txt2img prompting) is where the creativity comes from.

      • tombuben@beehaw.org · 6 points · 1 year ago

        It doesn’t really matter though. It will take away jobs from people in creative industries that only creative people were able to do before. The end result is basically the same.

        • QHC@kbin.social · 1 point · 1 year ago

          Why would AI that can’t be creative take jobs from people that are capable of being creative?

            • lloram239@feddit.de · 3 points · 1 year ago

              For the same reason that CGI has replaced practical effects: It gives more control to the producers and makes it faster to iterate and change stuff. It doesn’t even need to be cheaper or look better for that.

          • phi1997@kbin.social · 6 points · 1 year ago

            Because it’s not the AI that’s taking away jobs, but the executives hoping to cut costs regardless of creativity, quality, or ethics.

            • QHC@kbin.social · 4 points · 1 year ago

              So then blame the real problem, which is not new and has always been the main enemy: capitalism and its demand for seeking profits despite any consequences.

              That has nothing to do with “AI” and still doesn’t have anything to do with the original claim of whether or not the new wave of LLMs are capable of creativity.

              • The Cuuuuube@beehaw.org · 3 points · 1 year ago

                I am. That’s the thing that I’m blaming. The claim I was making was that OpenAI has engaged in violent colonialism inherent to capitalism with the goal of making Elon rich, and the rest of us poor.

      • sunflower_scribe@beehaw.org · 9 points · 1 year ago

        It creates things. Whether it is truly “creative” in the sense that humans are “creative”, doesn’t really matter. Now, you might respond by saying that it only regurgitates, but I would argue that many if not all human creative outputs are, at least to some degree, “regurgitations” in the same sense. I am not disregarding art, just saying that art is always derivative to some degree.

  • AJ Young@beehaw.org · 21 points · 1 year ago

    To be honest, this isn’t an AI problem, but a content moderation and labor force ethics problem. You can swap out AI with social media and you’ll find the same amount of psychological harm to moderators.

      • AJ Young@beehaw.org · 1 point · 1 year ago

        Good question! It’s probably the most psychologically damaging job out there.

        Ironically, maybe with AI? But that would mean we would have to train a model to recognize the worst images and videos possible…which means gathering all of that data…which means whoever trains that model will be scarred…actually, maybe not a good idea…
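
        A minimal sketch of the catch being described, with hypothetical names (nothing here comes from the article): even an “automated” moderation model has to be trained on labeled examples, and every label implies that a human reviewer looked at the content first.

        ```python
        # Hypothetical sketch: building a training set for a harmful-content
        # classifier still routes every item through a human labeler.
        from dataclasses import dataclass
        from typing import Callable, Iterable, Optional

        @dataclass
        class Item:
            content_id: str
            label: Optional[str] = None  # e.g. "violence", "self-harm", "safe"

        def build_training_set(items: Iterable[Item],
                               human_review: Callable[[str], str]) -> list:
            """Label items for a moderation model.

            `human_review` stands in for the person who must actually view
            each piece of content in order to label it.
            """
            labeled = []
            for item in items:
                item.label = human_review(item.content_id)  # a human sees it here
                labeled.append(item)
            return labeled
        ```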