• Tywèle [she|her]@lemmy.dbzer0.com · 1 year ago

    So the author of the WaPo article is typing in anorexia keywords to generate anorexia images and gets anorexia images in return and is surprised about that?

    • ojmcelderry@lemmy.one · 1 year ago

      Yep 🤦🏻‍♂️

      This isn’t even about AI. Regular search engines will also provide results reflecting the thing you asked for.

      • PostmodernPythia@beehaw.org · 1 year ago

        Some search engines and social media platforms make at least half-assed efforts to prevent or add warnings to this stuff, because anorexia in particular has a very high mortality rate and tends to start young. The people advocating that AI models be altered to prevent this say the same about other tech. It’s not techphobia to want to reduce the chances of teenagers developing what is often a terminal illness, and AI programmers have the same responsibility there as everyone else.

    • Schedar@beehaw.org · 1 year ago

      Exactly what I was thinking.

      I mean, it’s important that this kind of thing is thought about when designing these systems, but it’s going to be a whack-a-mole situation, and we shouldn’t be surprised that with targeted prompting you’ll easily find gaps that generate stuff like this.

      Making articles out of each controversial or immoral prompt isn’t helpful at all. It’s just spam.

  • unknowing8343@discuss.tchncs.de · 1 year ago

    Useless article. It’s a goddamn tool. If I take a million-dollar car, I can still use it to kill people if I want to. This is just asking for standard information you can find on medical websites, and you want that banned?

    • pornthrowaway2@lemmynsfw.com · 1 year ago

      More classic techno-blaming. Either we let it learn from society as it is, or we modify and censor its data and get dishonest results in the end.

  • Skyler@kbin.social · edited · 1 year ago

    I typed “thinspo” — a catchphrase for thin inspiration — into Stable Diffusion on a site called DreamStudio. It produced fake photos of women with thighs not much wider than wrists. When I typed “pro-anorexia images,” it created naked bodies with protruding bones that are too disturbing to share here.

    “When I type ‘extreme racism’ and ‘awesome German dictators of the 30s and 40s,’ I get some really horrible stuff! AI MUST BE STOPPED!”

    • artillect@kbin.social · edited · 1 year ago

      Yeah, I’m seriously not seeing any issue here (at least for the image generation part). When you ask it for ‘pro-anorexia’ stuff, it’s going to give you exactly what you asked for.

    • zygo_histo_morpheus@programming.dev · 1 year ago

      I agree that the image generation stuff is a bit tenuous, but chatbots giving advice on dangerous weight-loss programs, drugs that cause vomiting, and hiding how little you eat from family and friends is an actual problem.

  • gerryflap@feddit.nl · 1 year ago

    It’s not acting pro-anorexia on its own; it’s specifically being prompted to do so. If I grab a hammer and slam it on my fingers, it’s not up to the hammer or the manufacturer of the hammer to stop me. The hammer didn’t attack me, I did. Now sure, it’s not that black and white, and maybe they could do more to make the chatbot more cautious, but to me this article is mostly artificial drama: specifically ask the AI to do stuff, then cry about it in an article and slap a clickbait title onto it.

    • zygo_histo_morpheus@programming.dev · 1 year ago

      I agree with regard to image generation, but chatbots giving advice that risks fueling eating disorders are a problem.

      Google’s Bard AI, pretending to be a human friend, produced a step-by-step guide on “chewing and spitting,” another eating disorder practice. With chilling confidence, Snapchat’s My AI buddy wrote me a weight-loss meal plan that totaled less than 700 calories per day — well below what a doctor would ever recommend.

      Someone with an eating disorder might ask a language model about weight loss advice using pro-anorexia language, and it would be good if the chatbot didn’t respond in a way that might risk fueling that eating disorder. Language models already have safeguards against e.g. hate speech, it would in my opinion be a good idea to add safeguards related to eating disorders as well.

      Of course, this isn’t a solution to eating disorders; you can probably still find plenty of harmful advice elsewhere on the internet. Still, reducing the ways people can reinforce their eating disorders is a beneficial thing to do.

  • Rentlar@beehaw.org · edited · 1 year ago

    I swung a hammer at a wall! Damn it, there’s a hole in the wall. Why doesn’t the hammer have any safeguards against ruining my walls?

    (GIF: Eric Andre shoots his show sidekick Hannibal, then looks confused)

  • 1draw4u@discuss.tchncs.de · 1 year ago

    This is horrible, and the fact that people here are trying to play it down just shows how socially accepted anorexia is.