bot@lemmy.smeargle.fans to Hacker News@lemmy.smeargle.fans · 6 months ago
ChatGPT is biased against resumes with credentials that imply a disability (www.washington.edu)
Lvxferre@mander.xyz · 6 months ago

> studies how generative AI can replicate and amplify real-world biases

Emphasis mine. That’s a damn important factor, because the deep “learning” models are prone to make human biases worse. I’m not sure, but I think this is caused by two things:

- It’ll spam the typical value unless explicitly asked contrariwise, even if the typical value isn’t that common (see the sketch below).
- It might treat co-dependent variables as if they were orthogonal for the sake of weighting the output.
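A minimal sketch of the first point, not from the comment itself: a classifier trained on skewed data with only weakly informative features tends to predict the majority ("typical") value even more often than it actually occurs, so the existing skew gets amplified rather than merely reproduced. The data and model here are made up purely for illustration.

```python
# Sketch: bias amplification via "spamming the typical value".
# Assumes nothing about ChatGPT; it just shows the general effect
# on a toy logistic-regression classifier with a weak, noisy signal.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
y = (rng.random(n) < 0.30).astype(int)                # minority class is ~30% of the data
X = y[:, None] + rng.normal(scale=3.0, size=(n, 1))   # feature only weakly reflects the label

clf = LogisticRegression().fit(X, y)
pred = clf.predict(X)

print("true minority rate:     ", y.mean())   # ~0.30
print("predicted minority rate:", pred.mean())  # far lower: the model defaults to the majority
```

Because the feature is noisy, the predicted probability of the minority class rarely crosses 0.5, so the model outputs the majority value almost everywhere; the 70/30 split in the data becomes something much closer to 100/0 in the output.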