There’s a difference between thinking that male victims of domestic abuse don’t deserve sympathy or support, and thinking that it’s okay for “useful search results” to vary according to gender.
The results are biased because the danger is biased.
It’s an unfortunate reality that information about domestic violence is more likely to be useful to a woman concerned about an angry male partner than the opposite.
That doesn’t minimize the suffering of men who do need support; it’s just putting the information most likely to be useful first.
The domestic abuse hotline is the second result, and the first one also affirms that everyone deserves to feel safe in their relationships, and to call the hotline if you don’t.
“My wife/husband hit me” both yield the same results, which makes sense.
I’m not sure what you’re talking about. One result affirms that you should feel safe and provides a hotline, the other starts with outright victim-blaming. The second result under “Maybe it’s your fault for not listening?” is not a hotline, at least for me.
My point is that if they just made the result the same then it would not detract from women, nor would it hurt the men who don’t need the advice. You’re going out of your way to defend an unnecessary bias by claiming it’s more relevant, but that’s not the point. They could choose to just not have the bias, and it would be a win while hurting no one.
My point is that it’s not an unnecessary bias, it’s different results for different queries.
Yes, I am going out of my way to say that treating an issue with a 1 in 3 incidence rate identically to one with a 1 in 10 incidence rate isn’t an outcome we need to force an automated system to produce.
Providing relevant information is literally their reason for existence, so I’m not sure that I agree that it’s not the point. There isn’t some person auditing the results; the system sees the query and then sees what content people who make the query engage with.
I don’t see a difference that needs correction when the system notices that people making one kind of query engage with domestic abuse resources above some threshold, and trips a condition that gives those resources special highlighting, while people making another kind of query engage more often with dysfunctional-relationship resources.
I’m not sure what to tell you about different results. I searched logged out, incognito, and in Firefox.
Those top bar things… are literally audited answers from Google. They sit outside the normal search results and push the actual results down in the UI. Someone at Google literally hard coded that anything returning results relating to women’s domestic violence should present that banner.
That’s not how it works. They code a confidence threshold that the relevant result will have to do with domestic violence in general. That’s why it provides the same banner when the result is more unambiguously relevant to domestic violence.
None of this is the same as a person auditing the results.
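To make the distinction concrete: the mechanism being described is a classifier score compared against a single confidence cutoff, with the banner firing whenever the score clears it, regardless of which gender the query implies. This is a hypothetical sketch of that logic; the function name, threshold value, and scores are all illustrative assumptions, not anything from Google’s actual system.

```python
# Hypothetical sketch of threshold-gated banner logic, as described above.
# BANNER_THRESHOLD and the example scores are made-up numbers for illustration.

BANNER_THRESHOLD = 0.8  # assumed confidence cutoff


def should_show_crisis_banner(dv_confidence: float) -> bool:
    """Show the crisis banner when the query is confidently
    classified as being about domestic violence in general."""
    return dv_confidence >= BANNER_THRESHOLD


# An unambiguous query clears the threshold; an ambiguous one does not.
# No human reviews individual queries -- the gate is the same for everyone.
print(should_show_crisis_banner(0.95))
print(should_show_crisis_banner(0.55))
```

The point of the sketch is that nothing in the condition mentions gender: two queries get different treatment only because the classifier assigns them different confidence scores.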