
> The authors give an example: “Among white male job candidates, is it ethical to screen out individuals whose faces predict less desirable personalities?”

Wonder why they mention "white male job candidates" specifically? Seems a bit odd.

The paper: https://insights.som.yale.edu/sites/default/files/2025-01/AI...

Ah yes, Yale going back to its eugenics roots (https://www.antieugenicscollective.org). I am somehow not surprised.

> Yale faculty, alumni and administrators helped found the American Eugenics Society in the 1920s and brought its headquarters to the New Haven Green in 1926.

> Wonder why they mention "white male job candidates" specifically? Seems a bit odd.

Not odd at all; it is to remove an obvious bias of recognizing race.

I am supportive of the effort, but this seems to snipe at a restriction that is (to me) intended to remove a point where bias would clearly enter.


> Not odd at all; it is to remove an obvious bias of recognizing race.

It is odd because that means they already had to separate the dataset into various races, and we know how well that works. What specific shade of skin are they picking for their threshold? Are they measuring skull sizes to pick and choose? Isn't that back to "phrenology" and eugenics? Then, how do they define "men" and "women"? Maybe someone is neither, but now they are stuck labeled in a category they do not want to be in.


It's almost certainly self-identification, which is the standard for such studies.

> It's almost certainly self-identification, which is the standard for such studies.

No, it isn't:

> we use VGG-Face classifier, which is wrapped in the DeepFace Python package developed by Serengil and Ozpinar (2020) algorithm, to obtain an image-based classification of a person’s race. We combine this image-based race classification with a name-based...

Even worse, they use names to infer race.
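For anyone curious what the image-based side of that looks like in practice, the deepface package does expose this kind of attribute analysis directly. A minimal sketch, assuming a local placeholder image ("candidate.jpg" is made up here) and leaving out the paper's name-based step entirely:

    # Rough sketch of an image-based race classification with the
    # deepface package (Serengil & Ozpinar, 2020). "candidate.jpg" is a
    # placeholder filename, not anything from the paper's data.
    from deepface import DeepFace

    # analyze() runs DeepFace's pretrained attribute models; asking only
    # for "race" returns per-category probabilities plus a dominant label.
    results = DeepFace.analyze(img_path="candidate.jpg", actions=["race"])

    # Recent deepface versions return a list of dicts, one per detected face.
    for face in results:
        print(face["dominant_race"], face["race"])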
