> Not odd at all; it is to remove an obvious bias of recognizing race.
It is odd because that means they already had to separate the dataset into various races, and we know how well that works. What specific shade of skin are they picking for their threshold? Are they measuring skull sizes to pick and choose? Isn't that back to "phrenology" and eugenics? Then, how do they define "men" and "women"? Maybe someone is neither, but now they are stuck labeled in a category they do not want to be in.
> It's almost certainly self-identification, which is the standard for such studies.
No, it isn't:
> we use VGG-Face classifier, which is wrapped in the DeepFace Python package developed by Serengil and Ozpinar (2020) algorithm, to obtain an image-based classification of a person’s race. We combine this image-based race classification with a name-based...
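For context, the image-based classification the paper describes boils down to a call like the following. This is a minimal sketch, assuming a recent version of the DeepFace package; the image path is hypothetical. Notice that the reported label is just the argmax over per-category confidence scores, so every face gets forced into exactly one race and one gender bucket regardless of how uncertain the model is:

```python
# Minimal sketch of the kind of call the paper describes, using the
# DeepFace package (Serengil and Ozpinar). The image path is hypothetical;
# recent DeepFace versions return a list of per-face result dicts.
from deepface import DeepFace

results = DeepFace.analyze(
    img_path="author_photo.jpg",   # hypothetical input image
    actions=["race", "gender"],    # attribute models built on VGG-Face
)

for face in results:
    # "race" is a dict of confidence scores per category;
    # "dominant_race" is simply the argmax of those scores.
    print(face["dominant_race"], face["race"])
    print(face["dominant_gender"], face["gender"])
```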