Produced by | AI Tech Base Camp (rgznai100)
Contributors | Zhou Xiang, reason_W, shawn
With the release of the iPhone X, deep-learning-based face recognition is expected to become standard on smartphones. Beyond identification, however, a spate of recent studies has asked whether a scan of the face can predict personality and even behavior.
At the end of 2016, Professor Wu Xiaolin and his doctoral student Zhang Xi from Shanghai Jiao Tong University published a paper, Automated Inference on Criminality Using Face Images. The study suggests that, after training, a machine can tell from a photo who is a criminal and who is a law-abiding citizen with more than 86 percent accuracy.
Can facial features really be used to predict behavior and personality? And is such research truly free of discrimination?
The study found that a deep neural network outperformed human judges at identifying sexual orientation from a photo: the human judges achieved only 61 percent accuracy for men and 54 percent for women.
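For readers wondering how such percentages are computed: figures of this kind are often reported as AUC (area under the ROC curve), where 50 percent is chance and 100 percent is perfect discrimination. Below is a minimal sketch of the computation with scikit-learn; the labels and scores are synthetic stand-ins, not data from the study.

```python
# Minimal sketch of how an AUC figure is computed with scikit-learn.
# The labels and scores below are synthetic stand-ins, not study data.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Hypothetical ground-truth labels (1 or 0) for 1,000 faces.
y_true = rng.integers(0, 2, size=1000)

# Hypothetical classifier scores, weakly correlated with the labels,
# so the AUC lands above chance (0.5) but well below perfect (1.0).
scores = y_true * 0.4 + rng.normal(0.0, 1.0, size=1000)

print(f"AUC = {roc_auc_score(y_true, scores):.2f}")  # roughly 0.6
```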
In addition, the typical facial features of gay men tended to be more feminine, while those of lesbians tended to be more masculine. In general, heterosexual men have wider jaws, shorter noses, and smaller foreheads; gay men, by contrast, have narrower jaws, longer noses, larger foreheads, and less facial hair. Lesbians generally have more masculine faces (wider jaws and smaller foreheads) than heterosexual women. What’s more, gay and straight people groom themselves differently.
“From: Max
Sent: Sep 10, 2017, 00:06

Hello,
I just finished reading your deep-learning project on detecting human sexual orientation. I think such a study should be banned; a person’s sexual orientation is his or her private matter.
You must know that in some countries homosexuality is a crime. So I think you are a homophobic bastard who supports the murder of gay people. If not, please destroy all work related to this topic; otherwise, I hope someone kills you, because your work will cause many people to suffer and even die.
Please take up the knife and have a good time!
Best wishes,
Max ****”
“Dear Max,
You say you read my project, but did you really understand it? Before you condemn me to death, would you take a moment to reread what you wrote to me: you wish another human being dead, and you judge people based on hearsay. Whether you are LGBTQ (lesbian, gay, bisexual, transgender, queer) or not, you shouldn’t do that.
I would be honored if you actually read my project and offered your thoughts and comments; I would genuinely value them. And if, after reading it carefully, you still want me to destroy my work, I might take such a request more seriously. You can find the paper here: https://osf.io/fk3xr/ You can also start with my notes: https://docs.google.com/document/d/11oGZ1Ke3wK9E3BtOFfGfUQuuaSMR8AO2WfWH3aVke6U
Warm wishes, Michal”
I. Summary of research results
“You must be wrong – this is pseudoscience!”
Like any scientific study, ours may have its imperfections. Below we list some common concerns and address them:
- First, our model is trained on fixed features of the face that are hard to change, such as the shape of facial elements. Moreover, the deep neural network we use was trained for an entirely different task: recognizing whether two images show the same person. This reduced the risk that the classifier was exploiting superficial differences between the images of gay and straight faces in our sample, differences unrelated to the face itself. (A sketch of this two-stage setup appears after Figure 3.)
- Second, we validated the results on an external sample.
- Third, we examined which elements of the face image drive the prediction, to ensure that those elements were indeed facial features (and not other factors). As described in the paper, even when all pixel information was removed, the classifier could still make fairly accurate predictions from the contours of the face alone. (See the landmarks-only sketch after Figure 3.)
- Fourth, we had the classifier detect the face region and removed the background from each image. We also checked that the classifier focuses on facial features rather than the background when making predictions. The heat map in Figure 3 clearly shows that the classifier responds to parts of the face (red) rather than the background (blue). (The occlusion sketch after the figure illustrates the procedure.)
Figure 3: A heat map showing how much occluding different parts of the image changes the classification result. The color scale runs from blue (little change) to red (substantial change); the color-coded blocks are smoothed with a 2D Gaussian filter.
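As a rough illustration of how a map like Figure 3 can be produced, the sketch below slides an occluding patch across the image, records how much the classifier's output moves, and smooths the resulting grid with a 2D Gaussian filter. The `predict` function is a dummy stand-in for the real model, and the patch size and stride are arbitrary choices, not values from the paper.

```python
# Sketch of an occlusion map like Figure 3: slide a gray patch across
# the image, record how much the classifier's output changes, and
# smooth the resulting grid with a 2D Gaussian filter.
import numpy as np
from scipy.ndimage import gaussian_filter

def predict(img: np.ndarray) -> float:
    # Dummy stand-in for the real model: any image -> one probability.
    return float(img.mean())

def occlusion_map(img: np.ndarray, patch: int = 16, stride: int = 8) -> np.ndarray:
    base = predict(img)
    h, w = img.shape[:2]
    rows = (h - patch) // stride + 1
    cols = (w - patch) // stride + 1
    heat = np.zeros((rows, cols))
    for i, y in enumerate(range(0, h - patch + 1, stride)):
        for j, x in enumerate(range(0, w - patch + 1, stride)):
            masked = img.copy()
            masked[y:y + patch, x:x + patch] = 0.5   # gray out one block
            heat[i, j] = abs(predict(masked) - base)  # impact of that block
    return gaussian_filter(heat, sigma=1.0)  # the 2D Gaussian smoothing

heat = occlusion_map(np.random.default_rng(0).random((128, 128, 3)))
print(heat.shape)  # red in Figure 3 corresponds to high values here
```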
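The first bullet describes a two-stage design: a network pretrained for face recognition supplies fixed embeddings, and only a simple classifier is ever trained on the labels. The sketch below uses facenet-pytorch's InceptionResnetV1 as a readily available stand-in for the face-recognition network the paper describes; the face crops and labels are random placeholders, not real data.

```python
# Sketch of the two-stage design: a face-recognition network supplies
# fixed identity embeddings, and only a linear classifier ever sees the
# labels. InceptionResnetV1 from facenet-pytorch is a stand-in here,
# not the exact network from the paper.
import numpy as np
import torch
from facenet_pytorch import InceptionResnetV1
from sklearn.linear_model import LogisticRegression

# Pretrained on face *recognition* (telling people apart), an entirely
# different task from the downstream classification.
encoder = InceptionResnetV1(pretrained='vggface2').eval()

def embed(faces: torch.Tensor) -> np.ndarray:
    """faces: (N, 3, 160, 160) aligned, normalized face crops."""
    with torch.no_grad():
        return encoder(faces).numpy()  # (N, 512) identity embeddings

# Placeholder inputs: random "face crops" and balanced binary labels,
# just to make the sketch executable end to end.
train_faces = torch.randn(32, 3, 160, 160)
train_labels = np.tile([0, 1], 16)

# Only this linear layer is trained on the labels, so the classifier
# can use nothing beyond what the identity embedding already encodes.
clf = LogisticRegression(max_iter=1000).fit(embed(train_faces), train_labels)
```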
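The third bullet's contours-only ablation might look like the sketch below: the pixels are discarded and a linear classifier is trained on face geometry alone, here encoded as scale-normalized pairwise distances between landmarks. The landmark arrays and labels are random stand-ins; real ones would come from a landmark detector such as dlib's 68-point shape predictor.

```python
# Sketch of the contours-only ablation: pixels are discarded and the
# classifier sees only face geometry. Landmarks and labels below are
# random stand-ins, not data from the study.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
landmarks = rng.normal(size=(200, 68, 2))   # 200 faces, 68 (x, y) points
labels = rng.integers(0, 2, size=200)       # stand-in binary labels

def geometry_features(pts: np.ndarray) -> np.ndarray:
    """Scale-normalized pairwise distances between landmark points."""
    d = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)
    iu = np.triu_indices(len(pts), k=1)
    return d[iu] / d[iu].mean()  # divide out overall face size

X = np.stack([geometry_features(p) for p in landmarks])
clf = LogisticRegression(max_iter=1000).fit(X, labels)
```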
“Your results say that gay people tend to be gender-atypical – but I know a lot of gender-typical gay men and lesbians!”
Original address:
https://docs.google.com/document/d/11oGZ1Ke3wK9E3BtOFfGfUQuuaSMR8AO2WfWH3aVke6U/edit#