Twitter's image-cropping algorithm tends to prioritize young, slender, fair-skinned faces, explained a researcher at the École Polytechnique Fédérale de Lausanne (EPFL) in Switzerland.
Bogdan Kulynych won first place in a competition organized by Twitter itself to uncover biases and harms caused by its automated systems. As a reward, he received US$ 3,500.
His research showed that the social network's system prefers faces that are "thin, young, with light or warm skin tones and stereotypically feminine facial features."
To reach this conclusion, Kulynych used an artificial intelligence program called StyleGAN2 to generate random images of realistic-looking faces. He then modified the skin color and other attributes that make each generated face appear more masculine or feminine.
Some of the AI-generated images tested by the researcher – Photo: Bogdan Kulynych via GitHub
The images were run through Twitter's algorithm, which assigns a "saliency" score to identify the most prominent element of a photo.
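The core of this test can be sketched in a few lines of Python. The snippet below only illustrates the comparison logic: `generate_face`, `edit_attribute` and `max_saliency` are hypothetical stand-ins for a StyleGAN2 generator, a latent-space attribute edit and Twitter's saliency-based cropping model, and this is not the researcher's actual code.

```python
import numpy as np

rng = np.random.default_rng(0)

def generate_face(seed: int) -> np.ndarray:
    """Stand-in for a StyleGAN2 sample: a random 64x64 grayscale 'face'."""
    return np.random.default_rng(seed).random((64, 64))

def edit_attribute(face: np.ndarray, strength: float) -> np.ndarray:
    """Stand-in for a latent-space edit: brighten the image slightly,
    mimicking e.g. a lighter-skin variation of the same face."""
    return np.clip(face + 0.1 * strength, 0.0, 1.0)

def max_saliency(face: np.ndarray) -> float:
    """Stand-in for the cropping model's top saliency score
    (here simply the mean brightness)."""
    return float(face.mean())

def bias_rate(n_faces: int = 100, strength: float = 1.0) -> float:
    """Fraction of face pairs in which the edited variant out-scores the
    original. A rate well above 0.5 would suggest the scorer systematically
    prefers faces with more of the edited attribute."""
    wins = 0
    for seed in rng.integers(0, 10**6, size=n_faces):
        original = generate_face(int(seed))
        edited = edit_attribute(original, strength)
        if max_saliency(edited) > max_saliency(original):
            wins += 1
    return wins / n_faces

if __name__ == "__main__":
    print(f"Edited faces preferred in {bias_rate():.0%} of pairs")
```

In the actual study, the edits targeted attributes such as skin tone and stereotypically feminine or masculine features, and the score came from Twitter's cropping algorithm itself.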
“This bias may lead to the exclusion of minorities and the perpetuation of stereotypical standards of beauty in thousands of images,” the researcher said.
The social network had already acknowledged a problem with the tool, which automatically cropped photos to fit the app's feed, and said it had stopped using it.
Criticism began after users ran tests, posting photos with a Black person at one end and a white person at the other, then reversing the order in the next photo.
Before the full image was opened, Twitter's algorithm most often showed the white person in the preview.
Three other research projects received awards in the Twitter competition:
- Second place showed that the system tended to ignore people with white or gray hair, indicating an age bias;
- Third place found that the tool preferred images containing English text in the Latin alphabet over text in the Arabic alphabet;
- The Most Innovative Project award showed that the algorithm also favored emojis with lighter skin tones over those representing dark-skinned people.
On his Twitter profile, the award winner said that "algorithmic harms are not just 'bugs' or unintended mistakes." For him, the problem lies in the design itself.
"It starts with maximizing engagement and, more generally, profit, which shifts the costs onto others," Kulynych wrote.
Patrick Hall, an AI researcher and one of the competition's judges, said that biases exist in all AI systems and that companies need to work actively to find them.
This kind of problem doesn’t just happen on social networks. The government’s use of algorithms and artificial intelligence, especially in the field of security, is raising concerns among professionals.
Many of them point out that facial recognition systems, for example, are less accurate for Black and Asian people.