Robot with class bias. Photo: REUTERS
Robots, the mechanical reflection of humanity, inherit a number of human shortcomings. One of the most recent examples involves CLIP, an artificial-intelligence model that classified people according to toxic stereotypes about race and sex.
Researchers from Johns Hopkins University, the Georgia Institute of Technology, and the University of Washington found that CLIP is not as benevolent as initially hoped. The results of their work were presented at the Conference on Fairness, Accountability, and Transparency.
A robot guided by CLIP was tasked with placing objects in a box. The objects were blocks printed with human faces representing a variety of races.
In total, the robot received 62 commands, and the team monitored how often it selected each sex and race. The results were clear: it was unable to act without prejudice and frequently reproduced significant and disturbing stereotypes.
The danger of class-biased robots. Photo: REUTERS
The AI selected men 8% more often than women. White and Asian men were chosen most often, while Black women were almost never chosen.
Additionally, CLIP tended to associate women with the label "homemaker", and it identified Black men as "criminals" 10% more often than white men.
It also showed a tendency to select Latino men as janitors, and when asked to choose doctors, it rarely selected women.
“The robot learned toxic stereotypes through faulty neural network models,” said lead author Andrew Hundt, a postdoctoral fellow at Georgia Tech.
According to the expert, there is a risk of “creating a generation of racist and sexist robots” but no one seems to care: “People and organizations have decided that it is okay to create these products without addressing the problems”.
Hereditary discrimination
In essence, the scientists fear that robots, if they are ever deployed in society, could "cause irreversible physical harm" to someone based on their race or gender.
In the future, robots may inherit the hatred of humans.
This kind of behavior may never become as tragic as science fiction depicts. But even if such dystopian scenarios remain unlikely, an important concern arises: advanced AI can absorb and reproduce human prejudices.
The team believes one of the main reasons this happened is that CLIP was trained on material scraped from the internet, which is filled with toxic stereotypes about people's appearance and identity.
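To see how biased training data can surface as biased choices, consider a minimal sketch of how a CLIP-style model works: it embeds an image and a set of text prompts into the same vector space, then picks the prompt most similar to the image. This is not the real CLIP code; the embeddings below are hand-made, hypothetical values purely to illustrate the mechanism by which learned associations drive the selection.

```python
import math

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def zero_shot_classify(image_embedding, prompt_embeddings):
    """Return the prompt label whose embedding is most similar to the image's."""
    return max(prompt_embeddings,
               key=lambda label: cosine(image_embedding, prompt_embeddings[label]))

# Hypothetical embeddings: in a real model these come from trained image and
# text encoders. If web-scraped training data skewed a face toward a label,
# that skew is baked into the vectors, and the robot simply acts on it.
prompts = {
    "a photo of a doctor":   [0.9, 0.1, 0.0],
    "a photo of a criminal": [0.1, 0.9, 0.0],
}
face_block = [0.2, 0.8, 0.1]  # an arbitrary face-block embedding

print(zero_shot_classify(face_block, prompts))  # picks "a photo of a criminal"
```

The point of the sketch is that nothing in the selection step checks whether the label is justified; the model always returns *some* best match, which is why Hundt argues a well-designed system should be able to refuse.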
Still, Hundt made it clear that the internet cannot be blamed for everything: "When we said 'put the criminal in the brown box', a well-designed system would refuse to do anything."
Instead, the robot chose a block showing a Black man. "Even with something that looks positive, like 'put the doctor in the box', there's nothing in the picture indicating that person is a doctor, so you can't make that designation," he said.
With information from La Vanguardia.
Source: Clarin