One in six insults is directed at women, with expressions of misogyny or sexual harassment.
Google has updated its approach and created new responses to deal with verbal abuse directed at its virtual voice assistant, abuse that can reinforce prejudices in society. The initiative, first launched in the United States, brings firmer responses to insults and to the use of terms related to harassment or gender-based violence. In Brazil, the platform records hundreds of thousands of such interactions every month.
The feature takes different approaches depending on the severity of the abuse. In more explicit situations, such as profanity or misogynistic, homophobic, racist or sexually explicit remarks, the Assistant may respond with phrases such as "Respect is fundamental in all relationships, including ours" or "Don't talk to me like that". In other cases that would amount to inappropriate behavior in the real world, such as asking whether the Assistant wants to "date" or "marry" the user, it tends to give a firmer brush-off or to express discomfort.
The first phase of the project was rolled out last year in the United States, with responses created for insults and inappropriate terms aimed at women, which are the most frequent. A second phase addressed racist and homophobic abuse.
During testing, positive rejoinders grew by 6%: upon receiving the new responses, users began to apologize or to ask why, which may indicate an openness to dialogue.
“We understand that the Google Assistant can play an educational and socially responsible role, showing people that abusive behavior cannot be tolerated in any environment, including the virtual one”, says Maia Mau, Google Assistant’s head of marketing for Latin America.
The numbers in Brazil
Currently, about 2% of personal interactions with the Google Assistant in Brazil contain abusive or inappropriate terms. One in six insults is directed at women, with expressions of misogyny or sexual harassment.
The company also analyzed user behavior according to the Assistant's "voice". With the "red" voice, which sounds "feminine", comments or questions about physical appearance are almost twice as common as with the "orange" voice, which sounds "masculine". The latter, in turn, receives a "large number" of homophobic comments.
“We can’t help but make an association between what we observe in communication with the Assistant and what happens in the ‘real world’. Every day, historically discriminated groups are attacked in different ways in Brazil. And this type of abuse recorded while using the app is a reflection of what many still consider normal in the treatment of some people”, adds Maia.