Google Is Making Search Safer... and Less Awkward
Ever typed something innocuous into Google Search and found something horrifying? Google wants to change that. In a move towards more responsible search, the company has told Reuters that it has cut explicit results for certain search terms. It is also improving its AI for sensitive searches to help people in crisis find a way out.
In an interview with Reuters on 30 March 2022, Tulsee Doshi, Head of Product for Google’s Responsible AI Team, said that the company has reduced explicit search results by as much as 30% for terms covering ethnicity, sexual preference and gender. The move aims to prevent incidents such as the one involving U.S. actress Natalie Morales, who tweeted in 2019 that she had typed the search term “Latina teenager” into Google for a research project and found that the top-ranked results were all pornography.
According to Doshi, the company is rolling out AI software known as BERT (Bidirectional Encoder Representations from Transformers) to tackle the problem of explicit results when people search for historically sexualised terms. The software is meant to tell the difference between a user who is looking for racy material and one who is not. Doshi admitted that explicit results for such terms could be shocking to new users, and that this has been an area of concern for a long time. The Reuters report notes that over the years, Google has had to respond to complaints about sexually explicit results for search terms such as “hot” or “CEO”.
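Google has not published how this intent detection works in production. As a rough sketch of the idea, a BERT-style text classifier can score a query’s intent, and a ranker can demote explicit pages when that intent looks benign. Everything below is an assumption for illustration: the model name example-org/query-intent-bert, its labels and the keyword-based explicitness check are all hypothetical.

```python
# Illustrative sketch only: Google's production system is not public.
# Assumes a hypothetical BERT model fine-tuned to label search queries
# as "explicit-intent" or "benign-intent".
from transformers import pipeline

# Placeholder model name; no such model is actually published.
classifier = pipeline("text-classification", model="example-org/query-intent-bert")

# Crude stand-in for a real adult-content detector on the result side.
ADULT_MARKERS = ("porn", "xxx", "nsfw")

def looks_explicit(result_title: str) -> bool:
    return any(marker in result_title.lower() for marker in ADULT_MARKERS)

def rank_results(query: str, results: list[str]) -> list[str]:
    """Demote explicit results unless the query itself signals explicit intent."""
    verdict = classifier(query)[0]  # e.g. {"label": "benign-intent", "score": 0.97}
    if verdict["label"] == "benign-intent":
        # A benign query like "Latina teenager" typed for a research project:
        # a stable sort pushes explicit hits below everything else.
        return sorted(results, key=looks_explicit)
    return results
```

The point mirrored here is the one Doshi describes: it is the query’s inferred intent, not just its literal words, that decides whether explicit pages surface at the top.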
The report also says that Google’s new MUM (Multitask Unified Model) algorithm will be calibrated to help people in dire situations find the resources they need to get out of them. MUM was launched in May 2021 to help people find the information they need, even for complex queries. The Responsible AI Team says the algorithm’s ability to retrieve such results will help people facing crises such as suicidal ideation, sexual assault or domestic abuse.
The team says that a search for, say, suicide jumping spots will now highlight suicide prevention resources in the area. MUM can also handle the longer, more complex statements that people in sensitive situations sometimes type into the search box, particularly what therapists call “negative self-talk”. With these changes in place, people in need will be pointed towards the support that can help them out of a crisis.
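Reuters does not describe how MUM flags these queries. As a loose illustration of the behaviour, the sketch below maps free-text queries onto crisis categories with an off-the-shelf zero-shot classifier; facebook/bart-large-mnli is a real public model standing in for MUM, while the labels, the 0.5 threshold and the resource table are assumptions (the hotline is the one given in this article).

```python
# Loose illustration: MUM itself is not public, so an off-the-shelf
# zero-shot classifier stands in for the crisis-intent detection.
from transformers import pipeline

classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

CRISIS_LABELS = ["suicidal ideation", "domestic abuse", "sexual assault",
                 "general information"]

# Hotline taken from this article; a real system would localise per country.
RESOURCES = {
    "suicidal ideation": "Samaritans of Singapore, 24-hour hotline: 1-767",
    "domestic abuse": "Local family violence helpline",
    "sexual assault": "Local sexual assault care centre",
}

def crisis_banner(query: str) -> str | None:
    """Return a support-resource banner if the query looks like a crisis."""
    result = classifier(query, candidate_labels=CRISIS_LABELS)
    top_label, top_score = result["labels"][0], result["scores"][0]
    if top_label != "general information" and top_score > 0.5:
        return RESOURCES.get(top_label)
    return None  # ordinary query: show no banner

# A query like the article's "suicide jumping spots" example should trigger
# the suicide-prevention banner rather than plain web results.
print(crisis_banner("suicide jumping spots near me"))
```

The “general information” label acts as a catch-all so that everyday queries fall through without a banner, which is the behaviour the article describes: resources appear only when the query signals distress.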
This announcement is Google’s latest attempt to rebuild its credibility in ethical AI, following a series of controversial incidents in 2021 involving the firing or removal of key researchers and reports of falling morale among those working to make Google’s search engine more responsible. The Responsible AI team, which announced the recent developments, was set up by Google in the wake of last year’s row.
If you or a loved one need someone to talk to, you can contact Samaritans of Singapore at its 24-hour hotline, 1-767. An online search will also point you to support resources in your country.
Google has announced that it has managed to reduce the number of explicit search results for historically sexualised search terms, especially those based on gender, ethnicity and sexual preference, by as much as 30%.
The search engine giant also said that its recently released MUM algorithm will help people who search for terms related to dire situations, such as domestic violence and suicide, find the resources they need.
Google’s announcement is a major achievement for the company, which has been dogged by rows over its ethical AI research.