Google Revises AI Algorithm

Google announced that it will implement restrictions on its AI search tool, AI Overviews.

The move follows instances of the tool giving false and absurd answers to users’ questions.

For example, when one Google user searched “smoking while pregnant,” AI Overviews responded, “Doctors recommend smoking 2-3 cigarettes per day during pregnancy.”

In response to another search, “cheese not sticking to pizza,” the AI tool suggested the user apply “non-toxic glue” to make the toppings stick.

AI Overviews is not a chatbot. Instead, it is designed to “carry out traditional ‘search’ tasks, like identifying relevant, high-quality results from our index,” a blog post from Google’s search head Liz Reid says.

“When AI Overviews get it wrong, it’s usually for other reasons: misinterpreting queries, misinterpreting a nuance of language on the web, or not having a lot of great information available. (These are challenges that occur with other Search features too),” the blog adds.

Reid explains that incorrect and absurd answers may be due to AI gathering information from discussion forums. In other cases, there may be a “data void” or an “information gap” for a particular subject.

To address these issues, Google has implemented “strong guardrails,” specifically for health queries.

“At the scale of the web, with billions of queries coming in every day, there are bound to be some oddities and errors,” Reid notes.

Google previously faced criticism over its image generator, Gemini.

The image generator failed to produce pictures of white people even when prompts with historical context called for them.

When asked to generate an image of “British Empire people,” the AI returned photos of “diverse” people. When prompted to “make a picture of a white family,” the AI tool said it was “unable to generate images that specify ethnicity or race.”