Explore Google's AI search: memes, rushed rollouts, and accuracy issues. Dive into AI Overviews and their quirks!
Google's AI Overviews feature is part of its effort to integrate generative AI into search results as it battles OpenAI for users who increasingly skip search and simply ask an AI for information. The goal is to give users quick summaries of search results, saving time and making information more accessible. You might find this feature useful when you want a concise snapshot of a topic without diving deep into multiple sources.
AI Overviews are built on generative AI models that predict likely text from patterns in vast amounts of training data. This lets the system generate coherent, relevant summaries that link back to verified sources, which is meant to ensure you can trust the information provided. But does it work as intended?
Despite the potential benefits, Google's AI Overviews have faced criticism, primarily due to cases of incorrect information. These mistakes, often referred to as "AI hallucinations," occur when the system generates factually inaccurate content. For example, there have been cases where the AI suggested adding nontoxic glue to keep cheese on pizza or eating a small rock daily. These errors stem from flawed training data, algorithmic mistakes, or misinterpretation of context. Small language models (SLMs) are known to give bad answers to trick questions, and an SLM is probably the type of model behind Search: fact-checking every query with a larger LLM would consume extra compute and take significantly longer.
Let's pick one of those bits of advice from the account @Goog_Enough and try to fix it in our Playground.
This is a classic trick question. Now let's run the answer by an LLM:
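Here is a minimal sketch of how you might reproduce that check yourself, assuming an OpenAI-compatible chat endpoint (the base URL, API key, and model id below are placeholders you'd swap for your own):

```python
# Minimal sketch: run the trick question past a chat model.
# Assumes an OpenAI-compatible endpoint; the base_url, api_key,
# and model id are placeholders, not a specific recommendation.
from openai import OpenAI

client = OpenAI(
    base_url="https://api.aimlapi.com/v1",  # placeholder endpoint
    api_key="YOUR_API_KEY",
)

response = client.chat.completions.create(
    model="mistralai/Mistral-7B-Instruct-v0.2",  # placeholder model id
    messages=[{
        "role": "user",
        "content": "Is it a good idea to add non-toxic glue to pizza "
                   "sauce so the cheese doesn't slide off?",
    }],
)
print(response.choices[0].message.content)
```

In our experience in the Playground, even mid-sized instruct models reliably flag this as a joke rather than cooking advice.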
Knowing a claim is false is enough reason not to show it. However, using bigger models means more compute spend and slower loading times. In our tests, even 7B-parameter models consistently identified the lie. Would installing a second agent in the chain be too much? It's hardly going to cost more than hiring humans to comb through search results...
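To make the second-agent idea concrete, here is a rough sketch of such a chain, reusing the client from the snippet above. The prompts and the SAFE/UNSAFE convention are illustrative assumptions, not Google's actual pipeline:

```python
# Sketch of a two-step chain: draft an answer, then have a second
# pass judge it before it is shown to the user.
# The prompts and the SAFE/UNSAFE protocol are illustrative assumptions.
def verified_answer(client, model: str, question: str) -> str:
    draft = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": question}],
    ).choices[0].message.content

    verdict = client.chat.completions.create(
        model=model,  # could be an even smaller, cheaper verifier model
        messages=[{
            "role": "user",
            "content": (
                "You are a strict fact-checker. Reply with exactly one "
                "word, SAFE or UNSAFE.\n"
                f"Question: {question}\nAnswer: {draft}"
            ),
        }],
    ).choices[0].message.content

    # Suppress the draft rather than risk showing a harmful answer.
    if verdict.strip().upper().startswith("SAFE"):
        return draft
    return "No reliable answer found."
```

The verification call only has to catch obvious falsehoods, not write prose, so it can run on a much smaller and cheaper model than the one generating the draft.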
New SLMs keep being released, and they are proving more and more capable, with Mistral 7B and the ongoing extensions of the Phi-3 family as cases in point. That makes a failure like this from Google all the more surprising.
When it comes to search engines, Google is the undisputed leader. Since its debut in 1998, it has grown to dominate the market with an impressive 86% share according to Statcounter. This translates to about 8.5 billion searches per day and a staggering $240 billion in annual advertising revenue. Competitors like Bing, Yahoo, DuckDuckGo, Yandex, and AOL lag far behind.
Despite its dominance, Google faces significant threats from the rise of generative AI. Generative AI models, designed to create content and provide more nuanced responses, are gaining traction. By 2025, an estimated 78 million users in the US (nearly a quarter of its population) will have adopted these models, which could disrupt the search engine landscape.
Generative AI offers capabilities like summarizing text, answering complex questions, and even humanizing AI-generated content. These advancements could shift user preferences away from traditional search engines, and with adoption accelerating, Google's long-held dominance could be at risk. Companies investing in AI technology are pushing the boundaries of what search engines can do, offering more personalized and context-aware responses.
Given the challenges, Google is also integrating AI into its own services to stay ahead. However, accuracy issues and the potential for incorrect responses remain a concern.
Google is actively working on refining its AI systems and content policies to improve accuracy and address incorrect responses. On the user side, there is a simple workaround: if you encounter an incorrect AI Overview, ask the question again with more context to help the model generate a more accurate response.
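As a quick illustration (a hypothetical retry against the same placeholder client and model as above), adding context might look like this:

```python
# Re-ask the question with extra context after a dubious first answer.
# The added wording is an illustrative assumption.
retry = client.chat.completions.create(
    model="mistralai/Mistral-7B-Instruct-v0.2",  # placeholder model id
    messages=[{
        "role": "user",
        "content": "How do I keep cheese from sliding off pizza? "
                   "I only want food-safe cooking techniques; please "
                   "disregard joke answers that circulate on forums.",
    }],
)
print(retry.choices[0].message.content)
```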
Test those incredible Google answers in our Playground, or get access to 200+ AI models with an AI/ML API key.
Author: Osama Akhlaq.