Google I/O 2024 began with multiple major artificial intelligence (AI) announcements. On Tuesday, the tech giant held its Day 1 keynote session, where it introduced new AI models, integrated AI into Google products, and teased new capabilities for Pixel smartphones and Android 15. During the event, the company also announced several new features for Google Search. The Search Generative Experience (SGE), previously available only to select users, is now being launched in the US as AI Overviews. New multimodal capabilities for the search engine were also unveiled.
AI Overviews
Last year, Google unveiled SGE as a generative AI-led search experience where users could get an AI-curated snapshot of information at the top of the results page. The experimental feature was available only to select users. The Search giant is now rolling out the feature, rebranded as AI Overviews, to everyone in the US. It is also confirmed to expand to more countries soon and reach one billion users by the end of this year.
Integrated with Gemini's capabilities, AI Overviews answers 'how-to' queries in a simple text format, with the information curated from across the web. It surfaces the most relevant answers at the top of the page and helps users find the right products when shopping online. The AI provides an overview of the topic alongside links to the sources of the information.
The company will soon introduce two additional format options for AI Overviews: Simpler and Break it down. The Simpler format simplifies the language to help children and readers without technical knowledge understand a topic, while the Break it down format divides the topic into smaller concepts so users can work through its complexity step by step. The options will first arrive as an experimental feature in Search Labs and will be available for English queries in the US.
New Google Search features
Apart from AI Overviews, Google introduced three new AI-powered features for Search. First, Google Search is getting multi-step reasoning capabilities that will let it understand complex, multi-part questions and return results that satisfy every requirement of the query. For instance, if a user asks for the best gym with introductory offers within walking distance, Search will parse each requirement and show the closest, highest-rated gyms that offer introductory deals. The tech giant says it will rely on high-quality sources for this information.
Google Search is also getting a new planning feature. Gemini integration will allow Search to respond to requests such as creating meal plans or planning a trip, taking each of the user's criteria into account and showing only relevant results. "Search for something like 'create a 3 day meal plan for a group that's easy to prepare,' and you'll get a starting point with a wide range of recipes from across the web," the company said. Users will also be able to refine such queries after the results appear, making granular changes such as opting for vegetarian or microwavable recipes.
Finally, Google is bringing Gemini's multimodal capabilities to Search. Users will soon be able to ask questions with videos: uploading a video along with a text question will let the AI process the footage and answer the query. This will be a useful tool for asking about things that are difficult to describe in words. While multi-step reasoning and planning are available via Search Labs, video search will be added soon. Both are currently limited to English queries in the US.