Google adds AI-powered overviews for multisearch in Lens

Key Points:

  • Google introduced an AI-powered upgrade to its visual search capabilities in Google Lens, allowing users to ask questions about what they see and receive AI-generated answers.
  • The AI-powered overviews for multisearch in Lens, with insights pulled from the web, are rolling out to all users in the U.S. in English, aiming to deliver richer search results while Google acknowledges the limitations of AI-generated answers.
  • Google’s AI advances, including the new multisearch results and the gesture-based Circle to Search, aim to enhance search capabilities while maintaining transparency and accuracy, with a focus on citing the sources behind its answers.

Summary:

Google has introduced a new AI-powered addition to its visual search capabilities in Google Lens, enabling users to get answers by pointing their camera or uploading a photo or screenshot, then asking a question about what they’re seeing.
The feature updates the multisearch capability in Lens, which lets users search with text and images at the same time, and now returns AI-generated insights and information pulled from the web, including websites, product sites, and videos.
The AI-powered overviews for multisearch in Lens launch today for everyone in the U.S. in English. The addition aims to keep Google Search relevant in the age of AI, though Google acknowledges the feature may not always provide accurate or relevant answers, given the nature of web content and the limitations of AI.
