Google has introduced a new AI-powered addition to its visual search capabilities in Google Lens, enabling users to get answers by pointing their camera or uploading a photo or screenshot, then asking a question about what they’re seeing.
The feature builds on the multisearch capability in Lens, which lets users search with text and images at the same time, and now returns AI-generated insights and information pulled from across the web, including websites, product sites, and videos.
The AI-powered overviews for multisearch in Lens are launching for everyone in the U.S. in English starting today. The addition aims to keep Google Search relevant in the age of AI, though Google acknowledges the overviews may not always provide accurate or relevant answers, given the nature of web content and the limitations of AI.