Google has launched a new way of searching by image with Google Lens named “multisearch.” Multisearch lets you search by image and then refine those results with a text query on top. And just to be clear, while Google says this is done using AI, Google is not using MUM for this feature at this point.
It is pretty cool how it works: you open the Google Search app on iOS or Android, snap a photo using the Google Lens button or upload a photo or screenshot from your photo library, and then swipe up to add additional text. Google explains: “open up the Google app on Android or iOS, tap the Lens camera icon and either search one of your screenshots or snap a photo of the world around you, like the stylish wallpaper pattern at your local coffee shop. Then, swipe up and tap the ‘+ Add to your search’ button to add text.”
Here is a GIF showing how it works. The swipe-up gesture can be confusing to find, so Google will have to work on that:
Here are two images showing the flow in a static format; click on them to enlarge:
Here are some examples of searches Google multisearch can answer (a rough sketch of how an image-plus-text query might be matched follows the list):
(1) Screenshot a stylish orange dress and add the query “green” to find it in another color
(2) Snap a photo of your dining set and add the query “coffee table” to find a matching table
(3) Take a picture of your rosemary plant and add the query “care instructions”
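Google has not shared implementation details beyond saying this uses AI (and not MUM). Purely as an illustration of how an image plus a text refinement can be matched against a catalog, here is a minimal sketch using the open-source CLIP model via the sentence-transformers library. The embedding-averaging approach and all file names are assumptions for demonstration, not Google’s actual method:

```python
# Conceptual sketch of "multisearch"-style retrieval -- NOT Google's implementation.
# Assumes a CLIP joint image/text embedding space (sentence-transformers' clip-ViT-B-32).
import numpy as np
from PIL import Image
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("clip-ViT-B-32")  # maps images and text into one vector space

# Hypothetical product catalog; these file names are placeholders.
catalog_paths = ["green_dress.jpg", "orange_dress.jpg", "blue_coat.jpg"]
catalog_embs = model.encode(
    [Image.open(p) for p in catalog_paths], normalize_embeddings=True
)

# The user's photo plus their refining text query (the "orange dress" + "green" example).
image_emb = model.encode(Image.open("screenshot_orange_dress.jpg"), normalize_embeddings=True)
text_emb = model.encode("green", normalize_embeddings=True)

# Naive fusion: average the two embeddings so both the look of the item
# and the text refinement pull on the ranking, then re-normalize.
query = (image_emb + text_emb) / 2.0
query /= np.linalg.norm(query)

# Rank catalog items by cosine similarity (dot product of unit vectors).
scores = catalog_embs @ query
for path, score in sorted(zip(catalog_paths, scores), key=lambda t: -t[1]):
    print(f"{score:.3f}  {path}")
```

A production system would almost certainly use a purpose-built composed-image-retrieval model rather than simple embedding averaging, but the sketch shows the basic idea of blending an image signal with a text signal in one query.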
This should work in the Google Search app in US English, and Google said to focus on shopping and clothing searches, where it works best.
And yes, this is a feature Google demoed at the Search On event last year.
Forum discussion at Twitter.