At its annual I/O conference today, Google announced an upgrade to multisearch that enables simultaneous queries about multiple objects around you.
With a new feature called scene exploration, Google strives to make search even more natural by combining its understanding of all types of information — text, voice, visuals, and more.
It builds on multisearch in Google Lens, introduced last month, and makes it possible to search entire “scenes” at once.
Google demonstrated scene exploration with an example of a grocery store shelf, showing how it can instantly identify items that meet a specific set of criteria.
How Does Scene Exploration Work?
When you search visually with Google using multisearch, it can currently recognize objects only within a single frame.
With scene exploration, rolling out at some point in the future, you’ll be able to use multisearch to pan your camera and glean insights about multiple objects in a wider scene.
In a blog post, Google states:
“Imagine you’re trying to pick out the perfect candy bar for your friend who’s a bit of a chocolate connoisseur. You know they love dark chocolate but dislike nuts, and you want to get them something of quality.
With scene exploration, you’ll be able to scan the entire shelf with your phone’s camera and see helpful insights overlaid in front of you.
Scene exploration is a powerful breakthrough in our devices’ ability to understand the world the way we do – so you can easily find what you’re looking for – and we look forward to bringing it to multisearch in the future.”
During the I/O keynote, Google explained that scene exploration uses computer vision to instantly connect the multiple frames that make up a scene and identify all the objects within it.
As scene exploration identifies objects, it simultaneously taps into Google’s Knowledge Graph to surface the most helpful results.
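To make that described pipeline concrete, here is a minimal sketch in Python of how a system like this might merge detections across camera frames and enrich them with knowledge-graph facts. Every name in it (Detection, detect_objects, KNOWLEDGE_GRAPH) is a hypothetical illustration for the chocolate-bar example above; Google has not published scene exploration’s actual implementation or API.

```python
# Conceptual sketch of the described pipeline: stitch camera frames into one
# scene, detect objects, and attach knowledge-graph facts to each detection.
# All names and data here are hypothetical, not Google's actual API.
from dataclasses import dataclass

@dataclass
class Detection:
    label: str        # e.g. "dark chocolate bar"
    frame_index: int  # which camera frame the object was found in

# Hypothetical stand-in for a knowledge base keyed by product label.
KNOWLEDGE_GRAPH = {
    "dark chocolate bar": {"cocoa": "70%", "nut_free": True, "rating": 4.6},
    "milk chocolate bar": {"cocoa": "30%", "nut_free": False, "rating": 4.1},
}

def detect_objects(frame_index: int) -> list[Detection]:
    """Stand-in for the computer-vision step; a real system would run an
    object detector over each frame as the camera pans across the shelf."""
    shelf = [["dark chocolate bar"], ["milk chocolate bar"]]
    return [Detection(label, frame_index) for label in shelf[frame_index]]

def explore_scene(num_frames: int) -> list[dict]:
    """Connect detections across frames, deduplicate them, and attach facts."""
    seen: dict[str, dict] = {}
    for i in range(num_frames):
        for det in detect_objects(i):
            if det.label not in seen:  # merge duplicates across frames
                facts = KNOWLEDGE_GRAPH.get(det.label, {})
                seen[det.label] = {"label": det.label, **facts}
    return list(seen.values())

# Example: surface nut-free, highly rated picks from the scanned shelf,
# mimicking the "insights overlaid in front of you" from Google's example.
for item in explore_scene(num_frames=2):
    if item.get("nut_free") and item.get("rating", 0) >= 4.5:
        print("Overlay insight:", item)
```

The key design point the keynote description implies is the deduplication step: because the same object can appear in several frames as the camera pans, detections must be connected across frames before results are surfaced.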
Source: Google
Featured Image: Poetra.RH/Shutterstock