Google is introducing a new ‘multisearch’ feature that lets users search using text and images at the same time.

“We’re introducing an entirely new way to search: using text and images at the same time. With multisearch in Lens, you can go beyond the search box and ask questions about what you see,” Google explained in a blog post.

To get started, users will need to open the Google app on Android or iOS, tap the Lens camera icon and either select one of their screenshots or snap a picture of what they would like to search. They can then swipe up and tap the “+ Add to your search” button to add text.

“With multisearch, you can ask a question about an object in front of you or refine your search by colour, brand or a visual attribute,” it said.

For instance, the tech major explained, users can screenshot an orange dress and add the query “green” to find it in another colour. They can snap a picture of their dining set and add the query “coffee table” to find a matching table, or take a picture of their rosemary plant and add the query “care instructions.”

“All this is made possible by our latest advancements in artificial intelligence, which is making it easier to understand the world around you in more natural and intuitive ways. We’re also exploring ways in which this feature might be enhanced by MUM, our latest AI model in Search, to improve results for all the questions you could imagine asking,” it further said.

Multisearch will be available as a beta feature in English in the United States, “with the best results for shopping searches.”

Separately, Google recently introduced a new Privacy Guide tool in Chrome to make it easier for users to understand privacy and security controls. Privacy Guide is a step-by-step guided tour of some of the existing privacy and security controls in Chrome.