More than 40 per cent of Adobe’s customers in India are new users, says Bryan Lamkin, EVP and GM, Digital Media, who guides the tech company’s Creative Cloud and Document Cloud businesses globally. A large chunk of Adobe’s new users are amateurs, social content creators and students, which is why, he told BusinessLine, the company is focussing on “simplifying” the user experience. Lamkin was here recently to sign a statement of intent with Niti Aayog’s Atal Innovation Mission to help develop creative skills and drive digital literacy across all Atal Tinkering Labs in India. Excerpts:

What are the biggest innovations that have happened in the Creative Cloud and Document Cloud segments?

I would say the biggest innovation in the whole area of artificial intelligence is our ‘Sensei’. Adobe has always invested heavily in research in the areas of graphics, publishing, video and imaging. The exciting thing we are doing now is taking that heritage and accelerating it dramatically with Adobe Sensei. We are moving from innovation at the individual product level to innovation at the platform level.

A lot of innovation is also going into expanding beyond the core business. There is an explosion of people who want to tell a story digitally. There are students who want to express themselves. Creativity has a big role to play in digital literacy. And a lot of the secret sauce is in making things simpler.

In document technologies, we are applying a lot of science to bring deep semantic understanding to documents, and also to bring structure back to them. When you create a PDF file, you strip out components of the document’s structure and intent. That structure is what makes a PDF accessible, so screen readers can read the content, generate summaries and so on, and the document becomes easier to navigate. Restoring it allows you to do more with the document, especially on a mobile device.
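To make the idea of document structure concrete, here is a minimal sketch, not Adobe’s implementation, of how one might check whether a PDF carries the logical structure tree (a “tagged” PDF) that screen readers depend on. It assumes the open-source pypdf library, and the filename is a placeholder:

```python
# Minimal sketch (assumption: the open-source pypdf library is installed).
# A "tagged" PDF stores a logical structure tree (/StructTreeRoot) in its
# document catalog; screen readers use it to navigate headings, lists, etc.
from pypdf import PdfReader

reader = PdfReader("example.pdf")  # placeholder filename
catalog = reader.trailer["/Root"]  # the PDF document catalog

if "/StructTreeRoot" in catalog:
    print("Tagged PDF: logical structure is present and accessible")
else:
    print("Untagged PDF: structure was dropped when the file was created")
```

The untagged case is exactly what Lamkin describes: the content survives, but the structure and intent of the document have been lost.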

On mobile, what changes have you had to bring about as devices evolve? We saw Samsung and Huawei launching foldable phones recently. How would your products change?

We are not announcing anything today. But the logical thing for us is to understand the form factor and intelligently allow our products to flow across it. We have a partnership with Samsung on Acrobat, and in particular on a new application called Adobe Scan. That is deeply integrated with Bixby, Samsung’s AI assistant, so it launches automatically when people want to read documents. Scan not only captures your document but also lets it enter the workflow seamlessly. I could scan your business card and populate the contacts on your phone using Scan.

We are also Samsung’s preferred partner for video creation on its new class of devices. In video, we have recognised a new emerging opportunity in social video creation. So we took a lot of the goodness (features) from Adobe Premiere and other applications, and brought it all together in a dramatically simplified, unified application called Adobe Premiere Rush that is aimed at social video creation.

What’s your big bet in the Document Cloud?

Adobe Sign is the “verb” on which we are putting our money. We have spent a lot of energy driving service innovation in Acrobat. If you think about it, Acrobat is a concatenation of the top 30 verbs you perform on a document: Edit, Combine, and so on. We make money because we make the best verbs. And a “mega verb” for us is Adobe Sign. The digital signature is an accelerating component of digital transformation. We see it being used in HR for hiring, in sales for sewing up contracts, and even in the personal sphere, where parents send back school reports with a digital signature online. It is exploding.

Adobe has primarily been about visual imagery. How are you integrating voice?

We have invested in Adobe XD to build interactive experiences. We started with visual imagery because that is where most of the web is. But recently we acquired Sayspring, a voice platform, and have worked hard to integrate its technology into XD. If you look at interactive experiences, people don’t say, “we want voice or images”; they just want interactivity. So what we are doing is creating a visual experience, enhancing it with voice, and then bringing it back to the visuals. It moves back and forth. We have also invested in the core of Sensei to enable many voice-driven experiences.
