“Hey Cortana, tell me a joke!” “Siri, what is the weather like?” “Alexa, can you play my favourite song?”

These are some of the questions people ask Artificial Intelligence (AI)-powered virtual assistants. And what they get in return are informed, measured and sometimes humorous responses. Some of these virtual assistants have evolved to the point where one can have a seamless conversation with them.

But what goes into giving these assistants their voice and a personality? At a recent interaction, Microsoft’s Content Experience Managers shared some details on this arduous creative journey.

“Research shows that people can get intimidated when interacting with AI,” said Jonathan Foster, Principal Content Experiences Manager, Windows and Content Intelligence. “Cortana (Microsoft’s virtual assistant) is a celebration of the individual and tries to put the person at ease to avoid a top-down kind of feel,” he added.

Human-like quality

Deborah Harrison, Senior Content Experience Manager, Conversational UI and Intelligence, made it clear that the team is not trying to represent AI as a person. “We are designing a human-like quality. We are not trying to be human,” she said.

But how do they draw the line between giving these assistants human-like conversational traits and aping full human interaction? “We reject the Turing Test outright,” Foster said. The Turing Test gauges a machine’s ability to exhibit intelligent behaviour indistinguishable from a human’s. “We are not creating robots or entities. We focus on it as an interaction model and the personality as a design around that model,” elaborated Foster.

In today’s world, where people can take offence at the drop of a hat, political correctness is another factor writers consider when designing responses. “We don’t own language. It is iterative. I think people get angry about political correctness because they want to own it,” Foster said.

Offensive responses

According to Foster, writers keep an eye out for responses that may be deemed inappropriate or offensive, and such responses can be taken down quickly. “We can pull down our responses in 24 hours,” he said.

“We do not engage in abusive language or make it an abusive game wherein people would expect a funny response to abusive language,” Harrison added. For example, the bot’s designed response to an abusive statement is simply “Moving on” or “Let’s move on”.

Microsoft has also developed a standalone personality suite for bot developers, which a developer can take as is and plug into the bot they are building. Microsoft demonstrated three personality types, which can be tried out on the Microsoft Cognitive Services Labs website: Friend (affectionate), Professional (like a hotel concierge) and Comic (playful).
