At Google’s recent developer conference, CEO Sundar Pichai demonstrated capabilities of Duplex, its virtual assistant that can make appointments with doctors or perform other similar functions. Amidst the euphoria at its launch, a basic ethical issue cannot be overlooked: does the consumer or end user know she’s talking to a machine?

Quite apart from the concerns raised about the impact of automation on jobs, the issue of holding a machine responsible when a consumer feels misled is equally important. This is especially true of "free" or "freemium" (free to start, pay later for premium features) technology, which puts the entire onus on the end user to be aware of the fine print.

Corporations harvest vast amounts of data to market their goods or services. The case of Cambridge Analytica harvesting data from Facebook comes to mind. Multiply that across the millions of users who are presented with the cool Alexa, Siri, Cortana or Duplex, and the problem of non-consensual data transfer gets worse. India currently has 500 million internet users, most of them gullible.

Imagine a scenario where a one-time password (OTP) is intercepted without a consumer's knowledge and a fraudulent transaction goes through. How will the financial institution tackle this situation when the principal actor is a machine?

Technologies like voice recognition will evolve at a rapid pace, but the larger question is: are lawmakers evolving quickly enough? Experts such as Sam Pitroda believe that governments and lawmakers are struggling with this issue. Meanwhile, the buck is likely to be passed on to the consumer.

Algorithms are not above reproach. As political scientist Virginia Eubanks argues in her recent book "Automating Inequality", data science is not "neutral". It is rigged against those from disadvantaged sections who need it the most. Hence the need for stronger consumer protection becomes important. Will Sophia the robot be held legally responsible?
