If ‘feelings’ are here, can ‘rights’ be far behind? Or, perhaps, should ‘rights’ precede ‘feelings’? These may be cryptic questions, but they are underpinned by deep philosophy.

Artificial intelligence is giving machines not just intelligence, the ability to learn by themselves, but also ‘sentience’. Anybody who has watched actor Rajinikanth’s 2010 Tamil blockbuster Enthiran (machine-man) would empathise with machines that get angry, feel pain, and fall in love.

Jacy Reese Anthis’ Sentience Institute intends to protect ‘feeling machines’ from harm. The 30-year-old American, who describes himself as a ‘quirky co-founder’ of the institute, argues that robots need rights even before they attain consciousness, and calls for a ‘Bill of Rights’ for them.

A survey conducted by the institute found that most people think like Anthis. In an email to Quantum, Anthis notes that most people agree that sentient AIs should be protected from deliberate harm: from non-consensual physical damage (68 per cent), from retaliatory punishment (76 per cent), and from people who would intentionally inflict mental or physical pain on them (82 per cent). “Overall, people seem surprisingly open to AIs having rights, assuming they are recognised as sentient,” he said.

It is time to reflect on an ethical point: if machines can feel pain because we humans gave them sentience, should we not also be responsible for protecting them from harm?
