Being human

Nakul Krishna | Updated on January 16, 2018 Published on December 30, 2016
I, robot?: Evan Rachel Wood in a still from Westworld (Reuters)

The new HBO television series Westworld raises anew the old question: who are we prepared to regard as human?

The American network HBO earlier this month concluded an enthralling first season of a new science fiction series, Westworld. The show borrows its premise from Michael Crichton’s 1973 film of the same name: a realistic Wild West-themed amusement park populated by anthropomorphic robots, or ‘hosts’, who allow rich ‘guests’ to act out their cowboy fantasies without fear of serious injury. But it becomes apparent a few minutes into the show’s pilot that the theme park, in Jonathan Nolan and Lisa Joy’s reimagining, will also be the site of a grand metaphysical drama.

The centre of this drama is Dolores, a host apparently coming to something like a fully human consciousness. As played by the gifted Evan Rachel Wood, she carries the weight of the drama’s philosophical themes on her shoulders, as she slowly learns of her own nature as an android. There is little that one can say about a series like this one, loaded with secrets and revelations, without giving away a spoiler or two along the way. It is safest to speak in broad terms.

The show begins with a narrative sleight of hand. A number of early scenes are shown to us, unlike in Crichton’s original film, from the point of view of the androids. It is the human beings, ‘guests’, who are the interlopers in their world. But this already imports a big idea into the proceedings, namely, that the androids have points of view of their own. And by inviting us so early to see things from that point of view, the show makes it hard for us to say something we might later want to say: that the androids are not human, and therefore not conscious or free or anything else that human beings are supposed to be, because they have no point of view to call their own.

Time after time in subsequent episodes, we are given information about the robots that makes it difficult to draw any clear-cut distinction between them and human beings. Is the difference that we are flesh and blood and they are not? But it turns out that they are flesh and blood too, indeed have been designed out of organic materials as a way of cutting costs. Is it that they have no memories of past actions or experiences? Again, it turns out that they do, or come close enough to doing so. Is it that they repeat certain pre-programmed loops every day? But who among us with nine-to-five office jobs sees that as a barrier to being counted human? Is it that they are limited in their intelligence? But so are we. Where exactly does the difference lie?

The strength of Westworld is not that it answers this question one way or another. Indeed, it derives its special power from repeatedly deferring any authoritative answer to these questions. The characters themselves are divided and confused about these things, and we soon discover that no character’s pronouncements can be taken at face value. The strength of Westworld is one it shares with all the best science fiction of the last hundred years: that it raises questions that we are inclined to ask in one form or another, but does so in a fictional world where they cannot just be brushed aside as merely theoretical matters.

Once we are sufficiently invested in the characters and their stories, it becomes harder to think in terms of abstractions about consciousness or free will. The big questions get replaced with smaller questions. No one denies that we can desire the androids or fight them; that is, after all, the idea behind the original amusement park. But can we love them, resent them, argue with them? And if we can, what follows for how we must think of them?

The American philosopher Hilary Putnam once remarked, in a discussion of the very idea of an artificial intelligence, “The question that won’t go away is how much what we call intelligence presupposes the rest of human nature.” The androids are presented throughout as a very sophisticated form of artificial intelligence, but the show makes it difficult to treat them as just that: to be intelligent in the way they evidently are, surely they have to be more than just intelligent.

In embodying these uncertainties, the androids of Westworld join an illustrious tradition of ambiguous humanoids in science fiction: the ‘replicants’ in Ridley Scott’s 1982 classic Blade Runner and the ‘Cylons’ in the 2004 television series Battlestar Galactica. What, we ask, makes these beings not fully human? We can only answer that it is the fact that they are not, or not sufficiently, like us. But that only raises an even harder question: what are we like?

(This monthly column discusses questions of morality through pop culture)

Nakul Krishna is a lecturer in philosophy at the University of Cambridge
