Japan, the land where you can rent yourself a friend or even the occasional grandparent, has earned another feather in its cap. Among the novels longlisted for the country’s prominent Nikkei Hoshi Shinichi Literary Award was The Day A Computer Writes A Novel. This piece of metafiction was co-written by an AI (artificial intelligence), which did the ‘writing’ after its developers — Hitoshi Matsubara and his team at Future University Hakodate — set the parameters for sentence construction and let the AI run wild, so to speak.

The Day A Computer Writes A Novel was one of two submissions from Matsubara and his team for the award, which has a blind reading process: the judges do not know the identity of the author(s). The novel did not go on to win, but the mere fact that it passed the first round of screening spread like wildfire across the internet these last couple of weeks. No job is safe, op-ed writers lamented. Why are we so disturbed by the idea of an AI producing literature, as opposed to, say, an AI controlling sophisticated weaponry (which happens all the time, one might add)?

There are, I’d argue, two principal reasons for this fear-and-loathing response. The first is the crudely deployed logic that robots or AIs are no good for ‘creative’ pursuits such as literature. I like to call this brand of logic the Small Wonder syndrome, after one of the worst television sitcoms ever made, wherein a computer engineer builds a humaniform child-robot for his wife and names it (nope, not using ‘her’) Vicki. Vicki was the projection of the optimistic-yet-fearful stance people had towards the nascent field of robotics back in the 1980s. It was supremely skilled at mechanical tasks, but pretty much every scene in the series turns on one joke: the futility of an artificial intelligence trying to approximate human nature.

Conversely, Italo Calvino felt that a lot of flesh-and-blood writers worked like robots, churning out assembly-line bestsellers. And yet even he conceded that to the consumer it is often all the same: readers are incapable of differentiating between man and machine. In his novel If on a Winter’s Night a Traveler, there is a character called Ermes Marana, who claims to have decoded the methods of Silas Flannery, the world’s bestselling mystery novelist. The name of Marana’s organisation, OEPHLW (Organisation for the Electronic Production of Homogenized Literary Works), suggests that he is in possession of an algorithm that can be used to ‘write’ original Flannery novels.

Calvino writes: “A team of ghost writers, experts in imitating the master’s style in all its nuances and mannerisms is ready and waiting to step in and plug the gaps, polish and complete the half-written texts so that no reader could distinguish the parts written by one hand from those by another. (...) It seems that their contribution has already played a considerable part in our man’s most recent production.”

The second reason for our fear of Japanese robot novelists is a little more subtle and has to do with a popular fallacy: the notion that linguistic and technological virtuosity are polar opposites. [This is further bolstered by the niggling concern that middle-class parents in this country share: is my child more inclined towards the (mystical, impoverishing, feminine) Arts rather than (pragmatic, lucrative, masculine) Science?]

Nothing, in fact, could be further from the truth. We now know that linguistic prowess correlates strongly with many kinds of knowledge, fuzzy logic especially. All languages have structure: syntax, rules, bits and pieces that can be taken apart and reverse-engineered. It should come as no surprise, then, that the limits of what an AI can achieve by way of reading and writing are being pushed every day.

Two intuitive programs tasted Twitter notoriety last week. The first was Tay, a chatbot developed by Microsoft and built to mimic the speech patterns of millennials on Twitter. Suffice it to say that this linguistic experiment did not end well: Tay took a crash course in everything reprehensible on the micro-blogging site. Racism and Holocaust denial? Check. Homophobia and casual sexism? Check and check.

The other star AI on Twitter is a neural network called Deep Drumpf (@DeepDrumpf). A neural network, simply put, is a package of data structures and programs that can study a large amount of text and form patterns of its own. In this case, the MIT research team behind Deep Drumpf appears to have fed the AI just about everything Donald Trump has ever said on record.
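The “form patterns of its own” idea can be illustrated with something far simpler than the neural network the MIT team actually used: a word-level Markov chain. This is my toy sketch, not their method — it merely records which word follows which in a corpus, then walks those statistics to spit out new text, which is the same study-then-generate loop in miniature.

```python
import random
from collections import defaultdict

def build_chain(text, order=1):
    """Map each word to the list of words observed to follow it."""
    words = text.split()
    chain = defaultdict(list)
    for i in range(len(words) - order):
        key = tuple(words[i:i + order])
        chain[key].append(words[i + order])
    return chain

def generate(chain, length=10, seed=None):
    """Walk the chain, picking a random observed successor at each step."""
    rng = random.Random(seed)
    key = rng.choice(list(chain.keys()))
    out = list(key)
    for _ in range(length - len(key)):
        successors = chain.get(tuple(out[-len(key):]))
        if not successors:  # reached a word with no recorded follower
            break
        out.append(rng.choice(successors))
    return " ".join(out)

# Tiny illustrative corpus (one of the tweets quoted in this piece).
corpus = "free trade can be wonderful if you have the power of nuclear weapons"
chain = build_chain(corpus)
print(generate(chain, length=8, seed=42))
```

With a corpus this small the output just echoes fragments of the input; feed it a few megabytes of transcripts and raise `order`, and the fragments start recombining in Deep Drumpf’s uncanny register — though real neural networks learn far richer patterns than these adjacency counts.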

Last week, the AI tweeted: “Here is what’s going to happen, OK? I’ll get rid of the Senate. They don’t know what they’re doing.” And here’s my favourite, a tweet from March 31, which says: “Free trade can be wonderful if you have the power of nuclear weapons.” As people who’ve followed Trump’s madcap march towards the White House can testify, Deep Drumpf could have done far worse.

And we can always do far worse: Calvino seemed to think so, at any rate. I’ve been thinking about his bleak view of the reading population lately. He’s not far off the mark, one feels. Those who disagree are welcome to log on to botpoet.com and take one of its “Bot or Not” quizzes, where you are challenged to guess whether a particular poem was written by a human or generated by an algorithm. I scored a measly four out of 10, and I assigned robotic origins to a Gerard Manley Hopkins poem while I was at it.

Do they make robots that can approximate a mixture of guilt and shame?
