Liberal democratic theory treats political choice as akin to shopping. Political parties compete for votes in the free market of the election, and those judged to meet the sovereign demands of the electorate end up winners.

Voting decisions happen in direct encounters between the voter and the world of information, where every little nugget has its utility. As a rational actor, the voter will sort through a torrent of information and find a way towards a voting decision in his or her best interests.

There is, of course, the problem of self-serving information that a contestant in the political realm may present to a gullible voter population, all too prepared to believe any absurdity. The rational actor model rules that out, simply because — to employ a tautology — actors are rational.

Why then this kerfuffle about Facebook? Sovereign consumers signed up for its services after reading the attached terms and conditions. With the licence available from this agreement, Facebook packaged its consumer as what the early 20th-century economist Thorstein Veblen memorably called a “hedonistic” individual, a “lightning calculator of pleasures and pains” and a “homogeneous globule of desire”. Despite being a victim of “circumstances external and alien”, he or she could be counted on at all times to make a rational decision.

The incipient discipline of the sociology of the internet gives this propensity a name: confirmation bias. A decision is rational within any individual’s terms of reference if it conforms to existing predispositions.

A whistleblower at the shady data-mining firm Cambridge Analytica recently exposed Facebook’s complicity in a mass harvesting of personal data that breached user privacy and possibly caused a perverse electoral outcome in the US in 2016. The anxiety is not new. In 1957, American journalist and social commentator Vance Packard published a classic book titled The Hidden Persuaders, identifying the advertising industry as the unseen accessory of corporate power, working its influence subtly on the mind of the consumer.

One of Packard’s major concerns was the manner in which, during the 1956 presidential election in the US, democratic choice was manipulated by carefully constructed advertising images. Packard was unimpressed by this mode of advertising, premised upon the “depth approach”, which mined the human psyche to exploit its deepest vulnerabilities. In the years that followed, the power of the media to determine electoral outcomes rapidly increased, often with organised corporate power driving choices through advertising.

In 2014, a data scientist at Facebook, in collaboration with Cornell University, was found to have mined the personal information of a massive sample of users to understand the phenomenon of “emotional contagion”. The results were published in the pedigreed Proceedings of the National Academy of Sciences, with the confident declaration that all applicable norms in the use of personal data had been honoured.

The object was to study whether “emotional states” were transferred via “contagion”. The methodology was explained in detail, including the criteria for determining the “valence” of a mood, its positive or negative character. But the ethics of the process did not merit the same elaboration: merely the claim that “it was consistent with Facebook’s data use policy, to which all users agree prior to creating an account”. That act of registration on Facebook, the social media corporation said, constituted “informed consent”.

The results, gratifyingly for the researchers, showed that “emotional contagion” was real: “For people who had positive content reduced in their News Feed, a larger percentage of words in status updates were negative and a smaller percentage were positive”. This finding was confirmed by an “opposite pattern” prevailing when “negativity was introduced”.

It is a commonplace observation that an unyielding barrage of negative news tends to depress, while a steady diet of positive, “feel-good” information could elevate. The more enduring contribution of Facebook’s research with Cornell may well have been unintended, and one that dawned only late.

It was only after a month of slumber that the editors of the academy’s journal woke up to the ethical implications. In an “expression of concern and correction”, the journal pointed out that applicable guidelines in experiments involving human subjects required “informed consent” and an “opt out” clause. These were “best practices in most instances”, and though adherence to this “common rule” was the policy of the academy, Facebook, as a private enterprise, “was under no obligation to conform”.

Meanwhile, mainstream media commentary painted a troubling picture of a social networking platform that, “in exchange for an admittedly magical level of connectivity”, was taking over entire lives “as content”. Privacy settings were available to everybody who signed up, as was a degree of choice in the information Facebook would be authorised to make available to advertisers.

Business enterprises were interested in tapping into the database of buying intent that Facebook was assembling. The Facebook business model was built on selling that intent to any among a number of entities, and the corporate interest was typically hidden in the small print.

Given the spurious symmetry introduced between voting choices and shopping, there is no reason why Facebook’s pursuit of advertising gold should be viewed differently in the two realms. The fiasco does not represent a clear and present danger to liberal democracy. Far from it: the debacle brings to light some of the logical fallacies and absurdities in liberal democratic theory’s basic premises.


Sukumar Muralidharan teaches at the school of journalism, OP Jindal Global University, Sonipat
