Some time ago, I helped an old lady book a rail ticket online. As we made the payment, a message popped up along with the e-ticket: “Are you aware that 43 per cent of your fare is borne by the common citizens of the country?” The lady got curious about the message. “This is insulting,” she told me. “Why are they sending such a blanket message to all? It’s the rich they must target, specifically,” she said. I told her it was an automated message. “What kind of automation is this then?” she wondered. “It can’t even figure out who’s eligible and who’s not. I felt like I was robbing someone.”

Elsewhere, consumers of cooking gas often get electronic messages from government agencies requesting them to forsake their subsidy for the collective good. However well-intentioned, such messages create a sense of uneasiness among their recipients, making them feel they are abusing a privilege. Is this done intentionally? You never know. Will this get better if the government uses better data and profiling technologies to find the ‘eligible’?

Unlikely, as political scientist and teacher Virginia Eubanks finds in her brilliant work Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor. In fact, such technologies make the problem worse, Eubanks observes, examining how automated eligibility systems discourage the poor from claiming public resources that they need to survive and thrive. Eubanks’ geography of study, the US, is a different one, but the cases she tracks to show how data technologies filter out “undeserving” beneficiaries strike a chord with India today, and more importantly, tomorrow. This is especially so in the context of direct benefit transfers of social subsidies and the use of advanced technologies, including biometrics, which have triggered controversies over breach of privacy and state overreach.

Data, the problem-solver?

Eubanks studies three cases: automation of welfare eligibility in Indiana; a project to create an electronic registry of the homeless in Los Angeles; and an attempt to develop a risk model to predict child abuse in Allegheny County. Of these, the Indiana example is striking for the takeaways it has for countries such as India, where authorities fancy implementing similar systems.

Many policy pundits who advocate the use of technology to monitor social welfare, including in India, vouch for the ‘impartiality’ and ‘accuracy’ of data when it comes to ‘streamlining’ subsidies and welfare programmes while ‘plugging’ leaks in benefits transfer. But Eubanks illustrates through stunning case studies that complex integrated databases collect the most personal information about the underprivileged and the working poor, with few safeguards for privacy or data security, “while offering almost nothing in return”.

Why and how does this happen? Because data science is not neutral. It is biased. Predictive models and algorithms that tag, categorise and filter people bear all the biases and prejudices of the people who have created, and are using, them. Data scientist Cathy O’Neil, in her seminal work Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy, drives this point home impeccably, but Eubanks’ examples are more personal and, hence, more startling and piercing.

Big Brother and us

A popular refrain heard in policy circles while discussing the use of unique identification methods, location-based services and profiling technologies in social welfare is that they help the poor by eliminating middlemen and corruption. Eubanks doesn’t agree. She says this casts a shadow of unwanted surveillance over the vulnerable and the poor. “Vast complexes of social service, law enforcement, and neighbourhood surveillance make their every move visible and offer up their behaviour for government, commercial, and public scrutiny,” she observes. As a result, these ‘informational sentinels’ start determining what the poor can access.

Interestingly, unlike the privileged rich, who use sophisticated technologies and are aware of their privacy rights, the vulnerable poor and the marginalised are forced to give away more data about their movement, identity and activities. Look at India’s case: when cellphone companies ‘wrongly’ interpreted government and court orders and asked their customers to link their numbers to Aadhaar, the first to respond were the working poor and other underprivileged groups, who worried that their access to social and financial services (for which they use their mobile number as an identifier) would be cut off. The middle class and the rich, by contrast, resisted the move, which was later quashed by the Supreme Court, forcing the Centre to end its dilly-dallying on the issue.

Eubanks is crystal-clear on this: “Marginalised groups face higher levels of data collection... That data acts to reinforce their marginality when it is used to target them for suspicion and extra scrutiny.” Notably, Eubanks says that groups, rather than individuals, are subjected to such processes. She gives the example of how, in 2014, Maine’s Republican Governor Paul LePage used electronic benefits transfer data to track and “stigmatise” poor and working-class people.

In his novel 1984, George Orwell got one thing wrong, Eubanks says. “Big Brother is not watching you, he’s watching us.” Most people are targeted for digital scrutiny as members of social groups, not as individuals, she explains. People of colour, migrants, unpopular religious groups, sexual minorities, the poor, and other oppressed and exploited populations bear a much higher burden of monitoring and tracking than advantaged groups, she warns, calling such acts “collective red-flagging.”

Wrong calls

Those who have tracked the recent arguments for and against economic austerity can easily spot the link between demands for cuts in social spending in debt-ridden countries such as Greece (favoured by the likes of the IMF and the World Bank) and policy think-tanks’ push for data-enabled algorithmic solutions to clear discrepancies in welfare distribution, along with calls to use artificial intelligence to enhance such decision-making. It is a deadly cocktail, because it only increases inequality, making the poor poorer and the rich richer.

Today, we have ceded much of that decision-making power to sophisticated machines, Eubanks writes. According to her, the economic insecurity of the last decade has been accompanied by an equally rapid rise of sophisticated data-based technologies in public services.

Massive investments in data-driven administration of public programmes are rationalised by a call for efficiency, doing more with less, and getting help to those who really need it, she says. “But the uptake of these tools is occurring at a time when programs that serve the poor are as unpopular as they have ever been.” And this is no coincidence, reasons Eubanks.

Technologies of poverty management are not neutral, Eubanks writes. They are shaped by “our nation’s fear of economic insecurity and hatred of the poor; they in turn shape the politics and experience of poverty.” She is indeed talking about the US today, but also about India tomorrow, if we don’t find adequate checks and balances to make hi-tech tools egalitarian and inclusive. Again, that’s a big ask.

MEET THE AUTHOR

Virginia Eubanks is an associate professor of political science at the University at Albany. She is the author of Digital Dead End: Fighting for Social Justice in the Information Age. She is a founding member of the Our Data Bodies Project and a Fellow at New America.
