People use facts to hold on to what they already believe. I'm paraphrasing the Scottish poet and literary critic Andrew Lang, who made the same argument about how people use statistics to support their way of interpreting a situation.

Lang was credited with that quote in 1937. What hasn't changed since is how people prefer stories and opinions over facts, and how powerfully stories travel through information networks, both technical and personal. What has changed is the speed at which information travels, our ability to connect with like-minded others as never before, and our capacity to share and snowball stories through communities and networks.

Making meaning

Stories provide meaning to facts. How we hear stories is affected by our beliefs, our value systems and our life experiences; we accept the bits that match what we want to hear more readily than those that don't. We're all looking for reassurance about our view of the world. Social media platforms understand this: we're more likely to consume and engage with content that supports our way of thinking.

Algorithms study our online behaviour and give us more of what we want to see: content uploaded or shared by people within our networks, or sponsored and paid content that aligns with our thinking. The more time we spend on these platforms, the more they're able to monetise our attention.

In 2016, in a bid to reverse its fortunes and join Facebook in the ranks of social media giants, Twitter shed its cool-kid image and introduced an 'algorithmic timeline'. Instead of seeing every tweet from every person we follow in chronological order, with the most recent at the top, we now see more tweets from the people we interact with most, and more of the most popular tweets from others we follow. This ensures that the most popular tweets are far more widely seen than they used to be, enabling them to go viral on an unprecedented scale.

That's all good until we see the trade-off. We can't see more of some kinds of tweets without seeing less of others. We're being insulated from viewpoints that may counter our own. These platforms are no longer social networks, but real-time, personalised news services with no human editors.

A kind of isolation

Danah Boyd, Principal Researcher at Microsoft, says: "We've built an information ecosystem where information can fly through social networks." What's at stake isn't 'fake news'; it's "the increasing capacity of those committed to a form of isolationist and hate-driven tribalism that has been around for a very long time".

Every time we click on a link with a baiting headline, or like or share information without double-checking the source or verifying its accuracy, we're willing participants in this war on information. Once we've inadvertently shared a misleading article, image, video or meme, the next person who sees it, someone who probably trusts us, goes on to share it. We're helping 'rocket atoms of propaganda' through an information ecosystem at high speed, powered by trusted peer-to-peer networks. When we see multiple messages about the same topic, our brains use that repetition as a shortcut to credibility.

In India, fake news travels fastest through private WhatsApp groups, making it hard to track and harder to disarm. Messages whistle from smartphone to smartphone with no need to register or log in to use the service. What makes India's problem more dangerous is the propensity of such propaganda to incite communal violence.

We're in a war against fact-free news. When we're fed information that appeals to our emotions, it polarises us. The next time we read something that makes us feel good, we should question not just the information but also why it appeals to us.

When we see someone sharing inaccurate information on Facebook, we should try talking to them to understand what they're thinking, rather than simply presenting an alternative point of view that will only polarise beliefs further.

The writer is managing partner of On Purpose, a communications consultancy