Suicide and technology: Partners in crime?

Mala Bhargava | Updated: September 13, 2018 at 12:00 AM

Technology brings a new twist to suicide, but it also makes it easier to prevent it

Live streaming of suicides is a worrying phenomenon (iStock)

There’s a gruesome new fashion taking root in our society today: high-tech suicide. And just as a model walks the ramp showing off a swirling dress, someone shows off taking their own life on camera, for the world to see.

This June, a Bengali woman hanged herself and streamed the event live on Facebook. She appears to have been aided by a boyfriend or, as reports go, did it ‘for him’ while he watched. Elsewhere, a girl turned on her camera and threw herself under a train. One can debate why anyone would want to be so strangely exhibitionistic about ending their life, but the inescapable fact is that technology has brought an unfortunate new dimension to suicide.

It’s easy enough to lay the blame squarely on new technology. Not only have people live-streamed their suicides, but mysterious dark forces have snared vulnerable teens into killing themselves through ‘challenges’ like the Blue Whale challenge and now the Momo challenge. Young people are persuaded to play along and work through a series of tasks ending in death. YouTube is currently receiving flak for the appearance of the hideous Momo game on its platform, though the challenge began on WhatsApp and has also migrated to Minecraft, a game popular with children.

It isn’t just games but outright cyberbullying that can result in suicide, as we all know well. Social media has made harassment, stalking and pressure easier to inflict. Facebook has seen more than its share of suicide-related content, from users hinting at harming themselves to outright live broadcasts. For many years now, the social network has let users flag posts they think may contain warning signs. And if users miss these signs, or hesitate to act for fear of intruding, there are now algorithms that pick up on posts that look suicidal. The AI then hands off to human reviewers, who vet the posts and contact helplines, first responders or families. But the initiative needs to be as strong everywhere else in the world as it is in America.

Tanmay Bakshi, the teenage prodigy who now works with IBM, has spoken about how artificial intelligence and deep learning can be used to prevent suicide among depressed teens. A 14-year-old himself, he believes suicide helplines are not the answer for at-risk teens, pointing out that they feel dated and uncomfortable to today’s youngsters.

Help from AI

Depressed teenagers typically give off clear warning signs through their behaviour: changes in sleep patterns and appetite, a drop in social interaction, or a reduction in activity levels. Unfortunately, human beings are not very good at picking up on these signals. A smartwatch or phone could detect these patterns, and a neural network could then pick up on them. Tanmay points out in a TED Talk that targeted advertising is a perfect example of how our behaviour is already being monitored and used by AI. “You’re shopping online and the next thing you know, that pair of shoes you Googled yesterday are now following you everywhere today — on email, Facebook and even LinkedIn. It’s like they’re stalking you,” he says. “What you may not know is that if I’m a depressed teen and have suicidal thoughts, guess what I see? Nothing at all.”

Noting how these signals go to waste for suicide prevention, he suggests that AI can easily detect relevant life changes and bring help to the person at risk, whether by suggesting an action or alerting the family. Irregularities in a person’s health data, social media activity and much more can be pieced together into a picture of mental health faster than any human could manage.
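To make the idea concrete, here is a minimal sketch of how such a system might flag a sustained drop in daily activity against a person’s own rolling baseline. This is an illustrative assumption, not Tanmay’s or any product’s actual method: the data, window size and threshold are hypothetical, and a real screening system would draw on far richer signals, a trained neural network and clinical validation.

```python
# Illustrative sketch only: flag sustained deviations in daily activity data.
# All parameters (window, threshold, run length) are hypothetical choices.

from statistics import mean, stdev

def flag_anomalies(daily_steps, baseline_days=28, threshold=2.0, run_length=5):
    """Return day indices where activity stays unusually far below the
    person's own rolling baseline for several consecutive days."""
    flagged, low_run = [], 0
    for i in range(baseline_days, len(daily_steps)):
        window = daily_steps[i - baseline_days:i]   # personal rolling baseline
        mu, sigma = mean(window), stdev(window)
        if sigma == 0:
            continue                                # no variation to compare against
        z = (daily_steps[i] - mu) / sigma           # how unusual is today?
        low_run = low_run + 1 if z < -threshold else 0
        if low_run >= run_length:                   # sustained drop, not one lazy day
            flagged.append(i)
    return flagged

# Example: a steady routine followed by a sharp, sustained drop in activity
steps = [8000 + (i % 7) * 300 for i in range(40)] + [1500] * 10
print(flag_anomalies(steps))  # indices of days that might warrant a check-in
```

The key design choice, in line with Tanmay’s point, is that the baseline is the individual’s own recent behaviour rather than a population norm, and that a single unusual day is ignored; only a sustained pattern raises a flag.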

Suicide prevention must be proactive enough to spot early signs of depression, not just imminent suicide attempts. Wysa, a chatbot created by an Indian startup led by Jo Aggarwal, tries to be there for those who need someone to talk to, whether at 4 am or in the middle of a busy day. The bot is based on artificial intelligence but has a team of humans behind it. Besides being a sounding board that listens and helps, it can set preventive measures in motion. It’s an example of how technology can help as much as it can abet suicide. Technology, after all, is neutral. It is only as good or as bad as people make it.

The flipside

But Dr Alok Sarin, a leading Delhi psychiatrist, is wary of technology being used to monitor people on an everyday basis, and of algorithms making their own decisions about where ‘normal’ leaves off and mental ill-health begins. “I’m very uncomfortable with the idea of artificial intelligence trying to identify teenagers who are possibly depressed — or not,” he says. “The intrusion into the privacy of those who turn out not to be thinking of suicide or being depressed is a frightening prospect.” Since identification by the AI leads to action being taken, it can turn out to be a disruptive experience for those who never needed help.

Published on September 12, 2018 15:24