Bullying, stalking, body-shaming and such other attacks on young girls are not confined to the physical world but are routine in the cyber sphere, too.

Vasudeva Varma, Head of the Information Retrieval and Extraction Lab at the International Institute of Information Technology, Hyderabad (IIIT-H), says that one reason 35-40 per cent of pre-teen girls feel low and unhappy is the time they spend in the cyber world.

“About 20 per cent of them are clinically depressed, necessitating medical intervention,” he adds.

Leveraging NLP tools

To help them fight this online toxicity, IIIT-H has launched Project Angel, which seeks to use Natural Language Processing (NLP) tools to build a ‘resident angel’ for their smart devices. “It will be in the form of a regular user of a social media platform.

“When you follow it, it will look for objectionable content categorised as bullying or stalking with objectionable language, and will alert you,” he says.

“It will step in to protect the girls from online toxicity and steer them towards positivity with appropriate reading recommendations,” he says.

The researchers have begun working on fundamental building blocks such as detecting social biases online, toxicity in the form of body-shaming and sexual harassment, and the presence of echo chambers.

“We are essentially building NLP toolsets or models to understand the language of teens across continents. In the first phase, the tool will smart-read the content and classify and categorise it,” he says.

Positive messages

The angel tool would look for positive vibes in the cyber world, capture them and present them to the user. “What this means is that if positive messages are found on Twitter, we try to transfer that into an Instagram message, so that the message will come from the ‘angel’ present on the Instagram network,” Varma says.

With an anthropologist and a researcher at Adobe on board, whose field of interest is affective computing (a branch of computing that develops systems able to interpret, process and simulate emotions), this multi-disciplinary project seeks to bring in novel insights and analyses of not just the language of social media, but also the images posted and shared online.

Researchers at IIIT-H are working on identifying toxic content online, which includes hate speech and sexist diction. The deep neural networks developed at the lab could detect online sexist comments, label and categorise them.
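The first-phase classify-and-categorise step described above can be illustrated with a toy text classifier. The sketch below uses a simple multinomial naive Bayes model over bag-of-words counts, not the deep neural networks the IIIT-H lab actually employs, and is trained on a handful of hypothetical comments invented purely for illustration:

```python
import math
import re
from collections import defaultdict

# Hypothetical labelled comments, invented for illustration only.
TRAIN = [
    ("you are so dumb and ugly", "toxic"),
    ("nobody likes you just quit", "toxic"),
    ("what a stupid worthless post", "toxic"),
    ("great picture, love the colours", "benign"),
    ("thanks for sharing this recipe", "benign"),
    ("have a wonderful day friend", "benign"),
]

def tokens(text):
    """Lowercase a comment and split it into word tokens."""
    return re.findall(r"[a-z']+", text.lower())

class NaiveBayes:
    """Multinomial naive Bayes over bag-of-words counts."""

    def fit(self, data):
        self.word_counts = {"toxic": defaultdict(int), "benign": defaultdict(int)}
        self.class_counts = {"toxic": 0, "benign": 0}
        self.vocab = set()
        for text, label in data:
            self.class_counts[label] += 1
            for w in tokens(text):
                self.word_counts[label][w] += 1
                self.vocab.add(w)
        return self

    def predict(self, text):
        total = sum(self.class_counts.values())
        best, best_lp = None, -math.inf
        for label in self.class_counts:
            # Log prior for the class.
            lp = math.log(self.class_counts[label] / total)
            denom = sum(self.word_counts[label].values()) + len(self.vocab)
            for w in tokens(text):
                # Laplace (add-one) smoothing so unseen words don't zero out.
                lp += math.log((self.word_counts[label][w] + 1) / denom)
            if lp > best_lp:
                best, best_lp = label, lp
        return best

clf = NaiveBayes().fit(TRAIN)
print(clf.predict("you are dumb and nobody likes you"))  # toxic
print(clf.predict("thanks, what a wonderful picture"))   # benign
```

A production system would replace this with learned embeddings and a neural classifier trained on large annotated corpora, and would categorise content into finer-grained labels (bullying, stalking, body-shaming) rather than a single toxic/benign split.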

Nimmi Rangaswamy, a human-computer interaction anthropologist, says she is focussing on Instagram, considered the place to be for young netizens.

Her students are following a curated set of influencers on Instagram and trying to analyse their posts and the type of comments they attract.
