You’re kidding, right?

Not at all. Just this week, the social media site launched a pilot programme in Australia, under which it invited users to send it “intimate images” of themselves.

What! Why?

Sounds creepy, right? Particularly because the invitation comes from a platform whose founder, Mark Zuckerberg, earned his pre-Facebook street-cred by setting up a campus website on which hormonal teenagers could rate the ‘hotness’ of their female peers, based on photographs he had whisked off university domains.

What earthly reason could justify such an invitation from a $500-billion company?

Facebook has said that the initiative, which could be extended to other countries based on the response, is really aimed at protecting users by ensuring that nude photos and other intimate images of them don’t get posted on Facebook, Instagram and other platforms without their consent. There have been many such instances of “revenge porn” attacks, including in India.

I’ve clearly lived a sheltered life: what’s ‘revenge porn’?

Legal scholars Debarati Halder and K Jaishankar define ‘revenge porn’ as “an act whereby the perpetrator satisfies his anger and frustration for a broken relationship through publicising false, sexually provocative portrayal of his/her victim, by misusing the information he may have known naturally and that he may have stored in his personal computer, or may have been conveyed to his device by the victim herself, or may have been stored in the device with the consent of the victim herself; and which may essentially have been done to publicly defame the victim.”

That’s a mouthful.

Essentially, it’s about jilted lovers posting sexually explicit photos of their ex-partners; the photos or videos may have been taken with the partners’ consent, in cheerier times, but posting them on social media without consent is a form of perverse humiliation.

And Facebook wants you to voluntarily share these photographs?

It certainly is counter-intuitive, but it works like this. To prevent “non-consensual intimate images” from being posted, Facebook is working with Australia’s eSafety Commissioner’s Office. Users who fear that such images of them may be posted, and who have copies of the photographs, can proactively alert the eSafety Commissioner’s Office and send the images to themselves on Messenger. A member of Facebook’s Community Operations team then “hashes” the photographs, creating a “DNA” of each photo, and stores the hashes (not the photos themselves) in the site’s database. After that, the shared image can be deleted from Messenger.
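Facebook has not published its exact algorithm; it is believed to rely on perceptual photo-matching technology that survives resizing and re-compression. Purely as an illustration of what “hashing” a photo into a “DNA” means, here is a toy ‘average hash’ sketch in Python; the filename is hypothetical and the approach is an assumption for illustration, not Facebook’s actual code.

```python
from PIL import Image  # third-party Pillow library

def average_hash(path: str) -> int:
    """Toy perceptual hash: shrink the image to 8x8 greyscale,
    then record which pixels are brighter than the average,
    packed into a 64-bit integer: the photo's rough 'DNA'."""
    img = Image.open(path).convert("L").resize((8, 8))
    pixels = list(img.getdata())
    average = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > average else 0)
    return bits

# Only the 64-bit hash is retained; the image itself can be deleted.
# "reported_photo.jpg" is a hypothetical filename for illustration.
hash_database = {average_hash("reported_photo.jpg")}
```

Unlike an ordinary file checksum, a perceptual hash of this kind produces nearly identical values for two visually similar copies of a photo, which is what makes it useful for spotting re-uploads.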

How does all this help?

Any time someone tries to upload an image to Facebook, it is checked against the database of photo-hashes; if there is a ‘DNA’ match, Facebook blocks the upload. This way, Facebook users can pre-empt their own digital ‘humiliation’, but only by first trusting Facebook with those selfsame intimate images.
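Continuing the toy sketch above, the upload-time check amounts to comparing the new image’s hash against every stored hash; with a perceptual hash, a ‘match’ usually means the two differ in only a handful of bits. The five-bit threshold and the filename below are assumptions for illustration, not Facebook’s figures.

```python
def hamming_distance(a: int, b: int) -> int:
    """Count the bits on which two 64-bit hashes differ."""
    return bin(a ^ b).count("1")

def upload_is_blocked(path: str, threshold: int = 5) -> bool:
    """Block an upload whose hash is near-identical to a reported one."""
    candidate = average_hash(path)
    return any(hamming_distance(candidate, stored) <= threshold
               for stored in hash_database)

# Hypothetical usage: the attempted upload is rejected on a match.
if upload_is_blocked("attempted_upload.jpg"):
    print("Upload blocked: image matches a reported photo.")
```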

And Facebook can’t be trusted?

Facebook says it is working alongside survivors and ‘victim advocates’. But the programme works only if the victims have copies of the original images. And sharing such images with Facebook demands a level of trust that the site’s recent record does little to inspire: the company recently acknowledged that Russian-bought propaganda advertisements on its platform had spread divisive messages to US voters in an attempt to manipulate last year’s Presidential election.

The bottom line?

Facebook’s programme is well-intentioned, but if users don’t trust it, the company has only itself to blame.

A weekly column that helps you ask the right questions