Facebook announced on Tuesday that it is testing new safety measures to fight revenge porn — by analyzing nude photos submitted by users.
Facebook announced in November 2017 that it would introduce new methods of fighting revenge porn using artificial intelligence and photo-matching technology; the catch is that Facebook needs users' nude photos for the system to work. On Tuesday, the company announced that it has partnered with several groups, including the Cyber Civil Rights Initiative and the National Network to End Domestic Violence, according to The Wrap.
An announcement on the official Facebook Safety page titled “People shouldn’t be able to share intimate images to hurt others,” written by the company’s Global Head of Safety, Antigone Davis, reads:
It’s demeaning and devastating when someone’s intimate images are shared without their permission, and we want to do everything we can to help victims of this abuse. We’re now partnering with safety organizations on a way for people to securely submit photos they fear will be shared without their consent, so we can block them from being uploaded to Facebook, Instagram and Messenger. This pilot program, starting in Australia, Canada, the UK and US, expands on existing tools for people to report this content to us if it’s already been shared.
Today, people can already report if their intimate images have been shared without their consent, and we will remove each image and create a unique fingerprint known as a hash to prevent further sharing. But we can do more to help people in crisis prevent images from being shared on our services in the first place. This week, Facebook is testing a proactive reporting tool in partnership with an international working group of safety organizations, survivors, and victim advocates, including the Australian Office of the eSafety Commissioner, the Cyber Civil Rights Initiative and The National Network to End Domestic Violence in the US, the UK Revenge Porn Helpline, and YWCA Canada.
The post explains the process by which users can protect themselves from revenge porn:
– Anyone who fears an intimate image of them may be shared publicly can contact one of our partners to submit a form
– After submitting the form, the victim receives an email containing a secure, one-time upload link
– The victim can use the link to upload images they fear will be shared
– One of a handful of specifically trained members of our Community Operations Safety Team will review the report and create a unique fingerprint, or hash, that allows us to identify future uploads of the images without keeping copies of them on our servers
– Once we create these hashes, we notify the victim via email and delete the images from our servers – no later than seven days
– We store the hashes so any time someone tries to upload an image with the same fingerprint, we can block it from appearing on Facebook, Instagram or Messenger
Facebook claims that pictures shared with the company will be deleted from its servers “no later than seven days” after a user uploads them. The program is currently being tested in the U.S., United Kingdom, Canada, and Australia.
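Facebook has not published the exact fingerprinting algorithm it uses, but the general technique, perceptual hashing, is well understood. The minimal sketch below illustrates the idea using a simple "average hash" in Python with the Pillow imaging library; the 8x8 resolution, 64-bit hash size, matching threshold, and file names are illustrative assumptions, not details from Facebook's announcement.

```python
# Minimal perceptual "average hash" sketch. Facebook's actual algorithm is
# unpublished; this shows the general idea of matching images by
# fingerprint without storing the images themselves.
from PIL import Image


def average_hash(path: str) -> int:
    """Reduce an image to a 64-bit fingerprint.

    Shrinking to 8x8 grayscale discards fine detail, so visually similar
    images (re-encoded, resized, lightly edited) yield similar hashes.
    """
    img = Image.open(path).convert("L").resize((8, 8))
    pixels = list(img.getdata())
    average = sum(pixels) / len(pixels)
    bits = 0
    for pixel in pixels:
        # Set a 1 bit for each pixel brighter than the image's average.
        bits = (bits << 1) | (pixel > average)
    return bits


def hamming_distance(a: int, b: int) -> int:
    """Count the bits on which two fingerprints differ."""
    return bin(a ^ b).count("1")


# The service stores only the hashes of reported images, never the images.
blocked_hashes = {average_hash("reported_image.jpg")}  # hypothetical file


def is_blocked(upload_path: str, threshold: int = 5) -> bool:
    """Reject an upload whose fingerprint is close to a blocked one."""
    h = average_hash(upload_path)
    return any(hamming_distance(h, b) <= threshold for b in blocked_hashes)
```

This is also why Facebook can delete the submitted photos after hashing: a fingerprint like this is enough to flag future uploads of the same or a near-identical image, but it cannot be reversed to reconstruct the picture.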
Lucas Nolan is a reporter for Breitbart News covering issues of free speech and online censorship. Follow him on Twitter @LucasNolan_ or email him at lnolan@breitbart.com