Oct. 24 (UPI) — Facebook announced Wednesday that it removed 8.7 million pieces of content that violated its child nudity or child sexual exploitation policies in a three-month span.
About 99 percent of the affected content was removed before any users reported it, the company said.
The company said it has used artificial intelligence and machine-learning flagging software over the past year to detect the images as they were uploaded. The figure it gave, 8.7 million pieces of content found worldwide, covers actions taken between July and September. The new software is an improvement on photo matching, which Facebook has used for years to stop the sharing of known child exploitation images.
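To make the distinction concrete, the sketch below is purely illustrative and is not Facebook's actual system: it contrasts the older approach of matching uploads against a database of known-image hashes with the newer approach the company describes, in which a machine-learning classifier scores previously unseen uploads. The hash set, review threshold, and classifier score are placeholder assumptions, and real photo-matching systems use perceptual hashes rather than exact cryptographic hashes.

```python
# Illustrative sketch only; not Facebook's implementation.
import hashlib

# Hypothetical database of hashes of previously identified images.
# Production systems use perceptual hashing (e.g., PhotoDNA-style) so that
# slightly altered copies still match; SHA-256 here is just for illustration.
KNOWN_IMAGE_HASHES = {
    "d2a84f4b8b650937ec8f73cd8be2c74add5a911ba64df27458ed8229da804a26",  # placeholder entry
}

REVIEW_THRESHOLD = 0.8  # placeholder confidence cutoff for human review


def matches_known_image(image_bytes: bytes) -> bool:
    """Old-style photo matching: flags only copies of already-known images."""
    return hashlib.sha256(image_bytes).hexdigest() in KNOWN_IMAGE_HASHES


def classifier_score(image_bytes: bytes) -> float:
    """Stand-in for a machine-learning model that scores previously unseen
    content at upload time. A real system would run a trained classifier."""
    return 0.0  # placeholder score


def review_upload(image_bytes: bytes) -> str:
    """Route an upload: remove known matches, queue high-scoring new content."""
    if matches_known_image(image_bytes):
        return "remove: matches known exploitative image"
    if classifier_score(image_bytes) >= REVIEW_THRESHOLD:
        return "queue for trained human review and possible NCMEC report"
    return "allow"


if __name__ == "__main__":
    print(review_upload(b"example upload bytes"))
```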
The software is able to “get in the way of inappropriate actions with children, review them and if it looks like there’s something problematic, take action,” Antigone Davis, Facebook’s global head of safety, said.
The artificial intelligence system can quickly identify content, notify the National Center for Missing and Exploited Children and close the accounts of Facebook users promoting inappropriate actions with children, CNET reported Wednesday.
“We have specially trained teams with backgrounds in law enforcement, online safety, analytics, and forensic investigations, which review content and report findings to NCMEC,” the company said on Wednesday.
Facebook has historically erred on the side of caution in deleting and reporting inappropriate photos of children. The process has led, in the past, to the removal of photographs of emaciated children taken at Nazi concentration camps, as well as a Pulitzer Prize-winning war photo of a naked Vietnamese girl after a napalm attack.
Until now, though, the company has relied largely on users to flag and report inappropriate images.
The new system allows Facebook to “proactively detect child nudity and previously unknown child exploitative content when it’s uploaded,” it said.