Facebook: AI now detects 94.7% of the hate speech removed from the platform


Facebook announced on Thursday that its AI software now detects 94.7% of the hate speech removed from the platform.
Mike Schroepfer, Facebook’s chief technology officer, revealed the figure in a blog post, noting that the proportion was 80.5% a year ago and just 24% in 2017. The number also appears in Facebook’s latest Community Standards Enforcement Report.
Social media companies such as Facebook and Twitter are often criticized for failing to remove hate speech, including racial slurs and attacks on religious groups, from their platforms.
These companies rely on thousands of content moderators around the world to review the posts, photos, and videos shared on their platforms. More than 200 Facebook moderators said in an open letter to CEO Mark Zuckerberg on Wednesday that the company had forced them back to the office during the pandemic with little regard for their lives.
But human moderators alone are not enough. Technology giants increasingly rely on artificial intelligence, in particular machine learning, in which algorithms improve automatically through experience.
“One of the core focuses of Facebook AI is deploying cutting-edge machine learning technology to protect people from harmful content,” said Schroepfer.
“With billions of people using our platform, we rely on AI to scale our content review work and automate decisions where possible,” he said. “Our goal is to spot hate speech, misinformation, and other policy violations as quickly and accurately as possible, for every form of content, and for every language and community around the world.”
But Facebook’s AI software still struggles to find content that violates its policies. It has difficulty, for example, grasping the combined meaning of an image and its overlaid text, and its handling of sarcasm and slang is not always accurate. In many such cases, a human can quickly judge whether a piece of content breaks Facebook’s rules.
Facebook said it recently deployed two new AI technologies to address these challenges. The first, called Reinforced Integrity Optimizer (RIO), learns from real online examples and metrics rather than from offline datasets. The second is an AI architecture called Linformer, which lets Facebook use complex language-understanding models that were previously too large to deploy at scale.
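Linformer’s published idea is to make Transformer self-attention cheaper by projecting the sequence dimension of keys and values down to a small fixed size, so cost grows linearly with sequence length instead of quadratically. The sketch below is a minimal NumPy illustration of that low-rank trick for a single attention head; the shapes and random projection matrices are illustrative assumptions, not Facebook’s implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def linformer_attention(Q, K, V, E, F):
    """Linformer-style attention: project the sequence (length) axis of
    K and V down to k << n, so the score matrix is n x k, not n x n."""
    d = Q.shape[-1]
    K_proj = E @ K                        # (k, d): n keys compressed to k
    V_proj = F @ V                        # (k, d): n values compressed to k
    scores = Q @ K_proj.T / np.sqrt(d)    # (n, k) instead of (n, n)
    return softmax(scores) @ V_proj       # (n, d)

# Toy shapes: sequence length n=512, head dim d=64, projected length k=64.
rng = np.random.default_rng(0)
n, d, k = 512, 64, 64
Q = rng.standard_normal((n, d))
K = rng.standard_normal((n, d))
V = rng.standard_normal((n, d))
# E and F are learned in the real model; random here for illustration only.
E = rng.standard_normal((k, n)) / np.sqrt(n)
F = rng.standard_normal((k, n)) / np.sqrt(n)

out = linformer_attention(Q, K, V, E, F)
print(out.shape)  # (512, 64)
```

The payoff is that doubling the sequence length doubles, rather than quadruples, the attention cost, which is what makes very long inputs practical at scale.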
“We are now using RIO and Linformer in production to analyze content on Facebook and Instagram in different parts of the world,” said Schroepfer.
Facebook also said it had developed a new tool to detect deepfakes and had made improvements to its existing system SimSearchNet, an image-matching tool designed to detect misinformation on the platform.
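SimSearchNet’s internals are not described in the article, but image-matching systems of this kind generally work by computing a compact fingerprint for each image and flagging near-duplicates whose fingerprints are close. As a generic illustration only (not Facebook’s method), here is a toy “average hash” in NumPy: downsample, threshold against the mean, and compare hashes by Hamming distance.

```python
import numpy as np

def average_hash(img, hash_size=8):
    """Toy perceptual hash: block-average a grayscale image down to
    hash_size x hash_size, then set each bit to whether that block is
    brighter than the overall mean. Lightly edited copies of an image
    produce mostly the same bits."""
    h, w = img.shape
    ys = np.linspace(0, h, hash_size + 1).astype(int)
    xs = np.linspace(0, w, hash_size + 1).astype(int)
    small = np.array([[img[ys[i]:ys[i+1], xs[j]:xs[j+1]].mean()
                       for j in range(hash_size)] for i in range(hash_size)])
    return (small > small.mean()).flatten()

def hamming(a, b):
    """Number of differing bits between two hashes."""
    return int(np.count_nonzero(a != b))

rng = np.random.default_rng(1)
original = rng.random((64, 64))
# A "near-duplicate": the same image with a little noise added.
noisy = np.clip(original + rng.normal(0, 0.02, original.shape), 0, 1)
unrelated = rng.random((64, 64))

print(hamming(average_hash(original), average_hash(noisy)))      # small distance
print(hamming(average_hash(original), average_hash(unrelated)))  # large distance
```

In a production system the fingerprint would come from a learned neural embedding rather than pixel averages, but the matching step, comparing compact codes by distance, follows the same pattern.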
“All of these innovations put together mean that our AI systems now have a deeper and broader understanding of content,” said Schroepfer. “They are more attuned to what people are sharing on our platforms, and they can adapt more quickly as new hot-button words and images emerge and spread.”
The challenges Facebook faces are “complex, subtle, and rapidly evolving,” Schroepfer added, noting that mistakenly labeling content as hate speech or misinformation “hinders people’s ability to express themselves freely.”