
Yesterday, Mark Zuckerberg announced that Facebook is hiring 3,000 people to work on its community operations team, which reviews images, videos, and posts that users report. These new hires will join the 4,500 existing employees in an effort to minimize the reach of future events like the shooting of Robert Godwin Sr. It’s a considerable but essential investment for Facebook, and it raises a basic question: Can’t this job be automated?

The scale of this labor is vast: Facebook is hiring more people than work in the combined newsrooms of the New York Times, the Wall Street Journal, and the Washington Post. Facebook isn’t saying at this time whether the new hires will be employees or contractors, or whether they’ll be based in the United States or abroad. (Curious about what that work is like? Read Adrian Chen’s 2014 deep dive into content moderation, or check out this video on the moderation process.)

“These reviewers will also help us get better at removing things we don’t allow on Facebook like hate speech and child exploitation,” wrote Facebook CEO Zuckerberg. “And we’ll keep working with local community groups and law enforcement who are in the best position to help someone if they need it — either because they’re about to harm themselves, or because they’re in danger from someone else.” (emphasis added)

The subtext is the April 16 murder of Robert Godwin Sr. in Cleveland. According to a timeline released by Facebook, at 2:09PM Eastern on April 16, a man uploaded a video stating his intent to commit murder. Two minutes later, the man uploaded a video of himself shooting Godwin. Shortly after that, the murder suspect broadcast live on Facebook for five minutes. The video of the shooting was reported at 4PM, and by 4:22PM Facebook had suspended the shooter’s account and removed his videos from public view. That’s just over two hours between the initial post and account suspension, but the ubiquity of Facebook and the horrific nature of the murder made it national news.

There is some content that Facebook preemptively censors as it’s uploaded. The company uses automated systems to find previously removed nude and pornographic photos when they’re uploaded a second time. The social network also employs PhotoDNA, a tool that cross-references images against a database of known child exploitation imagery, so that matching photos can be blocked and reported to the appropriate legal authorities. In both of these cases, the automation checks against previously known quantities.
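As a rough illustration of how that kind of matching works (this is not Facebook’s or PhotoDNA’s actual code; the function names and data structures below are hypothetical), a re-upload can be caught by comparing a fingerprint of the new file against a stored set of fingerprints from previously removed content:

```python
import hashlib

# Minimal fingerprint-matching sketch. Real systems like PhotoDNA use
# perceptual hashes that survive resizing and re-encoding; a cryptographic
# hash is used here only to keep the example short.

known_banned_fingerprints = set()  # fingerprints of previously removed content

def fingerprint(image_bytes: bytes) -> str:
    """Reduce an upload to a compact fingerprint for lookup."""
    return hashlib.sha256(image_bytes).hexdigest()

def record_removal(image_bytes: bytes) -> None:
    """When moderators remove an item, remember its fingerprint."""
    known_banned_fingerprints.add(fingerprint(image_bytes))

def check_upload(image_bytes: bytes) -> str:
    """Block and report a re-upload that matches known removed content."""
    if fingerprint(image_bytes) in known_banned_fingerprints:
        return "block_and_report"
    return "allow"
```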

Most of the automation in the process assists manual enforcement by human content reviewers. If a piece of content is reported 1,000 times, the tools recognize duplicate reports, so it only has to be manually reviewed once. Other tools direct pictures, posts, or videos to reviewers who have specific expertise; for example, someone who speaks Arabic could review flagged content from an extremist group in Syria that might violate Facebook’s terms of service.
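In sketch form, and only as an assumption about how such tooling might be organized (the queue, ticket fields, and routing rule here are illustrative, not Facebook’s real system), those two assists amount to collapsing duplicate reports into a single ticket and tagging each ticket with the reviewer pool best placed to judge it:

```python
# Illustrative report queue: duplicate reports on the same item are folded
# into one ticket, and each ticket is tagged with a reviewer pool (here, by
# language) so it reaches someone with the relevant expertise.

review_queue = {}  # content_id -> ticket awaiting a single manual review

def report_content(content_id: str, language: str, reason: str) -> None:
    ticket = review_queue.get(content_id)
    if ticket is not None:
        ticket["report_count"] += 1  # duplicate report: count it, don't re-queue it
        return
    review_queue[content_id] = {
        "report_count": 1,
        "reason": reason,
        "assigned_pool": language,  # e.g. Arabic-language content goes to Arabic speakers
    }
```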

Still, it’s human judgment, the ability to understand context, that rules the day. To determine if a comment is hateful or bullying, Facebook relies on real people. And in the specific case of Facebook Live, there’s a team dedicated to monitoring reports on live video; the group automatically monitors any live video that reaches a certain, unstated popularity threshold.

There is one area where Facebook is giving AI a more active role. In March, the network launched a suicide-prevention AI tool, which identifies posts that look like they might indicate suicidal thoughts. BuzzFeed reports that the AI can scan posts and comments for similarities to previous posts that warranted action. In rare circumstances, it will directly alert moderators, but more often it will show users suicide prevention tools like the number for a helpline. In addition, the AI makes a button for reporting self-harm more prominent to the person’s friends, increasing the likelihood that they’ll flag the video for Facebook’s human moderators. (The company declined to comment on the record for this story.)
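A minimal sketch of that kind of escalation logic, assuming a trained classifier and made-up thresholds (neither is Facebook’s actual model or values), captures the reported behavior: moderators are alerted only in rare, high-confidence cases, while the more common outcome is surfacing prevention resources to the user.

```python
# Illustrative escalation rules for self-harm screening. The classifier and
# both threshold values are assumptions, not Facebook's.

ALERT_THRESHOLD = 0.95     # rare, high-confidence cases go straight to moderators
RESOURCE_THRESHOLD = 0.60  # likelier outcome: show helpline and prevention tools

def handle_post(post_text: str, score_self_harm_risk) -> str:
    """Route a post based on a 0-1 risk score from a trained classifier."""
    score = score_self_harm_risk(post_text)
    if score >= ALERT_THRESHOLD:
        return "alert_human_moderators"
    if score >= RESOURCE_THRESHOLD:
        return "show_prevention_resources_and_promote_report_button"
    return "no_action"
```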

The immediate alternative to human moderation is likely preemptive censorship.

“Do we want a more censored Facebook?” said Kate Klonick, a resident fellow at the Information Society Project at Yale. “When you’re putting in place an overly robust system, the potential is that people aren’t trained correctly and there will be a lot more false positives and a lot more censorship.”

Klonick’s work focuses on Facebook and other platforms as systems that govern online speech. Facebook has to make active choices in how it regulates what gets posted and stays posted, argues Klonick, and there is no algorithmic magic bullet that can change this from a problem of speech into simply a technical challenge.

“We are years from being able to have AI that could solve these complex decision-making problems,” said Klonick. “It’s not just photo recognition, it’s photo recognition on top of decision making on top of categorization—which are all difficult cognition problems, which we are nowhere near figuring out reliably.”

The challenge of photo recognition is so iconic in computing that it has become something of a parable itself. In 1966, Seymour Papert at MIT proposed what he thought would be a simple summer project: train a computer to recognize objects. Far from being finished in a summer, teaching computers to recognize objects has proven a monumental task, one that continues to this day, with companies like Google and Facebook pouring money and hours into researching a solution.

“For the foreseeable future,” says Klonick, “it is not an AI problem or an AI solution.”