A recent article in the Wall Street Journal describes the grueling work human reviewers perform as they hunt for pornography, racism and violence in posts on social media.
The article, "The Worst Job in Technology: Staring at Human Depravity to Keep It Off Facebook," opens with Sarah Katz, a content moderator for Facebook, Inc., who says she saw anti-Semitic speech, bestiality photos and a video of what appeared to be a girl and boy being told by an adult off-screen to have sexual contact with each other.
Ms. Katz, 27 years old, says she reviewed as many as 8,000 posts a day with little training on how to handle the distress, though she had to sign a waiver warning her about what she would encounter. Moderators coped with a dark sense of humor, or by swiveling around in their chairs to commiserate after a particularly disturbing post.
The article goes on to say that deciding what does and doesn't belong online is one of the fastest-growing jobs in the technology world, and perhaps the most grueling. The equivalent of 65 years of video is uploaded to YouTube each day, and Facebook receives more than a million user reports of potentially objectionable content a day.
UCLA Professor Sarah Roberts estimates that tens of thousands of people work as content moderators. Several former Facebook moderators say they often had just a few seconds to decide whether something violated the company's terms of service; a company spokeswoman says reviewers don't face specific time limits.
Our mission is to promote civil, decent and ethical content and dialogue in digital spaces. To that end, we created SafeSocial, a platform that uses AI to protect your brand's image in the constantly changing landscape of social media and popular culture. By design, we use technology wherever possible to reduce the amount of objectionable content that human moderators have to witness.