By Wendy Shore on August 08, 2019

The human cost of content moderation?

Some might think that being paid to spend hours every day in a darkened room watching images and videos sounds like a great job, but the reality is very different.

An interesting recent article by Dale Carruthers in The London Free Press highlights research by Professor Sarah Roberts, who has analysed the effect on the people who must view inappropriate content as part of an all-human content moderation team. Her findings show that workers in this environment, who are frequently low paid, suffer from flashbacks. As the professor observes:

“They weren’t walking away unscathed.”

With the advent of new technology such as Image Analyzer, it does not have to be this way. Advanced software can now be used in conjunction with human teams to share the load, scanning huge volumes of images and highlighting only those that require manual review. This can relieve some of the burden on those employed in this rapidly growing industry, and the accuracy of image-scanning software has improved dramatically, making it a logical addition to any moderation system.
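To make that division of labour concrete, here is a minimal triage sketch in Python. The post does not describe Image Analyzer's actual interface, so the classify_image() function and the threshold values below are hypothetical placeholders standing in for any automated image-risk scorer; the point is simply that only the uncertain middle band of content ever reaches a human reviewer.

```python
# Illustrative sketch only: classify_image() and the thresholds are assumptions,
# not Image Analyzer's real API. They stand in for any automated risk scorer.

from dataclasses import dataclass
from typing import List


@dataclass
class ModerationResult:
    image_id: str
    risk_score: float  # 0.0 (clearly benign) .. 1.0 (almost certainly violating)
    decision: str      # "allow", "block", or "human_review"


def classify_image(image_id: str) -> float:
    """Hypothetical call to an automated image-risk scoring service."""
    raise NotImplementedError("replace with the scoring service actually in use")


def triage(image_ids: List[str],
           block_threshold: float = 0.95,
           review_threshold: float = 0.40) -> List[ModerationResult]:
    """Auto-block clear violations, auto-allow clear passes, and queue only
    the uncertain middle band for the human moderation team."""
    results = []
    for image_id in image_ids:
        score = classify_image(image_id)
        if score >= block_threshold:
            decision = "block"           # software handles the obvious cases
        elif score >= review_threshold:
            decision = "human_review"    # only these reach a human reviewer
        else:
            decision = "allow"
        results.append(ModerationResult(image_id, score, decision))
    return results
```

Under this kind of arrangement, the human team sees a fraction of the total volume, which is the burden-sharing the article describes.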

Some would argue against any form of moderation or censorship, but while it exists, companies should surely use cost-effective technology wherever possible to reduce the toll on the humans doing the work.

Full article follows:

Western University professor researching Internet watchdogs

By Dale Carruthers, The London Free Press, Thursday, March 27, 2014

They work anonymously to scrub the Internet of the most graphic and disturbing material.

Images and videos depicting child pornography, war-zone atrocities and animal abuse are some of the things they’re paid to view on a regular basis.

They look at it so we don’t have to.

But who are these digital do-gooders?

They’re Internet content reviewers, the faceless workers who screen user-generated content before it goes online and vet material flagged as inappropriate already up on websites.

Western University professor Sarah Roberts has spent more than three years researching the secretive underworld of what she’s dubbed the commercial content moderation industry.

Tech giants from Facebook and Twitter to YouTube employ content reviewers, either in-house or outsourced, in a never-ending battle to keep their sites free of illicit material.

“The work itself is unpleasant but the fact that there’s a necessity for that work is sort of an unpleasant reality related to social media platforms,” said Roberts, a professor in the faculty of information and media studies.

The content reviewers Roberts has interviewed reported experiencing lasting effects from viewing disturbing material, from flashbacks and substance abuse problems to being unable to talk about their work with friends and family.

“They weren’t walking away unscathed,” Roberts said.

Most content reviewers sign non-disclosure agreements and the companies that employ them refuse to talk about the unpleasant nature of the work, so much remains unknown about the industry, said Roberts, who recently spoke about her research at UCLA.

With the rise of the Internet and explosion of social media, the content moderation industry is only expected to grow. Some companies have tried using software to do the work, but there’s no match for the human eye.

Roberts found many of the workers, typically young and underpaid, were proud of the work, with some describing their duty as keeping the Internet from becoming a “cesspool.”

“They felt like, ‘We’re doing this work so that you don’t have to see what we see.’”
