
AI Protects Content Moderators' Mental Health

Written by Wendy Shore | Apr 29, 2021 10:47:53 AM

April 28th, 2021, is World Day for Safety and Health at Work, which began in 2003 and has traditionally been associated with promoting occupational safety for employees working in environments where they face physical danger or risk exposure to harmful substances.

The day is organized by the International Labour Organization (ILO). This year, the ILO has released a report that assesses the impact of the pandemic on the workforce and examines how countries can implement resilient occupational safety and health systems that minimize risks for workers in the event of future crises.

In addition to assessing how to protect workers from viral transmission, the report addresses the rising mental health risks facing workers in digital environments as a result of the pandemic, stating, “while teleworking has been essential in limiting the spread of the virus, maintaining jobs and business continuity and giving workers increased flexibility, it has also blurred the lines between work and private life. Sixty-five per cent of enterprises surveyed by the ILO and the G20 OSH Network reported that worker morale has been difficult to sustain while teleworking.”

How does occupational safety and health relate to online content moderation?

Without an army of international content moderators protecting the digital platforms that we have depended on throughout the pandemic for work, groceries, human connection, entertainment and learning, many of these platforms would quickly become too toxic to use.

The ILO Flagship Report, ‘World Employment and Social Outlook: the role of digital labour platforms in transforming the world of work’, published in February 2021, drew on research from 12,000 workers around the world and examined the working conditions of digital platform workers in the taxi, food delivery, microtask, and content moderation sectors.

The ILO found that there is a growing demand for data labelling and content moderation to enable organizations to meet their corporate social responsibility requirements. Page 121 of the report states, “Some of the companies offering IT-enabled services, such as Accenture, Genpact and Cognizant, have diversified and entered into the content moderation business, hiring university graduates to perform these tasks (Mendonca and Christopher 2018).”

“A number of “big tech” companies, such as Facebook, Google and Microsoft, have also started outsourcing content review and moderation, data annotation, image tagging, object labelling and other tasks to BPO companies. Some new BPO companies, such as FS and CO, India, stated in the ILO interviews that content moderation not only provides a business opportunity but also allows them to perform a very important task for society as they ‘act as a firewall or gatekeeper or a watchdog for the internet’.”

However, these gatekeepers are becoming increasingly overwhelmed by the sheer volume of content uploaded daily.

According to Statista, every minute 147,000 images are posted to Facebook, 500 hours of video are uploaded to YouTube and 347,222 stories are posted to Instagram.

A small percentage of these images are at best offensive and at worst abhorrent, and they risk creating a highly toxic environment that drives law-abiding users off the platforms. As the ILO report found, the largest digital platforms outsource their content moderation to contractors who employ many thousands of content moderators to flag the worst content for removal.

Facebook employs 40,000 content moderators worldwide, yet even these dedicated teams struggle to keep up with the sheer volume of uploads. Moderators have reported suffering post-traumatic stress disorder after being exposed to an overwhelming number of harmful images and videos every day, and several have taken legal action over the resulting trauma.

At Facebook’s scale, if even 1% of uploaded images were illegal, moderators would need to remove 1,470 images a minute, 88,200 images an hour, or 705,600 images in an average 8-hour shift. This is simply impossible for human moderators to cope with. In the age of automation, it’s particularly disheartening to learn that human moderators are expected to work like robots and achieve 98% accuracy while watching a never-ending stream of toxic images.
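As a quick back-of-the-envelope check, those figures follow directly from the Statista upload rate cited above. The sketch below (in Python) reproduces the arithmetic; the 1% illegal-content rate and the 8-hour shift are illustrative assumptions, not measured values.

```python
# Back-of-the-envelope check using the Statista figure cited above.
# The 1% illegal-content rate and the 8-hour shift are illustrative assumptions.
IMAGES_PER_MINUTE = 147_000   # images posted to Facebook per minute (Statista)
ILLEGAL_RATE = 0.01           # assumed share of uploads that is illegal
SHIFT_HOURS = 8               # assumed length of a moderation shift

per_minute = IMAGES_PER_MINUTE * ILLEGAL_RATE
per_hour = per_minute * 60
per_shift = per_hour * SHIFT_HOURS

print(f"{per_minute:,.0f} images per minute")   # 1,470
print(f"{per_hour:,.0f} images per hour")       # 88,200
print(f"{per_shift:,.0f} images per shift")     # 705,600
```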

Online safety legislators are also putting pressure on platform providers to remove unlawful content swiftly, before it can do harm. In the US, Section 230 of the Communications Decency Act 1996 is currently under review owing to the legal protections it affords online service providers for content posted by third parties. Europe and the UK are proposing new laws to make digital platform operators liable for the rapid removal of user-generated content that could be deemed harmful to other site users or the wider public, with penalties for non-compliance ranging from 6% to 10% of global turnover. Germany’s Network Enforcement Act (NetzDG) requires social network providers with more than 2 million registered users in Germany to remove ‘manifestly unlawful’ content within 24 hours of receiving a complaint.

The problem is undeniable: vastly outnumbered by the volume of uploads, human moderators face huge backlogs of toxic material that can cause intense stress and burnout.

How AI protects moderators’ mental health

Some social media, community and other digital platform providers are turning to artificial intelligence to automate and scale up content moderation, both to protect users and to support their human moderator teams. AI-powered content moderation can remove explicit, abusive, or misleading visual content from digital platforms such as social media and online games. It automatically scans uploads and removes content that receives a high risk score for illegal, offensive, or harmful material, so that it never reaches the moderation queue. Where the score is ambiguous, the AI can flag the content for review by a human moderator.
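The triage logic described above can be pictured as a simple thresholding step. The sketch below is a minimal illustration in Python; the thresholds and the score_image function are assumptions made for the example, not a description of any vendor's actual API.

```python
# Minimal sketch of risk-score triage for uploaded images.
# The thresholds and score_image() are hypothetical; in a real system the
# score would come from a trained visual-classification model or service.

AUTO_REMOVE_THRESHOLD = 0.90   # assumed: high-confidence harmful content
HUMAN_REVIEW_THRESHOLD = 0.40  # assumed: ambiguous content needs a person

def triage(image_bytes: bytes, score_image) -> str:
    """Decide what happens to one uploaded image."""
    risk = score_image(image_bytes)  # probability in [0, 1] that the image is harmful
    if risk >= AUTO_REMOVE_THRESHOLD:
        return "auto_remove"    # blocked before it reaches the moderation queue
    if risk >= HUMAN_REVIEW_THRESHOLD:
        return "human_review"   # flagged for a human moderator
    return "allow"              # published without moderator exposure
```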

Image Analyzer is a member of the Online Safety Tech Industry Association (OSTIA). We hold US and European patents for our AI-powered content moderation technology, Image Analyzer Visual Intelligence Service (IAVIS), which identifies visual risks in milliseconds, with near zero false positives. IAVIS automatically screens out more than 90% of illegal and harmful videos, images and live-streamed footage, leaving only the more nuanced images for human review.

IAVIS helps organizations to combat workplace and online harms by automatically categorizing and filtering out high-risk-scoring images, videos and live-streamed footage. By removing harmful visual material automatically, it enables organizations to uphold brand values and to comply with user safety regulations that require its timely removal. Its ability to instantly detect visual threats within newly created images and live-streamed video prevents them from being uploaded to digital platforms where they could cause further harm to other site users, particularly children.

Trained to identify specific visual threats, IAVIS gives each piece of content a risk probability score and speeds the review of users’ posts. The technology is designed to continually improve the accuracy of core visual threat categories, with simple displays that allow moderators to easily interpret threat category labels and probability scores. IAVIS can scale to moderate increasing volumes of visual content without impacting performance or user experience.
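To make that concrete, a per-category result might look something like the sketch below. The category names, scores and layout are purely illustrative assumptions, not IAVIS's actual output format or category list.

```python
# Hypothetical per-category risk scores for a single piece of content.
# Category names and values are illustrative only.
result = {
    "content_id": "img_000123",
    "scores": {
        "graphic_violence": 0.03,
        "adult_content": 0.91,
        "weapons": 0.02,
        "extremism": 0.01,
    },
}

# A simple moderator-facing summary: highest-scoring categories first.
for category, score in sorted(result["scores"].items(),
                              key=lambda item: item[1], reverse=True):
    print(f"{category:>17}: {score:.0%}")
```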

Organizations use IAVIS to automatically moderate previously unseen images, video and live-streamed footage uploaded by people using, or misusing, their access to digital platforms, so that manifestly illegal content never reaches their sites or their moderation queues.

If your organization is striving to maintain worker safety, protect your online community from harmful visual material and comply with impending laws, we are here to help you.

To discuss your content moderation needs, please email us, or request a demo.

References:

International Labour Organization, World Day for Safety and Health at Work 2021, https://www.ilo.org/global/topics/safety-and-health-at-work/events-training/events-meetings/world-day-safety-health-at-work/WCMS_769834/lang--en/index.htm

International Labour Organization, Flagship Report, ‘World Employment and Social Outlook: the role of digital labour platforms in transforming the world of work’, February 23rd 2021, https://www.ilo.org/wcmsp5/groups/public/---dgreports/---dcomm/---publ/documents/publication/wcms_771749.pdf

Statista, ‘New user-generated content uploaded by users per minute,’ January 2021 https://www.statista.com/statistics/195140/new-user-generated-content-uploaded-by-users-per-minute/

Coleman Legal Partners, ‘Moderators to take Facebook to court for psychological trauma,’ October 1st 2019,  https://colemanlegalpartners.ie/moderators-to-take-facebook-to-court-for-psychological-trauma/

Foxglove, ‘Open letter from content moderators re: the pandemic,’ November 18th, 2020 https://www.foxglove.org.uk/news/open-letter-from-content-moderators-re-pandemic

Foxglove, ‘Foxglove supports whistle-blowing Facebook moderators’, December 4th 2019, https://www.foxglove.org.uk/news/whistle-blowing-from-social-media-content-moderators

Vice, ‘Why Facebook moderators are suing the company for trauma,’ December 3rd 2019 https://www.vice.com/en/article/a35xk5/facebook-moderators-are-suing-for-trauma-ptsd

Daily Mail, ‘Dozens of moderators sue Facebook for severe mental trauma after being exposed to violent images at work,’ February 28th, 2021, https://www.dailymail.co.uk/news/article-9308337/Dozens-moderators-sue-Facebook-severe-mental-trauma-exposure-violent-images-work.html

Independent.ie, ‘Facebook moderators sue over trauma of vetting graphic images,’ December 6th 2020, https://www.independent.ie/irish-news/news/facebook-moderators-sue-over-trauma-of-vetting-graphic-images-39830414.html

The Irish Times, ‘Facebook: US settlement does not apply to Irish cases,’ May 13th 2020, https://www.irishtimes.com/business/technology/facebook-us-settlement-for-moderators-does-not-apply-to-irish-cases-1.4252611

Daily Telegraph, ‘Moderators sue Facebook for unrelenting exposure to disturbing content,’ December 5th 2019, https://www.telegraph.co.uk/technology/2019/12/05/moderators-sue-facebook-unrelenting-exposure-disturbing-content/

The Guardian, ‘Ex-Facebook worker claims disturbing content led to PTSD’, December 4th 2019, https://www.theguardian.com/technology/2019/dec/04/ex-facebook-worker-claims-disturbing-content-led-to-ptsd

Cornell Law School, Legal Information Institute, 47. U.S. Code, Section 230, Protection for private blocking and screening of offensive material’ https://www.law.cornell.edu/uscode/text/47/230

Tech.co, Section 230 explained, April 23rd 2021, https://tech.co/news/section-230-explained

Image Analyzer, Online Safety Legislation, https://image-analyzer.com/online-safety-legislation/