By Wendy Shore on April 15, 2021

IA Blog: Why our digital world demands automated content moderation

By Crispin Pikes, CEO of Image Analyzer

Manual content moderation is as old as graffiti. Wherever there is a public platform, some people will choose to adorn it and others to deface it. Since the birth of the internet, human moderators have worked continuously to remove objectionable content posted in online forums and keep their communities safe, in the same way that school janitors have wearily scrubbed off obscenities scrawled in bathroom cubicles.
In the last 15 years, however, the explosion of social media use, combined with ubiquitous smartphone ownership and increased connectivity, has permanently changed this balance. Where user comments and uploads to discussion boards, internet forums, gaming sites and blogs used to be managed by a webmaster or a team of moderators, the sheer scale of user-generated content (UGC) enabled by the camera and word processor in everyone’s pocket has made manual moderation unsustainable.

Let’s just consider some social media statistics for a moment:
• There are 300 million posts to Facebook each day, of which 35.73% are photographs and 15.09% are videos.
• 95 million photographs are uploaded to Instagram daily.
• 350,000 hours of video are streamed on Periscope every 24 hours.
• 500 hours of video are uploaded to YouTube every minute.
• There are 456,000 Tweets a minute.
• It has been estimated that it would take someone 950 years to watch all of the Snaps shared on Snapchat every 24 hours.

A major challenge for organizations today is that a small percentage of these posts are at best offensive and at worst abhorrent, and likely to drive users off a site for good. To combat the problem and keep the most toxic content off their sites, the largest social media companies have outsourced their content moderation to contractors who employ many thousands of content moderators to monitor posts and flag the worst content for removal. Even these huge, dedicated teams, however, struggle to keep up with the volume of content being uploaded.

For smaller sites and fledgling online communities, managing content moderation while attracting more users and scaling operations has become a strategic issue for the C-suite. The case of Parler vs. Amazon Web Services made it clear that failure to moderate users’ posts can jeopardize an organization’s survival.

When we consider that, for example, Germany’s Network Enforcement Act (NEA) requires social network providers with more than 2 million registered users in Germany to remove ‘manifestly unlawful’ content within 24 hours of receiving a complaint, the scale of the problem becomes clear. Millions of daily posts can no longer be checked by humans alone. Backlogs can cause severe delays in removing harmful content, endangering users and exposing platforms to penalties for non-compliance.

Image Analyzer’s technology was developed to complement the work of human moderators by automatically removing more than 90% of harmful and manifestly illegal posts from the moderation queue, so that such material never reaches your site, leaving only the more nuanced content for human review. We hold US and European patents for our AI-powered content moderation technology, Image Analyzer Visual Intelligence Service (IAVIS), which identifies visual risks in milliseconds, with near-zero false positives. Organizations use IAVIS to automatically moderate previously unseen images, video and live-streamed footage uploaded by people misusing their access to digital platforms.

InterDigital/Futuresource anticipate that by 2022 video will make up 82% of consumer internet traffic. When people upload images or live-streamed footage depicting or inciting graphic violence, illegal pornography, or child abuse, this material pollutes online communities and spills over into real-world harms.

IAVIS helps organizations combat workplace and online harms by automatically categorizing and filtering out high-risk-scoring images, videos and live-streamed footage. By automatically removing more than 90% of harmful visual material, it enables organizations to uphold brand values and aids compliance with user-safety regulations that require the timely removal of harmful visual material. Its ability to instantly detect visual threats within newly created images and live-streamed video prevents them from being uploaded to digital platforms where they could cause further harm to other site users, particularly children.

By applying AI-powered visual content moderation trained to identify specific visual threats, IAVIS gives each piece of content a risk probability score and speeds the review of users’ posts. The technology is designed to continually improve the accuracy of its core visual threat categories, with simple displays that allow moderators to easily interpret threat category labels and probability scores. IAVIS can scale to moderate increasing volumes of visual content without impacting performance or user experience.
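To make the triage model described above concrete, here is a minimal sketch of threshold-based moderation routing. The category names, threshold value and `triage` function are illustrative assumptions for this post, not the actual IAVIS API: items whose per-category risk probability exceeds a cut-off are removed automatically, and only the remaining, more nuanced cases reach the human moderation queue.

```python
# Hypothetical illustration of risk-score triage (not the IAVIS API):
# each upload gets per-category risk probabilities; anything scoring above
# the auto-removal threshold never reaches human moderators.

AUTO_REMOVE_THRESHOLD = 0.90  # assumed cut-off, chosen for illustration

def triage(category_scores: dict) -> str:
    """Route an item: 'auto-remove' if any category exceeds the
    threshold, otherwise 'human-review' for the nuanced cases."""
    if any(score >= AUTO_REMOVE_THRESHOLD for score in category_scores.values()):
        return "auto-remove"
    return "human-review"

uploads = {
    "img_001": {"graphic_violence": 0.97, "adult": 0.12},
    "img_002": {"graphic_violence": 0.05, "adult": 0.40},
}

for item_id, scores in uploads.items():
    print(item_id, triage(scores))  # img_001 auto-remove, img_002 human-review
```

In practice the threshold would be tuned per category against the platform’s tolerance for false positives, since a single cut-off rarely suits both graphic violence and borderline adult content equally well.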

Organizations are striving to maintain user safety and protect their online communities from harmful visual material to create a positive user experience and comply with impending laws. We are here to help you.

To discuss your content moderation needs, please email us or request a demo.

Dustin Stout, ‘Social Media Statistics 2021 – Top Networks by the Numbers’
Computer Weekly: ‘AWS urges US court to throw out Parler breach of contract lawsuit’, January 14th 2021
Act to Improve Enforcement of the Law in Social Networks, Network Enforcement Act, Germany, July 12th 2017
Streaming Media: ‘Video will grow to 82% of internet traffic by 2022: InterDigital/Futuresource Report’, December 3rd 2020
InterDigital/Futuresource report: ‘The sustainable future of video entertainment’, December 3rd 2020
