By Crispin Pikes, CEO of Image Analyzer
Organizations face increasing moral, legal and financial pressure to moderate user-generated content that is uploaded to their digital platforms.
Content moderation has traditionally been associated with keeping corporate networks and online communities free of pornography, grooming and cyberbullying. However, recent years have demonstrated the wider harms to user safety, public health, national security and democracy caused by images, videos and disinformation shared online.
In the US, Section 230 of the Communications Decency Act of 1996 currently shields digital platform operators from liability for the content that their users upload and share. Both Republicans and Democrats are urgently seeking to revise this law.
The insurrection at the US Capitol on January 6th 2021 made it abundantly clear that unchecked online content can cost lives. Just over two years earlier, a UN report described how Facebook's platform was used to incite violence against thousands of Rohingya people in Myanmar.
Elsewhere in the world, the risk of litigation for failure to remove harmful online content is also increasing. Germany's Network Enforcement Act (NetzDG) already requires social network providers with more than two million registered users in Germany to remove 'manifestly unlawful' content within 24 hours of receiving a complaint.
The EU and the UK are proposing new laws to make digital platform operators liable for the rapid removal of user-generated content that could be deemed harmful to other site users or the wider public, with penalties for non-compliance ranging from 6% to 10% of global annual turnover.
While it is tempting to think that impending legislation will only apply to the largest social networks, the new laws will bring many more organizations into scope. As an example, LEGO recently announced its LEGO VIDIYO™ service, a partnership with Universal Music that enables children to make and upload their own music videos. This user-generated content requires moderation to ensure that adequate protections are provided to minors using the service.
In the UK, the Online Safety Bill includes proposals to categorise companies based on the number of people using their platforms. Large social media sites are likely to be classed as Category 1 companies and will have greater obligations to moderate content uploaded by billions of users. Category 2 companies will include any interactive community platform that promotes the upload of user-generated content, such as travel sites, online gaming, dating, private messaging and retail sites. Failure to adequately moderate content could result in companies being fined up to 10 percent of their global turnover, or having their services blocked in the UK.
While the prospect of additional online user safety legislation can appear daunting, there are many business benefits to be gained by creating a safer online environment for employees, customers, advertisers and investors, particularly when your platform is designed to be used by children.
Wherever users are enabled to upload images, video and live footage, there is a risk that some will abuse that service. If harmful content is allowed to appear and remain on your site, whether through neglect or because moderation teams are overwhelmed, your online community will quickly become toxic and users will leave your platform.
Responsible content moderation reduces legal risk exposure, helps to maintain a positive online experience for all users, and protects your brand integrity and revenues.
Image Analyzer is a member of the Online Safety Tech Industry Association (OSTIA). We hold US and European patents for our AI-powered content moderation technology, Image Analyzer Visual Intelligence Service (IAVIS), which identifies visual risks in milliseconds, with near-zero false positives. Organizations use IAVIS to automatically moderate previously unseen images, video and live-streamed footage uploaded by people using or misusing their access to digital platforms.
IAVIS helps organizations to combat workplace and online harms by automatically categorising and filtering out high-risk-scoring images, videos and live-streamed footage. By automatically removing more than 90% of harmful visual material, it enables organizations to uphold brand values and aids compliance with user safety regulations requiring the timely removal of harmful visual material. Its ability to instantly detect visual threats within newly created images and live-streamed video prevents them from being uploaded to digital platforms where they could cause further harm to other site users, particularly children.
Applying AI-powered visual content moderation that is trained to identify specific visual threats, IAVIS gives each piece of content a risk probability score and speeds the review of users’ posts. The technology is designed to constantly improve the accuracy of core visual threat categories, with simple displays to allow moderators to easily interpret threat category labels and probability scores. IAVIS can scale to moderate increasing volumes of visual content, without impacting performance, or user experience.
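The scoring-and-filtering workflow described above can be illustrated with a short sketch. This is a hypothetical example of threshold-based triage, not the IAVIS API: the class, function and threshold names are assumptions chosen for illustration. The idea is that each upload receives a threat category label and a risk probability score, and policy thresholds route it to automatic blocking, human review, or publication.

```python
# Illustrative sketch of threshold-based content triage.
# Hypothetical names and thresholds -- not the IAVIS API.
from dataclasses import dataclass

@dataclass
class ModerationResult:
    category: str       # threat category label, e.g. "weapons" or "safe"
    probability: float  # risk probability score in the range [0.0, 1.0]

def triage(results, block_threshold=0.9, review_threshold=0.5):
    """Route each scored upload: auto-block high-risk content,
    queue mid-risk content for human review, allow the rest."""
    decisions = {}
    for upload_id, result in results.items():
        if result.probability >= block_threshold:
            decisions[upload_id] = "block"
        elif result.probability >= review_threshold:
            decisions[upload_id] = "human_review"
        else:
            decisions[upload_id] = "allow"
    return decisions

# Example: three uploads with scores from a visual classifier.
scored = {
    "upload-1": ModerationResult("weapons", 0.97),
    "upload-2": ModerationResult("safe", 0.10),
    "upload-3": ModerationResult("nudity", 0.60),
}
print(triage(scored))
```

In practice the two thresholds encode a platform's risk appetite: a lower review threshold sends more borderline content to human moderators, while a lower block threshold removes more content automatically at the cost of more false positives.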
If your organization is striving to maintain user safety, protect your online community from harmful visual material and comply with impending laws, we are here to help you.
To discuss your content moderation needs, please email us or request a demo.
Online Safety Tech Industry Association (OSTIA) https://ostia.org.uk/
Business for Social Responsibility, ‘Facebook in Myanmar – human rights impact assessment,’ November 5th 2018 https://www.bsr.org/en/our-insights/blog-view/facebook-in-myanmar-human-rights-impact-assessment
BBC, ‘Facebook admits it was used to ‘incite offline violence’ in Myanmar,’ November 6th 2018, https://www.bbc.co.uk/news/world-asia-46105934
Image Analyzer – Online Safety Legislation: Section 230, EU Digital Services Act, UK Online Safety Bill https://image-analyzer.com/online-safety-legislation/
The Hill, Chris Mills Rodrigo, 'Antitrust, content moderation to dominate tech policy in 2021,' December 7th 2020 https://thehill-com.cdn.ampproject.org/c/s/thehill.com/policy/technology/528861-antitrust-content-moderation-to-dominate-tech-policy-in-2021?amp
CNBC, Lauren Feiner, 'Biden advisor, Bruce Reed, hints that Section 230 needs reform,' December 2nd 2020 https://www.cnbc.com/2020/12/02/biden-advisor-bruce-reed-hints-that-section-230-needs-reform.html
Financial Times, Hannah Murphy, Kiran Stacey, 'Now Republicans and Democrats alike want to rein in big tech,' January 12th 2021 https://www.ft.com/content/e7c1a64f-b2d9-423b-a86c-f36d1c4e71b7?segmentId=bf7fa2fd-67ee-cdfa-8261-b2a3edbdf916
The Daily Telegraph, Charles Hymas, 'Social media firms face being banned from UK for online harms, says minister,' October 7th 2020 https://www.telegraph.co.uk/news/2020/10/07/social-media-firms-face-banned-uk-online-harms-says-minister/
MediaWrites, Bryony Hurst, Theo Rees-Bidder, Bird & Bird, ‘Online Harms in the UK – significant new obligations for online companies and fines of up to 10 percent of annual global turnover for breach’, December 16th 2020 https://mediawrites.law/online-harms-in-the-uk-significant-new-obligations-for-online-companies-and-fines-of-up-to-10-of-annual-global-turnover-for-breach/
The Guardian, Alex Hern, 'Online Harms Bill – firms may face multibillion pound fines for content,' December 15th 2020 https://www.theguardian.com/technology/2020/dec/15/online-harms-bill-firms-may-face-multibillion-pound-fines-for-content
TechCrunch, Natasha Lomas, 'Online Harms Bill coming next year will propose fines of up to 10% of annual turnover for breaching duty of care rules,' December 14th 2020 https://techcrunch.com/2020/12/14/uk-online-harms-bill-coming-next-year-will-propose-fines-of-up-to-10-of-annual-turnover-for-breaching-duty-of-care-rules/
Silicon UK, Tom Jowitt, 'Online Safety Bill could fine firms 10 percent of turnover,' December 15th 2020 https://www.silicon.co.uk/e-management/social-laws/online-safety-bill-fine-10-percent-349526?cmpredirect