
IA Blog: How Image Analyzer contributes to creating a safer internet


8th February 2022 by Cris Pikes, CEO Image Analyzer

Safer Internet Day is observed on 8th February each year to promote the positive aspects of digital technology and to highlight young people’s role in creating safer spaces online. As a member of the Online Safety Tech Industry Association (OSTIA), Image Analyzer takes this day to reflect on how we can contribute to creating safer online spaces.

Young people are accustomed to online learning and to interacting with each other through mobile messaging and online gaming, and many are adept at creating their own video and streaming content. As providers of such a crucial learning and communication environment, online platform operators have a huge responsibility to prevent their service users from being harmed by the content and people that they encounter online.

When Meta announced plans to add end-to-end encryption to Messenger and Instagram, the UK Government sounded the alarm, warning that encryption could prevent child protection agencies from doing their job. This fear is based on some very sad statistics. According to NSPCC figures, 52% of the 9,470 cases of child sexual abuse images reported to police forces in England, Wales and Scotland were shared using Meta-owned apps.

The police depend on digital evidence to be able to prosecute abusers and protect children. Encrypted messages could shroud evidence of grooming and coercion of young people and the sharing of image-based abuse and self-harm videos. The NSPCC has estimated that up to 70% of digital evidence could be hidden as a result of end-to-end encryption being added to Instagram.

The UK government is also concerned that end-to-end encryption could hamper Ofcom’s enforcement of the Online Safety Bill when it becomes law.

Of course, there is a very delicate balance between protecting children from predators, who deliberately target the messaging services that children predominantly use, and protecting the privacy of law-abiding citizens’ communications.

In response to this thorny issue, in August 2021 the UK government set up its Safety Tech Challenge Fund, to spur the development of new AI-powered technologies that can automatically scan message content, without impacting the privacy of legitimate service users.

In November 2021, encryption platform provider Galaxkey, Image Analyzer, and age-verification technology company Yoti were jointly awarded a portion of the Safety Tech Challenge Fund to collaboratively develop AI-powered visual content analysis technology that can automatically detect child sexual abuse material that predators try to share within end-to-end encrypted environments.

The technology is designed to assist law enforcement agencies and also to help online platform operators to meet their duty of care towards their service users. This ground-breaking technology collaboration will clean data streams and preserve the privacy of legitimate encrypted service users. The Safety Tech Challenge Award recipients are due to present their proofs of concept to the UK government next month.

On Safer Internet Day and every day, Image Analyzer works to assist technology companies to abide by their legal and moral duty to operate safe online services, particularly those used by children. We want our children to gain all of the benefits of the digital arena without being exposed to preventable risks.

To discover more about how Image Analyzer could help you to make your online service safer for users, please contact us for a demonstration.

Background information:

Image Analyzer holds US and European patents for its AI-based content moderation technology, Image Analyzer Visual Intelligence Service (IAVIS), which identifies visual risks in milliseconds, with near zero false positives. Organizations use IAVIS to automatically moderate previously unseen images, video and live-streamed footage uploaded by people misusing their access to digital platforms.

IAVIS helps organizations to combat workplace and online harms by automatically categorising and filtering out high-risk-scoring images, videos and live-streamed footage. By automatically removing more than 90% of harmful visual material, IAVIS enables organizations to uphold brand values and aids compliance with user safety regulations requiring the timely removal of harmful visual material. Its ability to instantly detect visual threats within newly-created images and live-streamed video prevents them from being uploaded to digital platforms where they could cause further harm to other site users, particularly children.

Applying AI-powered visual content moderation that is trained to identify specific visual threats, IAVIS gives each piece of content a risk probability score and speeds the review of users’ posts. The technology is designed to constantly improve the accuracy of its core visual threat categories, with simple displays that allow human moderators to easily interpret threat category labels and probability scores. IAVIS can scale to moderate increasing volumes of visual content without impacting performance or user experience.
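The blog does not describe the IAVIS API itself, but the scoring model outlined above can be illustrated with a minimal sketch. Assuming a service that returns per-category risk probability scores for each image or video frame, a platform might map those scores to moderation actions along these lines; the category names, threshold values and decision logic below are illustrative assumptions, not the actual IAVIS interface.

```python
# Hypothetical sketch: acting on per-category risk probability scores from a
# visual content moderation service. Categories, thresholds and the decision
# rules are illustrative assumptions, not the real IAVIS API.
from dataclasses import dataclass

# Illustrative per-category thresholds: scores at or above these values send
# the content for blocking or human review. Zero-tolerance categories get
# very low thresholds.
THRESHOLDS = {
    "csam": 0.01,
    "graphic_violence": 0.70,
    "self_harm": 0.60,
    "adult": 0.80,
}

@dataclass
class ModerationDecision:
    action: str            # "block", "review" or "allow"
    category: str | None   # highest-risk flagged category, if any
    score: float

def decide(scores: dict[str, float]) -> ModerationDecision:
    """Map per-category risk probability scores to a moderation action."""
    flagged = [
        (cat, score) for cat, score in scores.items()
        if score >= THRESHOLDS.get(cat, 1.0)
    ]
    if not flagged:
        return ModerationDecision("allow", None, max(scores.values(), default=0.0))
    category, score = max(flagged, key=lambda item: item[1])
    # Very high-confidence hits are removed automatically; borderline hits go
    # to a human moderator, which helps keep false positives low.
    action = "block" if score >= 0.95 else "review"
    return ModerationDecision(action, category, score)

# Example: a frame scored by the (hypothetical) analyzer.
print(decide({"csam": 0.002, "graphic_violence": 0.91, "adult": 0.15}))
# -> ModerationDecision(action='review', category='graphic_violence', score=0.91)
```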

Organizations are striving to create and maintain a positive user experience by protecting their online communities from harmful visual material, and to comply with impending user safety laws. We are here to help you.

For further information on IAVIS, please email info@image-analyzer.com or contact us.

References:

Safer Internet Day 2022, https://saferinternet.org.uk/safer-internet-day/safer-internet-day-2022

Gov.UK, press release, 17th November 2021, ‘Government funds new tech in fight against online child abuse’, https://www.gov.uk/government/news/government-funds-new-tech-in-the-fight-against-online-child-abuse

Safety Tech Network, 8th September 2021, ‘Government launches Safety Tech Challenge Fund to tackle online child abuse in end-to-end encrypted services’, https://www.safetytechnetwork.org.uk/articles/government-launches-safety-tech-challenge-fund-to-tackle-online-child-abuse-in-end-to-end-encrypted-services

Safety Tech Network, ‘Safety Tech Challenge Fund’, https://www.safetytechnetwork.org.uk/innovation-challenges/safety-tech-challenge-fund

‘Safety Tech – unleashing the potential for a safer internet’, https://vimeo.com/527425089

Gov.UK, 12th May 2021, ‘Draft Online Safety Bill’, https://www.gov.uk/government/publications/draft-online-safety-bill

The Daily Telegraph, 19th April 2021, ‘Priti Patel accuses Facebook of putting profit before children’s safety’, https://www.telegraph.co.uk/news/2021/04/18/priti-patel-accuses-facebook-putting-profit-childrens-safety/

The Guardian, 18th April 2021, ‘Priti Patel says tech companies have moral duty to safeguard children’, https://www.theguardian.com/society/2021/apr/19/priti-patel-says-tech-companies-have-moral-duty-to-safeguard-children

Image Analyzer is a world leader in the provision of automated content moderation and user-generated content moderation for image, video and streaming media. Our AI content moderation delivers unparalleled levels of accuracy, producing near-zero false-positive results in milliseconds. If you have any questions about our content moderation software or image moderation software, please get in contact today.