
Social Networks Can Help Protect Kids Online

By Jess Ullrich | Posted October 2, 2014

Amy Williams was fixing breakfast one morning when her 12-year-old daughter Emma burst into tears. It turned out that Emma, who was active on a popular social networking site, was being harassed online. The harassment came on the heels of pornographic content appearing in Emma’s friend feed before it was flagged as explicit and harmful. Williams did the only thing she could: she pulled the plug on her daughter’s involvement with the site.

“She enjoys her time on social media, I know,” says Williams. “But as her parent, if I can’t trust the network to control what she’s seeing, I have to make a hard decision.”

Unfortunately, Williams isn’t alone. For parents in the digital age, protecting their kids online is a top concern. According to a recent study by Internet safety company Knowthenet, 59% of kids under age 10 have joined a social network. While many larger social networking sites don’t allow users under 13 to create profiles, they’ve come under fire for having ineffective age verification systems. Increasingly, families talk to their kids about the importance of online safety or monitor their children’s use of social media with dedicated software programs.

Unfortunately, discussion and monitoring can only go so far when it comes to protecting kids online. A growing number of social media sites are struggling to find the best way to handle issues like cyberbullying and inappropriate content.

Abusive comments, photos, and videos can easily end up going live on social networks before human moderators are able to flag and remove them. This is especially true on social media sites that aren’t taking a proactive approach to moderation.

Chris Priebe, Founder & CEO of CommunitySift, a text-classification and risk-management platform for online communities, says that website owners who don’t have effective systems in place are putting both their users and their businesses at risk.

“A community can quickly become toxic,” Mr. Priebe says. “And if a community becomes toxic, it’s really hard to bring it back. Many times when a social network is getting started, 1 or 2% of the users will try to push boundaries. If there are no boundaries in place, other people start to follow. What happens then is that the good members start to leave, and that small percentage of users starts taking control of the community. This is something that’s completely avoidable.”

As a result, social networks and sites that build communities are forced to take increasingly strong steps toward managing user-generated content. But if you’re a community manager or product developer struggling to determine the right path forward, what should you do? From interviews with several experts in the field, three tips emerged.

1. Set Clear Community Standards and Enforce Them.

With nearly 60% of kids under age 10 reporting that they are on social media, it’s critical for social networks to have clear terms of service that are strictly enforced. As Chris Priebe notes, there will always be a handful of community members who attempt to push the boundaries. Demonstrating that your business takes these boundaries seriously can help reduce the risk for your site members and your company. Establishing guidelines in advance, in consultation with legal counsel, gives you the foundation to remove users and pursue other actions if interactions take an unexpected turn.
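What “enforce them” looks like in practice varies, but one common pattern is a graduated strike system: a first violation draws a warning, and repeat violations escalate to suspension and then a permanent ban. Here is a minimal sketch of that pattern in Python; the action ladder and names are hypothetical, not drawn from any network mentioned in this article.

```python
# A minimal sketch of graduated terms-of-service enforcement.
# The action ladder and names below are hypothetical examples.
from dataclasses import dataclass

@dataclass
class UserRecord:
    user_id: str
    strikes: int = 0

# Hypothetical escalation ladder: warn first, then suspend, then ban.
ACTIONS = ["warn", "suspend_7_days", "permanent_ban"]

def record_violation(user: UserRecord) -> str:
    """Add a strike and return the enforcement action it triggers."""
    user.strikes += 1
    # Once strikes pass the end of the ladder, stay on the last rung.
    index = min(user.strikes - 1, len(ACTIONS) - 1)
    return ACTIONS[index]

user = UserRecord("user_123")
print(record_violation(user))  # warn
print(record_violation(user))  # suspend_7_days
print(record_violation(user))  # permanent_ban
```

The key design point is predictability: because the ladder is written down in advance and mirrored in your published terms of service, enforcement decisions are consistent rather than ad hoc.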

2. Employ A Filtering System.

One approach to protecting the youngest members of your social networking site is to employ a filtering system for content that could be considered harmful. Different sites approach filtering in different ways. For example, one social network may develop an age-based rating system that filters content meeting certain designated criteria, while another may pre-moderate user-generated content, adding it to a queue for review. Whatever approach you take, a filtering system can help ensure that offensive content never reaches your users.
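As a rough illustration, the sketch below combines the two approaches just described: uploads without a rating are held in a pre-moderation queue, and rated content is gated by viewer age at display time. The rating labels, age cutoffs, and function names are all hypothetical.

```python
# A minimal sketch of an age-based rating gate plus a pre-moderation queue.
# Rating labels, age cutoffs, and names are hypothetical examples.
from collections import deque
from typing import Optional

RATING_ORDER = {"everyone": 0, "teen": 1, "mature": 2}
review_queue: deque = deque()  # unrated content awaiting a human reviewer

def max_rating_for(viewer_age: int) -> str:
    """Map a viewer's age to the most restrictive rating they may see."""
    if viewer_age < 13:
        return "everyone"
    if viewer_age < 18:
        return "teen"
    return "mature"

def handle_upload(content: str, rating: Optional[str]) -> str:
    """Pre-moderation: anything without a rating is queued, not published."""
    if rating is None:
        review_queue.append(content)
        return "queued_for_review"
    return "published"

def can_view(content_rating: str, viewer_age: int) -> bool:
    """Age gate: hide content rated above the viewer's allowed level."""
    return RATING_ORDER[content_rating] <= RATING_ORDER[max_rating_for(viewer_age)]

print(handle_upload("vacation photo", rating=None))  # queued_for_review
print(can_view("mature", viewer_age=12))             # False
```

Whether you gate at upload time or display time, the goal is the same: nothing reaches a young user’s feed until it has either been rated or reviewed.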

3. Pair An Integrated Content Solution With Human Moderators.

Another important conclusion that emerged from these interviews is the importance of taking an integrated approach to managing user-generated content. As social networks grow, the sheer amount of content being created daily can become unmanageable for a team of human reviewers. “The task of identifying pornographic media within the mass of legitimate content is a daunting challenge,” says Crispin Pikes, CEO and Founder of Image Analyzer, a UK-based image scanning technology provider. “Most providers deploy manual moderation teams to review each image, which is a costly solution that simply does not scale as volumes increase.” In terms of volume and cost, there comes a point where it makes sense to employ an automated solution to complement your team of human reviewers. Choosing a customizable program that can be tailored to your business’s needs can help reduce the workload on your reviewing staff and cut costs.
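One common way to combine automation with human review is threshold-based triage: an automated scanner assigns each image a risk score, content the model is confident about is published or removed automatically, and only the ambiguous middle band is routed to human moderators. The sketch below illustrates that pattern; the score_image() stub and both thresholds are hypothetical and do not represent Image Analyzer’s actual API.

```python
# A minimal sketch of threshold-based triage pairing an automated
# scanner with human moderators. The scorer and thresholds are
# hypothetical, not Image Analyzer's actual API.
from collections import deque

BLOCK_THRESHOLD = 0.90  # scores at or above this are removed automatically
ALLOW_THRESHOLD = 0.10  # scores at or below this are published automatically
human_queue: deque = deque()

def score_image(image_id: str) -> float:
    """Stand-in for an automated image-risk model (0 = safe, 1 = explicit)."""
    return 0.5  # pretend the model is unsure about this image

def triage(image_id: str) -> str:
    score = score_image(image_id)
    if score >= BLOCK_THRESHOLD:
        return "auto_blocked"
    if score <= ALLOW_THRESHOLD:
        return "auto_published"
    # Only the ambiguous middle band consumes human-moderator time.
    human_queue.append((image_id, score))
    return "sent_to_human_review"

print(triage("img_001"))  # sent_to_human_review
```

Tuning the two thresholds is the cost lever: widening the automatic bands cuts moderator workload, while narrowing them sends more borderline cases to humans.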

Today’s parents face unique challenges when it comes to protecting their kids online. Unfortunately, talking to your children about online safety sometimes isn’t enough. If social networking sites don’t have adequate systems in place to protect their users, community members can be exposed to harmful content, and tragedies can follow.