Isolating Online Risk With Content Moderation

There are over 1.3 billion people regularly using YouTube, and if you add all the other popular video-sharing platforms, billions of hours of video content are being consumed online. Google has stated that over 5 billion videos are viewed on YouTube daily, and each minute an additional 300 hours of video are uploaded.

Many of the online giants, including Facebook, Google, Twitter, and Snap, are based in California. This gives them, in general, an American-focused culture that promotes free speech. Banning content, or checking everything that users upload, goes against what many people see as their right to freedom of speech protected by the First Amendment.

However, in the real world, some people abuse these public platforms in the modern-day equivalent of a tourist scratching their initials into the side of a historically significant monument. Some actively use open platforms for material that most people want to be shielded from: hate speech, violence, harassment, spam, and fraud.

Companies such as Google and Facebook have accepted that they have a duty to protect their users from the kind of material that most people would find offensive. However, there are many different approaches that can be taken:

  1. Check everything before publishing it
  2. Allow the users to report anything offensive
  3. Use technology to automatically detect risk of a violation
  4. Prioritize checks based on brand risk and user satisfaction

Option 1 works in an environment like the Google Play store. Google will not allow an app to be published on its Android platform until it has been reviewed for compliance with Google's standards. That works for apps: although there are over 2.6 million apps on Google Play, developers are happy to wait a couple of days when they first upload a new app. It's like an initiation process.

Clearly, this cannot work for a platform like YouTube. With over 300 hours of content uploaded every minute, and accelerating, each minute brings 18,000 minutes of new video, so you would need 18,000 people watching simultaneously, around the clock. Covering that with three 8-hour shifts means a team of at least 54,000 reviewers, before accounting for sickness, vacations, or delays when content needs to be re-checked. Basically, you would need more than a football stadium full of people just watching content constantly.
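That 54,000 estimate can be checked with a few lines of arithmetic. The 300 hours/minute figure comes from Google's own statement; the 8-hour shift length is an assumption made here for illustration:

```python
# Rough estimate of the headcount needed to pre-screen every upload.
UPLOAD_HOURS_PER_MINUTE = 300                               # figure quoted by Google
MINUTES_UPLOADED_PER_MINUTE = UPLOAD_HOURS_PER_MINUTE * 60  # 18,000

# One reviewer can watch one minute of video per minute, so 18,000
# people would have to be watching at any given moment.
simultaneous_reviewers = MINUTES_UPLOADED_PER_MINUTE

# Covering 24 hours with 8-hour shifts (an assumed shift length)
# triples the headcount.
SHIFTS_PER_DAY = 3
total_headcount = simultaneous_reviewers * SHIFTS_PER_DAY

print(simultaneous_reviewers)  # 18000
print(total_headcount)         # 54000
```

And that is only for keeping pace with today's upload rate, with no allowance for growth, breaks, or second opinions.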

Option 2 has been used in the past, but by itself it is becoming unacceptable: the sheer volume of offensive material being uploaded means that most users will regularly see something they feel they need to report. Once that happens, the user experience declines sharply and advertisers start complaining about their ads being placed next to extremely offensive material.

The best solution being deployed at present is a blend of options 2, 3, and 4. Give users the option to report anything they find offensive, but also deploy artificial intelligence so that every piece of content is scanned as it is uploaded. Most offensive videos can then be caught and sent for review before they are ever available on the site.
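A minimal sketch of that blended flow might look like the following. The classifier, the threshold, and all names here are illustrative assumptions, not any real platform's API:

```python
from dataclasses import dataclass
from typing import List

# Illustrative cut-off, not a value used by any real platform.
REVIEW_THRESHOLD = 0.7

@dataclass
class Upload:
    video_id: str
    risk_score: float
    status: str = "pending"

review_queue: List[Upload] = []

def score_risk(video_id: str) -> float:
    """Stand-in for a hypothetical AI classifier; a real one would
    analyse the video content itself, not its id."""
    return 0.9 if "risky" in video_id else 0.1

def on_upload(video_id: str) -> Upload:
    """Options 3 and 4: scan every upload and hold risky content
    back for human review before it ever goes live."""
    upload = Upload(video_id, score_risk(video_id))
    if upload.risk_score >= REVIEW_THRESHOLD:
        upload.status = "held_for_review"
        review_queue.append(upload)
    else:
        upload.status = "published"
    return upload

def on_user_report(upload: Upload) -> None:
    """Option 2: users can still report anything already published."""
    if upload.status == "published":
        upload.status = "held_for_review"
        review_queue.append(upload)
```

A human moderator would then work through `review_queue` and make the final publish-or-remove decision, which is where the moderation team comes in.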

This is where our team gets involved. Content flagged as suspect by the AI system, or reported by a user, enters a content moderation process in which our team reviews it against a specific, agreed set of rules. Sometimes content has been flagged mistakenly, but usually there is a good reason for the additional checks. Our team makes the final call on whether it should be published or not.

Publishing offensive content does not just offend individual users of a service like YouTube. It can seriously damage the reputation of the company hosting it, and if that company's business model depends on automatically placing adverts around content, it can quickly drive advertisers to take their budgets to other networks.

Leave a comment here if you have any questions about how a blended content moderation service works or get in touch directly via my LinkedIn.

Photo by Esther Vargas licensed under Creative Commons.
