Managing Content Moderation In A Social Society

Content moderation is increasingly in the news today. It might be easy to assume that Artificial Intelligence (AI) systems can detect offensive social media posts and automatically delete them, but online moderation isn’t quite that simple. It takes automation and human judgment in equal measure, and it takes both the company and the community.

The figures from YouTube give an example of why content moderation is so difficult. Every day users upload around 432,000 hours of new video content. That’s 18,000 hours of new video for every hour of the day! If human moderators watched every hour of new footage and worked 8 hours a day, a team of at least 50,000 people would need to be employed – constantly – just to watch new content.
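A quick back-of-the-envelope calculation makes the scale clear. The upload figure comes from the post above; the 8-hour workday is the assumption stated there, and the numbers below are only that rough estimate, not an official staffing figure.

```python
# Back-of-the-envelope check of the moderation staffing estimate above.
HOURS_UPLOADED_PER_DAY = 432_000   # new YouTube video uploaded per day (figure from the post)
WORK_HOURS_PER_DAY = 8             # assumed moderator workday

hours_per_clock_hour = HOURS_UPLOADED_PER_DAY / 24
moderators_needed = HOURS_UPLOADED_PER_DAY / WORK_HOURS_PER_DAY

print(f"New video per hour of the day: {hours_per_clock_hour:,.0f} hours")  # 18,000
print(f"Moderators needed just to watch it all: {moderators_needed:,.0f}")  # 54,000
```

Even this simple estimate lands above 50,000 full-time viewers, before counting time for judgment calls, appeals, or breaks.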

The amount of new content is growing rapidly, especially as phones with unlimited data plans become more common. Rather than recording a video and uploading it later, users increasingly stream live to their social networks using tools such as Facebook Live or Periscope. Before long, people may be life-streaming everything they do in real time.

Earlier this month, Facebook published a report detailing their own efforts to moderate content uploaded to their network. It’s notable that in the first quarter of this year Facebook deleted 1.9 million posts related to Al Qaeda or ISIS – up from 1.1 million in the same period last year. Questionable content that needs to be deleted can be related to terrorism, racism or hate speech, pornography, or violence.

Facebook is often criticized when offensive content remains on their network for too long. They have their own screening systems, but occasionally something slips through, and then they rely on the community to report it. However, as they note in the recent report, Facebook removed 99.5% of terrorism-related posts before any users ever saw them.

Facebook is not the only social network under close scrutiny, but they have been forthright in making their efforts to improve content moderation not just effective but highly visible. The company plans to double its safety and security team to more than 20,000 people this year.

As the Facebook and YouTube statistics indicate, maintaining a safe environment online is not easy. Though all the social networks have community-monitoring systems in place, this cannot be the only line of defense, as that would require most users to act as censors during their everyday networking.

Fake accounts, spam, hate speech, nudity, graphic violence, and terrorism are all problems that the social networks need to control to make the online environment safe for all users. Facebook’s focus on transparency and regular reporting is a great start that I hope will be replicated by all the major social networks. With an increase in human moderators and improved AI scanning systems it should be possible to stay one step ahead of those who want to pollute the online environment.
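The hybrid approach described above – automated scanning backed by human moderators and community reports – can be sketched roughly as follows. This is a minimal illustration only, not any network’s actual pipeline: the thresholds, the placeholder classify() model, and the review queue are all hypothetical assumptions.

```python
from dataclasses import dataclass

# Minimal sketch of a hybrid moderation flow: an automated classifier handles
# clear-cut cases, while borderline posts and community reports go to humans.
# The thresholds and the classifier itself are purely illustrative assumptions.

AUTO_REMOVE_THRESHOLD = 0.95   # high confidence: remove before anyone sees it
HUMAN_REVIEW_THRESHOLD = 0.60  # uncertain: queue for a human moderator

@dataclass
class Post:
    post_id: str
    text: str

def classify(post: Post) -> float:
    """Placeholder for an ML model scoring how likely a post violates policy."""
    return 0.0  # a real system would return a learned probability

def moderate(post: Post, human_review_queue: list, community_reported: bool = False) -> str:
    score = classify(post)
    if score >= AUTO_REMOVE_THRESHOLD:
        return "removed_automatically"
    if community_reported or score >= HUMAN_REVIEW_THRESHOLD:
        human_review_queue.append(post)  # human moderators make the final call
        return "queued_for_human_review"
    return "published"
```

The point of the sketch is the division of labor: automation catches the obvious cases at scale, while humans – both paid moderators and the reporting community – handle the ambiguous ones.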


What do you think about this post?
Leave us your comment.
