Using Local Insight To Improve Content Moderation

The world is becoming ever more connected. Around 7.6 billion people are alive today, and roughly 5.1 billion of them use a mobile device to view, download, buy, and sell products and services.

Facebook remains the most used social network today, but YouTube is close behind, and WhatsApp has evolved from a simple messaging tool into something widely considered a social network because of its group functionality. WeChat, which is predominantly used in China, now has more users than the global total for Instagram. Volume growth on platforms such as Uber and Airbnb is creating millions of online interactions every day.

As I wrote in my last article, keeping track of everything being uploaded is a real challenge. All platforms and social networks are taking their responsibility to check content seriously – shielding users from hate speech, violence, harassment, spam, misinformation, fraud, and other types of abuse.

But although this is a global problem, there is an interesting and problematic local dimension. Many of the emerging economies in Asia and Africa are adopting social tools faster than the highly developed nations, but the volume of bad content is also often growing faster in these markets. You would expect more bad content as more people come online, yet in some regions the need to moderate content appears to be growing faster than in others.

But this is not just a question of bad actors in specific regions; there is a problem of local knowledge that makes it hard to moderate content without local insight. The spate of lynchings in India, where mob violence has been fueled by fake news and rumors shared on social networks – especially WhatsApp – is a good example. The Indian authorities have struggled to contain this violence simply because it is hard to know where a problem might occur: false rumors about child abduction can spread quickly, and terrified people react violently when they believe they are being approached by the perpetrators.

Platforms can attempt to police and control fake news, but as the Indian example shows, local insight is often essential. If your content moderation team sees users sharing a warning about a dangerous criminal on the loose, it may seem harmless enough – it even looks useful for the public to share such warnings. But if the warning is false or malicious, and your moderation team in Europe knows nothing about the problems fake news has caused in India, an opportunity to shield users will be missed.

I believe there are three steps that executives planning a content moderation strategy need to think about, beyond the basic steps of setting policies around what should be moderated:

Be proactive: always step in and check. Seemingly innocuous content may not be so innocent once local news or context is applied, so keep checking even when content passes the most obvious screens, such as those for violence or nudity.

Be aware of local threats: take an active interest in understanding the online trends in the areas where your customers are located. What is driving local social volumes up and what type of content are people sharing? Are local news stories creating specific trends that could become dangerous?

Optimize your location strategy: I would recommend not placing your entire content moderation team in a single location or multilingual hub. As the earlier example from India shows, moderators without awareness of local issues may see no danger in the content – you need local insight and knowledge to effectively moderate content that is being produced globally. The easiest way to achieve this is a central team that works with satellite teams in the regions where local insight is most required.

Social media has largely been a force for good, bringing friends and family closer together, but now that it is so ubiquitous, there is a greater need than ever to ensure that users are protected from malicious content.

Leave a comment here if you have any questions about local insight into content moderation or get in touch directly via my LinkedIn. In my next article I plan to explore the difficulties of moderating content that is being streamed live – how do you check on content that is being uploaded as it is being created?

Photo by Tim Reckmann licensed under Creative Commons.

