
Content Moderation: AI vs Human Moderation

Blog
26th May 2020

Did you know that since the onset of the Covid-19 pandemic, Facebook’s daily active users have climbed past 1.73 billion (source: engadget.com), Zoom’s daily meeting participants exceed 200 million, and usage of Google’s video conferencing is 25 times higher (source: theatlantic.com)? The shared content varies from one user to another: from positive work-related interactions, communities coming together to help the most vulnerable, support and encouragement for healthcare workers, and humorous posts about the lockdowns, to atrocious content such as fake news, racist posts, child abuse and pornography. With this immense increase in content creation, uploading and consumption, the internet can become a dark place.

And because people are free to publish posts or stream live anonymously, that shield of anonymity makes it easier for them to cross the line. Content moderators who manage these online communities have their work cut out for them: reviewing User Generated Content (UGC) is not only a challenging but also a very demanding task.

Tech giants like Facebook, YouTube and other social media platforms were forced to close their moderation centers in March, sending most of their content moderators home for safety reasons and making a risky bet on AI to moderate content entirely. This left a serious void in the reviewing of ads and posts. Despite their efforts to curb myths and false information related to the pandemic, these platforms faced the harsh reality of gruesome content, like child pornography, leaking onto the internet.

As a result of the lockdowns, and out of concern for the safety of office-based employees, the content reviewing system was hugely incapacitated. Consequently, Facebook had to make the hard decision of calling its content moderators back while ensuring safety protocols were met, i.e. checking temperatures, reducing building capacity, and providing protective equipment, so that the reviewing system could resume blocking and filtering out child exploitation, terrorism and misinforming content.

Is AI reliable enough?

Situation awareness
Last year in March, a terrorist in New Zealand live-streamed the brutal killing of 51 people at two different mosques. Unfortunately, Facebook’s algorithms failed to detect and block the gruesome video in time: it took them 29 minutes to detect the brutality in a video that was watched live by nearly 4,000 people. In the aftermath, the company struggled to take down reposts of the video from other users. Although it uses the most advanced innovation and technology, its AI algorithms still failed to correctly interpret the ordeal.

Content & intent discernment
One of the drawbacks facing neural networks is their inability to correctly understand content and intent. In a call with analysts, Facebook’s CEO Mark Zuckerberg stated that it is much easier to train an AI system to detect nudity than to determine what is linguistically considered hate speech. According to Facebook’s statistics, its AI system correctly detects nude content 96% of the time, but struggles to distinguish acceptable nudity, e.g. breastfeeding, from prohibited depictions of sexual activity.

A good example of misinterpretation by AI algorithms is when a Facebook post by Norway’s Prime Minister was flagged as child pornography because it showed the famous image of the “Napalm girl”, a naked girl fleeing from an attack in Vietnam. Later on, Facebook apologized and restored the post.

And as the coronavirus continued to surge, Facebook experienced a massive bug in its News Feed spam filter that flagged URLs from genuine websites, like USA Today and Buzzfeed, that were sharing coronavirus-related content (source: techcrunch.com), most likely because the AI systems misinterpreted the content.
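The pattern behind these failures is that a classifier only emits a confidence score; everything else is platform policy. A minimal sketch of how such a score might be triaged, with ambiguous cases routed to human reviewers rather than auto-blocked (the function, score values and thresholds here are invented for illustration, not any platform’s actual settings):

```python
def route_content(score: float,
                  block_above: float = 0.95,
                  review_above: float = 0.60) -> str:
    """Decide what to do with content given an AI 'unsafe' confidence score."""
    if score >= block_above:
        return "auto-block"    # model is very confident: remove automatically
    if score >= review_above:
        return "human-review"  # ambiguous cases (e.g. breastfeeding) go to a person
    return "allow"

# Example scores a nudity classifier might emit:
print(route_content(0.99))  # explicit content -> auto-block
print(route_content(0.75))  # safe nudity, needs context -> human-review
print(route_content(0.10))  # clearly benign -> allow
```

Raising the review threshold reduces moderator workload but lets more borderline content through; lowering it catches more edge cases at the cost of more false alarms, which is exactly the trade-off the incidents above expose.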

Societal subjectivity
Because we are intrinsically diverse, our beliefs, values, cultures and religions differ from one region to another. What is considered acceptable in one country might be taboo in another: wearing a bikini, for example, is appropriate in most cultures but considered nudity in others. Since most Application Programming Interface (API) providers are based in the U.S. and Europe, they are often not in tune with the cultures of the more conservative parts of the world. So apart from the obviously explicit content, the question of what is acceptable is very country- and region-specific, and can only be handled effectively with human moderators from the different regions, to avoid the false positives and false negatives flagged by AI systems.
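One way to act on that subjectivity is to make moderation routing itself region-aware, so that an AI label resolves against local policy and anything uncovered defaults to a human reviewer from that region. A hypothetical sketch (the region names, labels and rules are invented for illustration):

```python
# Region-aware policy table: the same AI-assigned label can map to
# different actions depending on local norms.
REGIONAL_POLICY = {
    "region-a": {"swimwear": "allow", "explicit": "block"},
    "region-b": {"swimwear": "human-review", "explicit": "block"},
}

def resolve(label: str, region: str) -> str:
    """Map an AI content label to an action under a region's policy."""
    policy = REGIONAL_POLICY.get(region, {})
    # Unknown labels or regions fall through to a local human reviewer.
    return policy.get(label, "human-review")

print(resolve("swimwear", "region-a"))  # allowed under local norms
print(resolve("swimwear", "region-b"))  # same content, reviewed locally
print(resolve("meme", "region-a"))      # uncatalogued label -> human review
```

Defaulting the fall-through case to human review, rather than to allow or block, is the design choice that keeps culturally ambiguous content in front of moderators who understand the local context.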

Racial disparity
In a content moderation study conducted by Nanonets, the accuracy of two API systems was assessed in detecting a Not Safe For Work (NSFW) image. The picture contained a nude Japanese woman in a kimono. Because the neural networks were trained on pictures of European individuals, they failed to flag the image as NSFW. As a result, users not based in the EU or U.S. could upload offensive content without the AI systems blocking them.

Stay tuned for part two….

 


Article by: Evelyn Kamau
