Content Moderation And The Effective Use Of AI In The World Of Child Online Safety

Published by Wranga | November 25, 2022

Written by Venkatesh Ramamrat

Content moderation is the process of regulating and monitoring user-generated content against pre-arranged guidelines and rules. These rules are then enforced, often through AI content moderation such as Wranga’s proprietary AI. It is therefore important to understand the stages through which content is created, analysed, and acted upon.

Pre-moderation

Pre-moderation involves assigning moderators to check your audience’s content submissions before they are made public. The purpose is to ensure that the content complies with defined criteria, protecting the online community from harm or legal threats that could negatively impact both customers and the business.
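
As a minimal sketch of that gate, the Python snippet below holds every submission back until it passes review; the `violates_guidelines` keyword check is a hypothetical stand-in for a real moderation model.

```python
# Minimal pre-moderation sketch: nothing goes live until it passes the check.
# `violates_guidelines` is a hypothetical stand-in for a real AI model.

BLOCKED_TERMS = {"spam", "abuse"}  # placeholder guideline list (assumption)

def violates_guidelines(text: str) -> bool:
    return any(term in text.lower() for term in BLOCKED_TERMS)

def submit(text: str, public_feed: list[str]) -> bool:
    """Publish only if the submission clears pre-moderation."""
    if violates_guidelines(text):
        return False          # rejected before anyone sees it
    public_feed.append(text)  # made public only after approval
    return True

feed: list[str] = []
print(submit("Hello community!", feed))  # True  -> published
print(submit("free spam offer", feed))   # False -> never published
```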

Post-moderation

With post-moderation, content is submitted in real time, and users can report content they deem harmful after the fact. The AI review process follows the same workflow as pre-moderation: harmful content is automatically deleted based on established criteria.
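
A minimal sketch of that flow, again with a hypothetical `is_harmful` placeholder standing in for the AI check: content goes live immediately and is deleted only after review.

```python
# Minimal post-moderation sketch: publish first, review on report.
# `is_harmful` is a hypothetical stand-in for the AI criteria check.

def is_harmful(text: str) -> bool:
    return "abuse" in text.lower()  # placeholder criterion (assumption)

feed: list[str] = []

def post(text: str) -> None:
    feed.append(text)  # visible in real time, no up-front gate

def review_reported(text: str) -> None:
    """Run the AI check on reported content; delete it if harmful."""
    if text in feed and is_harmful(text):
        feed.remove(text)

post("great match today")
post("targeted abuse at a player")
review_reported("targeted abuse at a player")  # removed after the fact
print(feed)  # ['great match today']
```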

Reactive moderation

Some online communities have established so-called ‘house rules’. These communities rely on members to flag any content they identify as breaching those rules, or that is otherwise offensive or undesirable. Reactive moderation can be used alongside pre- and post-moderation methods as an extra layer of protection in case the AI technology misses anything.
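
A simple way to picture reactive moderation is a flag counter that escalates a post for review once enough members report it. The sketch below is illustrative; the threshold of three flags is an assumption, not a standard.

```python
# Reactive-moderation sketch: member flags accumulate per post, and a post
# is escalated for review once flags cross a threshold (value is assumed).
from collections import Counter

FLAG_THRESHOLD = 3  # illustrative assumption
flags: Counter[str] = Counter()

def flag(post_id: str) -> bool:
    """Record a member's flag; return True when review is triggered."""
    flags[post_id] += 1
    return flags[post_id] >= FLAG_THRESHOLD

for _ in range(3):
    triggered = flag("post-42")
print(triggered)  # True -> escalate post-42 for review
```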

Distributed moderation

This method allows community members to cast votes on content submissions using a rating system. Once the ratings are in, the average score determines whether the submission is published, based on whether it is deemed in line with the community’s rules.
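
A minimal sketch of this voting logic, assuming an illustrative 1-to-5 rating scale and a cut-off of 3.0 (neither value is prescribed by any standard):

```python
# Distributed-moderation sketch: the average member rating decides whether
# a submission is accepted. Scale and cut-off are illustrative assumptions.
from statistics import mean

def accept(ratings: list[float], cutoff: float = 3.0) -> bool:
    """Accept the submission if the average rating meets the cut-off."""
    return bool(ratings) and mean(ratings) >= cutoff

print(accept([4, 5, 3]))     # True  -> published
print(accept([1, 2, 2, 1]))  # False -> rejected by the community
```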

AI-powered moderation

As the statistics below make evident, there is a misalignment between the volume of user-generated content posted online and human moderation capacity. This points to automation as the solution for content moderation.

Need for AI

The ongoing increase in user-generated content makes it difficult for human moderators to handle such large volumes of information. Manually checking online content becomes even more daunting as social media reshapes user expectations, with users becoming more demanding and less tolerant of content-sharing rules and guidelines. This is where AI-powered content moderation comes in.

  • According to Statista, every minute 240,000 images are shared on Facebook, 65,000 images are posted on Instagram, and 575,000 tweets are posted on Twitter.
  • According to Polaris Market Research, the global user-generated content platform market was worth over $3 billion in 2020 and is projected to grow at a CAGR of 27.1%, exceeding $20 billion by 2028.
  • According to Statista, roughly 500 hours of video were uploaded to YouTube every minute as of February 2020.

Moderation AI technology

Natural language processing (NLP) and other AI and machine-learning-based models have opened up significantly more sophisticated interventions, with classification more readily available than ever. AI/ML can also analyse broader patterns, spanning not just text but voice transcriptions as well, to identify behaviours such as griefing, that is, deliberately giving other players a hard time.

Text

  • Natural language processing (NLP) algorithms are used to understand the intended meaning behind text and decipher its emotions.
  • Sentiment analysis can identify the tone of a given text and group it into categories such as bullying, anger, harassment, and sarcasm.
  • Entity recognition is another AI content moderation technique, extracting names, locations, and company names (a toy sketch of all three techniques follows below).
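
The sketch below illustrates all three ideas with hand-rolled keyword rules; the lexicon and entity list are placeholder assumptions standing in for real NLP, sentiment, and entity-recognition models.

```python
# Toy illustration of the three text techniques above; keyword rules stand
# in for real NLP, sentiment, and entity-recognition models.

HARASSMENT_CUES = {"loser", "idiot"}      # placeholder lexicon (assumption)
KNOWN_ENTITIES = {"London", "Acme Corp"}  # placeholder entity list (assumption)

def classify_tone(text: str) -> str:
    """Crude sentiment/tone bucket: 'harassment' or 'neutral'."""
    lowered = text.lower()
    return "harassment" if any(cue in lowered for cue in HARASSMENT_CUES) else "neutral"

def extract_entities(text: str) -> list[str]:
    """Crude entity recognition: match against a known-entity list."""
    return [entity for entity in KNOWN_ENTITIES if entity in text]

msg = "You are a loser, see you in London"
print(classify_tone(msg))     # harassment
print(extract_entities(msg))  # ['London']
```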

Voice

Voice analysis leverages several other AI-powered solutions and can include translating voice to text, running NLP and sentiment analysis, and even interpreting tone of voice.
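
A sketch of that pipeline, where `transcribe` and `classify_tone` are hypothetical placeholders for a real speech-to-text service and a sentiment model:

```python
# Voice-moderation sketch: transcribe first, then reuse the text tooling.
# Both helpers are hypothetical placeholders, not real APIs.

def transcribe(audio: bytes) -> str:
    """Placeholder speech-to-text step."""
    return "you are a loser"  # canned transcript for illustration

def classify_tone(transcript: str) -> str:
    """Placeholder sentiment/tone check on the transcript."""
    return "harassment" if "loser" in transcript else "neutral"

def moderate_voice(audio: bytes) -> str:
    """Voice moderation = speech-to-text + the same text analysis."""
    return classify_tone(transcribe(audio))

print(moderate_voice(b"<audio bytes>"))  # harassment
```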

Image

Automated image content moderation uses text classification alongside vision-based search techniques. If there is text within the image, optical character recognition (OCR) extracts it so the entire piece of content can be moderated. Computer vision, a subfield of AI, trains computers to comprehend and analyse the visual world in order to identify harmful images. The AI moderation system comprehends and tags the content and, if needed, notifies the moderation team of anything offensive or disturbing.
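
One way to sketch this flow, with hypothetical `run_ocr` and `vision_label` placeholders standing in for real OCR and computer-vision models:

```python
# Image-moderation sketch: OCR any embedded text, check it with the text
# rules, and ask a vision model for a verdict. Helpers are placeholders.

def run_ocr(image: bytes) -> str:
    """Placeholder OCR step returning text found in the image."""
    return "free spam offer"  # canned result for illustration

def vision_label(image: bytes) -> str:
    """Placeholder computer-vision verdict on the visuals."""
    return "safe"

def moderate_image(image: bytes) -> bool:
    """Flag the image if either the embedded text or the visuals fail."""
    text_bad = "spam" in run_ocr(image)
    visual_bad = vision_label(image) != "safe"
    return text_bad or visual_bad  # True -> notify the moderation team

print(moderate_image(b"<image bytes>"))  # True (flagged via the OCR text)
```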

Video

Video content moderation automation combines the voice analysis, text, and image techniques discussed above.
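
A sketch of that composition, with all helpers as hypothetical placeholders: sampled frames go through the image path, the audio track through the voice path, and the video is flagged if either path flags it.

```python
# Video-moderation sketch: compose the earlier checks over frames and audio.
# Both helpers are hypothetical placeholders for the image and voice paths.

def moderate_frame(frame: bytes) -> bool:
    return False  # placeholder per-frame image check

def moderate_audio(audio: bytes) -> bool:
    return True   # placeholder voice check on the soundtrack

def moderate_video(frames: list[bytes], audio: bytes) -> bool:
    """Flag the video if any sampled frame or the audio track is flagged."""
    return any(moderate_frame(f) for f in frames) or moderate_audio(audio)

print(moderate_video([b"frame1", b"frame2"], b"audio"))  # True -> flagged
```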

Content Moderation and Children

To ensure child safety online in the field of content moderation, Wranga works on the even more critical task of analysing vast quantities of data, in the form of videos, movies, apps, and games, to determine what is appropriate for children. We have created a proprietary AI model that will soon move into development so that it can be deployed for content moderation on various platforms. To enable this transition, we work within the digital parenting ecosystem alongside the following stakeholders:

  • Government and child policy makers: Wranga monitors child-specific, market-specific content trends, identifies policy gaps, provides recommendations, and deploys policy changes so that the data delivered is based on decisions made by moderators aligned with the latest and most comprehensive policy guidance.
  • Managing cultural bias: By working closely in the field with students in schools and parents across India, our understanding of cultural context and demographics allows us to train the algorithm, accurately defining the demographics required and handling all aspects of diversity sourcing so that the data feeding the model is not subject to bias.
  • Digital parenting experts: Content moderation decisions are open to scrutiny. As global experts in the field of digital parenting, we are able to connect with the wider ecosystem of experts, learn the best processes, and invite experts to share knowledge and collaborate to make the system more robust and accurate.

This is such a mammoth task that we are always looking to collaborate with and learn from other individuals and organisations in the field of digital parenting. Many of these knowledge-sharing sessions will soon be available on the Wranga App, where you can find talks between our founder, Amitabh Kumar, and leading global parenting experts on Wranga Canvas.