Written by Venkatesh Ramamrat
Content moderation is the process of regulating and monitoring user-generated content against pre-established guidelines and rules. These rules are then enforced, often through AI content moderation such as Wranga’s proprietary AI. It is therefore important to understand the stages through which content is created, analysed, and acted upon.
Pre-moderation involves assigning moderators to check your audience’s content submissions before they are made public. The purpose is to ensure that the content is compliant with certain criteria, with the aim of protecting the online community from harm or legal threats that can negatively impact both customers and the business.
With post-moderation, content is published in real time, and users can report content they deem harmful after the fact. The AI review process follows the same workflow as pre-moderation: harmful content is automatically removed based on established criteria.
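The post-moderation flow described above can be sketched in a few lines: content goes live immediately, and a reported item is re-checked against the criteria and removed on a breach. This is a minimal illustration using a placeholder keyword rule set, not Wranga’s actual criteria.

```python
# A minimal sketch of post-moderation: publish first, review on report.
# The rule set below is an illustrative placeholder, not a real policy.

BANNED_TERMS = {"spamlink", "scamoffer"}  # hypothetical banned terms

def publish(post: str, feed: list) -> None:
    """Post-moderation: content goes live immediately, no pre-check."""
    feed.append(post)

def review_report(post: str, feed: list) -> bool:
    """Re-check a reported post against the criteria; remove on breach."""
    if any(term in post.lower() for term in BANNED_TERMS):
        feed.remove(post)
        return True   # post removed
    return False      # post kept

feed = []
publish("hello world", feed)
publish("buy now spamlink", feed)
removed = review_report("buy now spamlink", feed)
# after review, only "hello world" remains in the feed
```

In a pre-moderation variant, the same check would simply run inside `publish`, before the post is appended to the feed.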
Some online communities have established so-called ‘house rules’. These communities rely on members to flag any content they identify as being in breach of those rules, or that is otherwise offensive or undesirable. This approach can be combined with pre- and post-moderation methods as an extra layer of protection in case the AI technology misses anything.
This method allows community members to cast votes on content submissions using a rating system. Once the ratings are in, the average score determines whether the submission is accepted, based on whether it is deemed in line with the community’s rules.
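The average-rating rule above reduces to a one-line calculation. Here is a toy sketch, assuming submissions are accepted when the mean member rating meets a community-chosen threshold; the 3.0 threshold is an illustrative assumption.

```python
# Toy rating-based moderation: accept a submission when the average
# member rating meets a threshold. The threshold value is illustrative.

def is_accepted(ratings: list, threshold: float = 3.0) -> bool:
    """Accept a submission if the average rating meets the threshold."""
    if not ratings:
        return False  # no votes yet: hold the submission
    return sum(ratings) / len(ratings) >= threshold

print(is_accepted([4, 5, 3]))  # average 4.0 -> True (accepted)
print(is_accepted([1, 2, 2]))  # average ~1.67 -> False (rejected)
```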
As the statistics make evident, there is a misalignment between the volume of user-generated content posted online and human moderation capacity. This points to a solution: automating content moderation.
Need for AI
The ongoing increase in user-generated content makes it difficult for human moderators to deal with large volumes of information. The challenge of manually checking online content becomes even more immense as social media reshapes user expectations, with users growing more demanding and less tolerant of online content-sharing rules and guidelines. This is where AI-powered content moderation comes in.
According to Statista, every minute, 240,000 images are shared on Facebook, 65,000 images are posted on Instagram, and 575,000 tweets are posted on Twitter.
According to study results from Polaris Market Research, the global user-generated content platform market was worth over $3 billion in 2020, with projections to grow at a CAGR of 27.1%, reaching more than $20 billion by 2028.
According to research results from Statista, about 500 hours of video were uploaded to YouTube every minute as of February 2020.
Moderation AI technology:
Natural language processing (NLP), AI, and machine learning-based models have opened up significantly more sophisticated interventions, with classification more readily available than ever. AI/ML can also analyse broader patterns, not just text but also communication such as voice transcriptions, to identify behaviours like griefing, i.e. deliberately giving other players a hard time.
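To make the classify-then-intervene flow concrete, here is a minimal stand-in for an NLP toxicity classifier. A trained model would learn term weights from labelled data; this sketch uses a hand-written keyword-weight table, and the terms, weights, and thresholds are all illustrative assumptions.

```python
# Toy stand-in for a trained toxicity classifier: score a message by
# summing weights of flagged terms, then map the score to an action.
# The vocabulary, weights, and thresholds here are assumptions.

TOXIC_WEIGHTS = {"noob": 0.4, "uninstall": 0.5, "trash": 0.6}

def toxicity_score(message: str) -> float:
    """Sum the weights of flagged terms, capped at 1.0."""
    words = message.lower().split()
    return min(1.0, sum(TOXIC_WEIGHTS.get(w, 0.0) for w in words))

def moderate(message: str, threshold: float = 0.8) -> str:
    """Map a score to an action: block, send to a human, or allow."""
    score = toxicity_score(message)
    if score >= threshold:
        return "block"
    if score > 0.0:
        return "flag_for_review"  # a human moderator takes a look
    return "allow"
```

The point of the sketch is the workflow, not the scoring: a production system would swap the keyword table for a learned model but keep the same score-to-action mapping.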
Voice analysis leverages several other AI-powered solutions, including translating voice to text, running NLP and sentiment analysis, and even interpreting the speaker’s tone of voice.
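The voice-analysis chain above (speech-to-text, then sentiment, then tone) can be sketched as a simple pipeline. Every stage here is a stub; in production each would be its own model or service, and all names, lexicons, and thresholds below are assumptions.

```python
# Sketch of a voice-analysis pipeline. Each stage is a stub standing
# in for a real model or service; values and names are illustrative.

def speech_to_text(audio: dict) -> str:
    """Stub transcriber: a real system would run speech recognition."""
    return audio.get("transcript", "")

def sentiment(text: str) -> str:
    """Stub sentiment step using a tiny illustrative lexicon."""
    negative = {"hate", "awful", "stupid"}
    words = text.lower().split()
    return "negative" if any(w in negative for w in words) else "neutral"

def tone(audio: dict) -> str:
    """Stub tone check: loudness as a crude proxy for shouting."""
    return "shouting" if audio.get("volume_db", 0) > 80 else "normal"

def analyse_voice(audio: dict) -> dict:
    text = speech_to_text(audio)
    return {"text": text, "sentiment": sentiment(text), "tone": tone(audio)}
```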
Image content moderation automation uses text classification alongside vision-based search techniques. If there happens to be text within the image, optical character recognition (OCR) allows the entire content piece to be moderated. Computer vision is a subcategory of AI that trains computers to comprehend and analyse the visual world in order to identify harmful images. The AI content moderation comprehends, tags, and, if needed, notifies the moderation team of any offensive or disturbing content.
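The image-moderation pipeline just described can be laid out as follows. Real deployments would call an actual OCR engine and a trained computer-vision model; here both are stubbed so that the control flow (extract text, classify pixels, combine the findings, notify if flagged) is the focus. All function bodies are illustrative assumptions.

```python
# Sketch of an image-moderation pipeline: OCR on any embedded text
# plus a vision check on the pixels, with results combined into one
# moderation decision. Both analysis stages are stubs.

def extract_text(image: dict) -> str:
    """Stub OCR step: a real system would run OCR on the pixels."""
    return image.get("embedded_text", "")

def vision_flags(image: dict) -> list:
    """Stub vision step: a real model would classify the pixels."""
    return image.get("labels", [])

def moderate_image(image: dict, banned_terms, banned_labels) -> dict:
    text = extract_text(image).lower()
    text_hits = [t for t in banned_terms if t in text]
    label_hits = [l for l in vision_flags(image) if l in banned_labels]
    return {
        "flagged": bool(text_hits or label_hits),  # notify moderators?
        "reasons": text_hits + label_hits,
    }
```

Note that a single piece of content can be flagged on either channel: clean pixels with banned embedded text, or harmless text on a harmful image.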
Video content moderation automation uses a mix of the previously discussed voice analysis, text, and image technology.
Content Moderation and Children
To ensure child safety online, Wranga takes on the far more critical content moderation task of analysing vast quantities of data, in the form of videos, movies, apps, and games, to determine what is appropriate for children. We have created a proprietary AI model that will soon be put into development so that it can be deployed for content moderation on various platforms. To enable this transition, we work with the digital parenting ecosystem and the following stakeholders:
This is such a mammoth task that we are always looking to collaborate and share with other individuals and organisations in the field of digital parenting. Many of these knowledge-sharing sessions will soon be available on the Wranga App, where you can find talks between our founder, Amitabh Kumar, and leading global parenting experts on Wranga Canvas.