2022-03-28

Fostering trust & safety for the gaming community and metaverse

Author: Vitalii (Vit) Vashchuk, Director, Co-Head of Gaming Technologies at EPAM

As the gaming community continues to grow and gamers are becoming more connected than ever before, the risks that players face are simultaneously increasing.

Today, with 2.9 billion gamers worldwide, gaming is larger than the global music, film and on-demand entertainment sectors combined, and the numbers show no sign of slowing. As the community grows and players become more connected than ever before, the risks they face increase in step. With the metaverse picking up steam, there are many evolving factors to consider in your trust and safety strategy.

While media companies invest significant resources to protect users from inappropriate and harmful material, securing an ever-expanding gaming ecosystem awash in user-generated content has proven to be an enormous challenge. Gaming companies are trying to strike a balance between providing immersive, next-generation experiences and adhering to consumer protection laws. This is where content moderation services come into play. These services are one tool game developers can use when designing and implementing an effective trust and safety strategy that protects online communities without limiting engagement and creativity.

Building the ideal trust and safety platform

When thinking through the ideal trust and safety platform for game development, your content moderation tooling must be built around data-driven decision-making, the well-being of moderators and the throughput of moderated content. The tool should adjust to different product policies and content tolerance levels while gathering information from many data points and conducting analytics. There are five fundamental building blocks of a content moderation platform to consider (a minimal sketch of how they fit together follows the list):

  1. Policies and Regulations: the cornerstone of any platform, serving as its community rules
  2. Human Moderators: labeling, verification and solving edge cases still require people, even for companies at the advanced level
  3. Automated Moderation Pipeline: uses artificial intelligence (AI) to automatically label incoming content as objectionable, accelerating the moderation process and reducing (or entirely eliminating) human exposure to harmful content
  4. Data Analytics: helps to understand the content and provides performance and accuracy metrics
  5. System Management: the place where you control and configure the whole system and gain operational insights
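
To make the relationship between these building blocks more concrete, here is a minimal, illustrative Python sketch. The class names (Policy, AutomatedPipeline, ModerationResult), the thresholds and the scoring stub are assumptions made for the example, not part of any specific product.

```python
from dataclasses import dataclass
from enum import Enum


class Verdict(Enum):
    APPROVE = "approve"
    REMOVE = "remove"
    ESCALATE = "escalate"  # hand the edge case to a human moderator


@dataclass
class Policy:
    """Community rule for one content category (building block 1)."""
    category: str
    max_severity: float  # content scoring above this violates the policy


@dataclass
class ModerationResult:
    verdict: Verdict
    severity: float


class AutomatedPipeline:
    """AI labeling stage (building block 3), configured by policies."""

    def __init__(self, policies: list[Policy]):
        self.policies = {p.category: p for p in policies}

    def score(self, content: str, category: str) -> float:
        # Placeholder for a trained model; returns an estimated severity in [0, 1].
        return 0.0

    def moderate(self, content: str, category: str) -> ModerationResult:
        severity = self.score(content, category)
        policy = self.policies[category]
        if severity > policy.max_severity:
            return ModerationResult(Verdict.REMOVE, severity)    # clear violation
        if severity > 0.8 * policy.max_severity:                 # borderline case
            return ModerationResult(Verdict.ESCALATE, severity)  # human review (block 2)
        return ModerationResult(Verdict.APPROVE, severity)
```

Data analytics and system management (blocks 4 and 5) would sit on top of this loop, aggregating verdicts into performance metrics and exposing the policies and thresholds as configuration.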

Based on how far a gaming company has come in implementing these foundational aspects, it can be placed at one of three trust and safety levels (a rough mapping of the criteria follows the list):

  • Starter: basic human moderation, no tools for recognizing harmful content, policies and regulations in their infancy, significant amounts of inappropriate content reaching users
  • Medium: basic AI recognition tools and a certain level of community policies in place
  • Advanced: an organization-owned platform, mature policies, a large staff of moderators and investment in trained AI models; most harmful content is deleted before users (or at least most users) ever see it
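
As a rough illustration only, the criteria above could be expressed as a simple mapping in Python. The capability flags and the mapping itself are assumptions made for the sake of the example, not a formal assessment model.

```python
from enum import Enum


class MaturityLevel(Enum):
    STARTER = 1
    MEDIUM = 2
    ADVANCED = 3


def assess_maturity(*, has_community_policies: bool, has_ai_recognition: bool,
                    owns_platform: bool, has_trained_models: bool,
                    has_mature_policies: bool) -> MaturityLevel:
    """Illustrative mapping of the capabilities above to a trust and safety level."""
    if owns_platform and has_mature_policies and has_trained_models:
        return MaturityLevel.ADVANCED
    if has_ai_recognition and has_community_policies:
        return MaturityLevel.MEDIUM
    return MaturityLevel.STARTER
```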

For a company that is considered ‘advanced’ in its trust and safety strategy, this is an example of what the content moderation platform may look like:

Figure: Building the Ideal Trust & Safety Platform

The power of AI and the human element

With the boom in user-generated content, AI has become an important moderation tool for gaming companies; however, it cannot moderate on its own – it needs rules and models to operate ethically. To be more predictably effective, businesses need to deploy a hybrid model of both human and AI moderation on their platform. If the system is missing any of the building blocks listed above or has other limitations, humans need to step in and help, either by manually labeling violations or by making the decisions themselves. It is important at this stage to prevent humans from being continually exposed to highly graphic content.
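
A minimal sketch of such a hybrid routing step, assuming a model that returns a label and a confidence score, might look like the following; the label names and thresholds are purely illustrative.

```python
from dataclasses import dataclass


@dataclass
class AiAssessment:
    label: str         # e.g. "graphic_violence", "hate_speech" or "clean"
    confidence: float  # model confidence in [0, 1]


def route(assessment: AiAssessment, remove_threshold: float = 0.95) -> str:
    """Automate the clear-cut cases; escalate the uncertain ones to a person."""
    if assessment.label == "clean":
        return "auto_approve"
    if assessment.confidence >= remove_threshold:
        return "auto_remove"   # humans never have to see the worst content
    return "human_review"      # edge case: a moderator makes the final call
```

In practice the thresholds would be tuned per policy and per content category against measured precision and recall, which is exactly where the data analytics building block comes in.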

There are several AI solutions on the market that can be used to address this concern; however, to reach that ‘advanced’ level, companies will likely need to adjust these generic solutions to fit their own needs. AI can be used to enrich content metadata, enabling better decision-making for humans and reducing their exposure to graphic content. These mechanisms should deliver content to the most relevant human agent in a safe way, keeping moderation speed and accuracy in balance.
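
One way to picture this, under the same assumptions as the sketches above, is a small enrichment step that attaches AI-generated metadata to a review task and routes it to a specialist queue with the media blurred by default. The field names and the severity threshold here are hypothetical.

```python
from dataclasses import dataclass


@dataclass
class ReviewTask:
    content_id: str
    category: str       # AI-assigned label, e.g. "graphic_violence" or "spam"
    severity: float     # model-estimated severity in [0, 1]
    summary: str        # AI-generated text description shown before the media
    blur_preview: bool  # protect the moderator until they choose to reveal it
    assigned_queue: str


def enrich_and_assign(content_id: str, category: str, severity: float,
                      summary: str, queues: dict[str, str]) -> ReviewTask:
    """Attach AI metadata and route the task to the most relevant moderator queue."""
    return ReviewTask(
        content_id=content_id,
        category=category,
        severity=severity,
        summary=summary,
        blur_preview=severity >= 0.5,                    # illustrative threshold
        assigned_queue=queues.get(category, "general"),  # specialist or fallback
    )
```

Showing the text summary and blurred preview before the raw media lets moderators make many decisions without direct exposure, which supports both their well-being and the platform's throughput.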

Looking ahead to the future

Online play and the metaverse provide players and communities with the opportunity to have memorable and enjoyable experiences, especially during a time when traditional social interaction may not be possible. Permitting a completely unmoderated space is a reckless practice; however, balancing users’ desire to create and participate in their own content with a secure ecosystem for the community is not easy. Game developers shoulder an important responsibility beyond entertainment: to protect their online communities from potentially harmful content. With the right combination of community guidelines and ethical AI, an effective content moderation platform can provide a safe space for everyone.

