Activision rolls out AI tools to auto-flag in-game abuse for Call of Duty

  • Research shows that a majority of online video game players have received abuse
  • Activision has deployed a tool, ToxMod, to auto-detect and flag in-game abuse at scale
  • Flagged conversations are passed on to human moderators for further analysis

Activision has enlisted the aid of Modulate’s AI technology, known as ToxMod, to monitor conversations in the Call of Duty series.

According to a blog post, ToxMod is designed to perform real-time sentiment and intent analysis, detecting in-game abuse and flagging it for human moderation.

This means it could potentially identify not just overt hate speech and harassment but also subtler forms of misbehavior.

Up until now, ToxMod has mainly been deployed in smaller VR titles such as Among Us VR.

ToxMod flags potentially abusive in-game speech and passes it on for review by Activision’s human moderators.

If found to be in violation of Call of Duty’s official code of conduct – which disallows derogatory comments based on race, gender identity or expression, sexual orientation, age, culture, faith, mental or physical abilities, or country of origin – players could face penalties. 

These penalties range from a two-day suspension to lifetime bans for extreme or repeated offenses.

ToxMod is being beta-tested in Call of Duty: Modern Warfare II and Warzone. A full-scale launch is expected alongside the release of Modern Warfare III this coming November.

For now, only English-language chats are monitored, but Activision intends to extend language support in the future.

How does ToxMod work?

Traditionally, the burden has been on players to report toxic behavior. ToxMod changes this by proactively identifying and addressing harmful interactions. 

Modulate reports that 67% of multiplayer gamers say they would likely quit a game if they encountered toxic behavior, and that 83% of adult gamers across all demographics have faced toxicity online.

The AI technology is built to understand conversational nuances, from tone and emotion to intent. It can allegedly distinguish between friendly banter and genuine toxicity. After triaging and analyzing potentially harmful speech, it passes on the most likely cases of genuine abuse to human moderators. 
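The triage flow described above can be sketched in outline: score each utterance for likely toxicity, down-weight context that suggests friendly banter, and queue only the highest-scoring cases for human review. This is a minimal illustrative sketch only; the function names, the toy keyword lexicon, and the thresholds are assumptions for demonstration, not Modulate's actual model, which relies on machine-learned sentiment and intent analysis rather than word lists.

```python
# Hypothetical sketch of a proactive voice-chat triage pipeline.
# The scoring heuristic, lexicon, and thresholds are illustrative
# assumptions, not ToxMod's real implementation.
from dataclasses import dataclass

@dataclass
class Utterance:
    player: str
    text: str                # transcribed speech
    is_friendly_group: bool  # e.g. speakers queued together as a party

ABUSIVE_TERMS = {"idiot", "trash"}  # toy lexicon for the sketch

def toxicity_score(u: Utterance) -> float:
    """Crude stand-in for sentiment/intent analysis: keyword hits,
    down-weighted when context suggests banter among friends."""
    hits = sum(term in u.text.lower() for term in ABUSIVE_TERMS)
    score = min(1.0, hits * 0.6)
    if u.is_friendly_group:
        score *= 0.3  # banter in a friendly party is less likely abuse
    return score

def triage(utterances, threshold=0.5):
    """Pass on only utterances likely enough to be genuine abuse
    that they warrant review by a human moderator."""
    return [u for u in utterances if toxicity_score(u) >= threshold]

chat = [
    Utterance("A", "nice shot, you idiot :)", is_friendly_group=True),
    Utterance("B", "you absolute trash idiot", is_friendly_group=False),
]
flagged = triage(chat)
# Only player B's utterance crosses the threshold and reaches moderators.
```

The key design point mirrored here is that the system filters aggressively before any human sees a case: context lowers the score of likely banter, so moderators review only the strongest candidates for genuine abuse.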

According to Modulate, all data is anonymized and secured to ISO 27001 standards, and the company guarantees that the data will be neither sold nor rented.

Some gamers may argue that ToxMod jeopardizes free speech, and that speech protections should extend to digital environments, though moderation policy ultimately remains the prerogative of game developers and platforms.

If ToxMod does turn out to be overly draconian, the Call of Duty lobby could become quite a sterile environment.

© 2023 Intelliquence Ltd. All Rights Reserved.
