However, at the moment, AI detection systems are notoriously fickle and can produce false positives, especially with non-native English speakers. Given variations in audio quality, regional accents, and the range of languages players speak, it's a tall order for a voice-detection system to work flawlessly under those conditions. Activision says a human will remain in the loop for enforcement actions:
Detection happens in real time, with the system categorizing and flagging toxic language based on the Call of Duty Code of Conduct as it is detected. Detected violations of the Code of Conduct may require additional reviews of associated recordings to identify context before enforcement is determined. Therefore, actions taken will not be instantaneous. As the system grows, our processes and response times will evolve.
Further, Activision says that Call of Duty's voice chat moderation system "only submits reports about toxic behavior, categorized by its type of behavior and a rated level of severity based on an evolving model." Human moderators then decide whether to take enforcement action on those flagged violations.
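To make that flow concrete, here is a minimal sketch of what a flag-then-human-review pipeline of this general shape might look like. It is purely illustrative: the class names, severity scale, and behavior categories are assumptions for the example, not details Activision has published.

```python
from dataclasses import dataclass
from enum import Enum


class Severity(Enum):
    # Hypothetical scale; Activision describes only "a rated level of
    # severity based on an evolving model" without publishing the scale.
    LOW = 1
    MEDIUM = 2
    HIGH = 3


@dataclass
class VoiceChatReport:
    """A flagged clip, categorized by behavior type and severity."""
    player_id: str
    behavior_type: str   # e.g. "harassment" -- categories here are illustrative
    severity: Severity
    recording_ref: str   # pointer to the associated recording for context review


def submit_report(report: VoiceChatReport, review_queue: list[VoiceChatReport]) -> None:
    # The automated system only files the report; it does not act on it.
    review_queue.append(report)


def human_review(report: VoiceChatReport) -> bool:
    # Placeholder: a human moderator listens to the recording, weighs
    # context, and decides whether any enforcement action is warranted.
    raise NotImplementedError("Enforcement decisions stay with human moderators")
```

The point of the structure is simply that the model's output is a report, not a punishment; nothing downstream happens until a person reviews the flagged recording.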
The new moderation system entered beta testing on Wednesday, covering North America initially and focusing on the existing games Call of Duty: Modern Warfare II and Call of Duty: Warzone. The full rollout of the moderation technology, excluding Asia, is planned to coincide with the launch of Modern Warfare III, beginning in English, with additional languages added over time.
Despite the potential drawbacks of false positives, there’s no way to opt out of AI listening in. As Activision’s FAQ says, “Players that do not wish to have their voice moderated can disable in-game voice chat in the settings menu.”