Y8 Robot

Automation has become increasingly important in managing large media websites. In 2017, Y8 Games began its first experiment with more advanced automation that leveraged neural networks, an advancement that meant software could make decisions autonomously. The goal was to limit the exposure of damaging imagery to people whose job is to moderate user content. It was a success, despite many internal arguments. Over time, the AI moderator has made decisions on tens of thousands of images uploaded by players. However, it became apparent that the system inherited racial biases from the data used to train the neural network. With the death of George Floyd, it became clear that we needed to make real changes to ensure the system was making decisions without racial bias. Here is what we changed to make Y8’s AI systems more accountable:

  • New testing data to detect racially biased decision making (see the sketch after this list)
  • Increased scrutiny of the training data to prevent selection bias introduced by moderators
  • Thoughtful removal of data that may cause surprising decisions
  • Importing external data sources to limit the information bias of our limited data set
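
To illustrate the first point, here is a minimal sketch of the kind of audit that new testing data makes possible: comparing the AI’s approval rates across demographic groups in a labeled test set. The records, group labels, and the four-fifths cutoff are illustrative assumptions, not Y8’s actual test suite.

```python
from collections import defaultdict

# Hypothetical audit records: (model_decision, demographic_group).
# Decisions are True (approved) or False (rejected); the group labels
# are illustrative placeholders for the new testing data.
test_results = [
    (True, "group_a"), (False, "group_a"), (True, "group_a"),
    (True, "group_b"), (False, "group_b"), (False, "group_b"),
]

def approval_rates(results):
    """Share of images the model approved, per demographic group."""
    approved = defaultdict(int)
    total = defaultdict(int)
    for decision, group in results:
        total[group] += 1
        approved[group] += int(decision)
    return {group: approved[group] / total[group] for group in total}

rates = approval_rates(test_results)
# Flag the model when approval rates diverge sharply between groups.
# The 0.8 cutoff is the common "four-fifths" rule of thumb, not a
# threshold Y8 has published.
worst, best = min(rates.values()), max(rates.values())
if best > 0 and worst / best < 0.8:
    print(f"Possible racial bias detected: approval rates {rates}")
```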

Combining the above strategies, we were able to retrain the AI to make fairer decisions. For players, this means fewer frustrating interactions when uploading content: the AI still provides instant results when it is confident and offloads the remaining work to people when it is not sure, as sketched below. With this change, Y8 Games will continue to protect human workers from seeing disturbing content and provide players with fast and fair service. We invite you to join us in the ongoing fight to keep automation unbiased by using your voice when you see unjust actions.
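
Here is a minimal sketch of that routing logic, assuming a hypothetical route_upload function and confidence thresholds; the actual cutoffs Y8 uses are not public.

```python
from dataclasses import dataclass

@dataclass
class ModerationResult:
    decision: str      # "approve", "reject", or "human_review"
    confidence: float  # model confidence behind the decision

# Hypothetical thresholds, not Y8's published values.
APPROVE_THRESHOLD = 0.95
REJECT_THRESHOLD = 0.95

def route_upload(approve_score: float) -> ModerationResult:
    """Give an instant decision when the model is confident,
    otherwise queue the image for a human moderator."""
    reject_score = 1.0 - approve_score
    if approve_score >= APPROVE_THRESHOLD:
        return ModerationResult("approve", approve_score)
    if reject_score >= REJECT_THRESHOLD:
        return ModerationResult("reject", reject_score)
    return ModerationResult("human_review", max(approve_score, reject_score))

print(route_upload(0.99))  # confident -> instant approval for the player
print(route_upload(0.60))  # uncertain -> offloaded to a person
```

Routing this way keeps results instant for players in the common case, while the ambiguous images, exactly where a biased model does the most harm, are the ones a person reviews.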