An especially deep cut from Krakovna’s database involves NERO, a 2000s research game in which players trained competing armies of robots whose neural-network brains evolved over the course of a match. In one match, the robots evolved a way to wiggle over the top of player-built walls by rapidly turning back and forth, exploiting a bug in the game’s engine. They had unintentionally discovered a way to break the game, showing both the shortcomings and the occasional ingenuity of machine learning AIs.

It’s this potential for self-experimentation that’s led DeepMind to invest so much in learning complex games like Blizzard’s StarCraft II. The partnership was revealed at BlizzCon 2016, when Blizzard and Google’s DeepMind announced they would be teaching an AI to play the real-time strategy game. The agent hasn’t yet faced top human players, but Blizzard announced at this year’s BlizzCon that it had so far managed to beat the game’s built-in AI on the hardest difficulty using advanced rushing strategies. DeepMind has already beaten some of the world’s best human players at Go, and taking on pros in a game with far more variables, like StarCraft II, will be the next test. Hopefully it doesn’t find a way to cheat.