Before you get too excited (or maybe insanely depressed as you imagine a toaster holding aloft the Magic World Championship trophy on its ejection lever), there are no plans to set the AI loose on playing these popular card games. At least not yet.
For now, the folks over at Oxford University are happy enough for DeepMind to analyse card data and transform it into code. Essentially, the task it's being set is one of translation, from human speak to machine speak, and while the cards have their own game “language” and structure, they can certainly throw some curveballs.
Here's how they explain it in their own words:
Many language generation tasks require the production of text conditioned on both structured and unstructured inputs. We present a novel neural network architecture which generates an output sequence conditioned on an arbitrary number of input functions. Crucially, our approach allows both the choice of conditioning context and the granularity of generation, for example characters or tokens, to be marginalised, thus permitting scalable and effective training. Using this framework, we address the problem of generating programming code from a mixed natural language and structured specification. We create two new data sets for this paradigm derived from the collectible trading card games Magic the Gathering and Hearthstone.
For example, elements such as a card's resource cost never really change in nature and are easily deciphered. However, a card's text might specify that the cost is increased or reduced based on another condition.
As you can imagine, writing a program that can analyse and account for these changes in card logic and translate them into arbitrary code is no trivial task. Rather than write the mother of all if/else statements, the researchers have turned to DeepMind instead.
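To see why the hand-written approach doesn't scale, here's a minimal sketch of what that "mother of all if/else statements" ends up looking like. The card fields and conditional effects below are hypothetical, not the researchers' actual data format — the point is that every new wording a card designer invents needs its own bespoke branch.

```python
def card_cost(card, game_state):
    """Resolve a card's effective cost, special-casing each conditional wording."""
    cost = card["base_cost"]
    # Hypothetical conditional effects, one branch per card-text pattern:
    if "discount_per_minion" in card:
        # e.g. "Costs (1) less for each minion you've played this game"
        cost -= card["discount_per_minion"] * game_state["minions_played"]
    if "discount_if_hero_damaged" in card:
        # e.g. "Costs (2) less if your hero took damage this turn"
        if game_state["hero_damaged"]:
            cost -= card["discount_if_hero_damaged"]
    # ...and so on, a new branch for every new wording ever printed.
    return max(cost, 0)

giant = {"base_cost": 8, "discount_per_minion": 1}
state = {"minions_played": 3, "hero_damaged": False}
print(card_cost(giant, state))  # 8 - 1*3 = 5
```

A neural model that has learned the "language" of card text sidesteps exactly this: instead of a programmer enumerating every conditional by hand, the model generalises from cards it has already seen.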
By giving it enough data — all eleventy billion or so Magic cards, say — the AI can learn the “language” of card text to produce more accurate results. Apparently, it does a decent job on Hearthstone, though it still stuffs up:
It handled the (relatively speaking) straightforward effect of the Madder Bomber fine, but the more specialised Preparation confused it. Which is fair enough — going by the number of times professional players have screwed up the cast order of Preparation, we can forgive DeepMind for getting it wrong.
The researchers mention that the reason “Madder Bomber” was handled correctly was that the model had “captured” the difference between it and a similar card:
The “Madder Bomber” card is generated correctly as there is a similar card “Mad Bomber” in the training set, which implements the same effect, except that it deals 3 damage instead of 6. Yet, it is a promising result that the model was able to capture this difference.
Yep, it’s getting the hang of it all right. At this stage I’d be worried about DeepMind getting addicted to Hearthstone and blowing all of Google’s cash on packs. That would be a lot of packs.
This story originally appeared on Kotaku Australia.