AI and Games

18:14 Thu 26 Feb 2009

I recently came across this article about an AI program winning two Traveller competitions in the early 80s. (This was a naval space-combat simulation played with Traveller rules, “Trillion Credit Squadron”, not roleplaying.)

It’s an interesting read, partly because of advances since then—particularly in chess—and partly because of the lack of advances. Computers remain quite bad at some games, Magic: The Gathering (MTG) and Go being two examples. The former is more interesting to me, primarily because I know it a lot better.

A game like Trillion Credit Squadron seems like it would be perfect for computers to tackle. The vast majority of the encounter is determined before it happens, at the fleet-building stage, and apparently the rules for determining which fleet would win are highly mechanistic. I think that it’s also an open-information game, at least once it begins.

Given the profound advances in computer technology since 1982, one might expect that if a computer could solve a relatively mechanistic but accounting-heavy game like Trillion Credit Squadron then, computers should be able to solve most games humans enjoy now. But that’s clearly not the case. Computers seem significantly worse at limited-information games, for one thing, and for another seem much worse at games where the interactions are significant, deep, and potentially complex.

MTG has a pre-game preparation aspect not dissimilar to Trillion Credit Squadron, and technically there may be fewer combinations available for deckbuilding in MTG. Why hasn’t someone, therefore, created software to build ideal decks for a given format? There’s certainly more money in MTG than there was in Traveller in 1982. I think the reason is that deck encounters—that is, actual games—are determined by play skill as much as by deck, and computers suck at playing the game, because of the sheer number of decisions to be made. For human players, the decisions divide fairly clearly into trivial (which mana to use is normally a trivial decision) and significant (when to take aggressive risks). Software has no such ability to divide the decisions up, and hence the decision space becomes cripplingly large. In order to simulate how decks would actually do against each other, one would first have to have software that could play the game, and that seems to be the really difficult problem.
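A back-of-the-envelope sketch makes the scale of the problem concrete. The numbers below are made up purely for illustration (an assumed card pool of 10,000 and roughly 10 meaningful choices per turn), but the shape of the conclusion doesn't depend on them much:

```python
from math import comb

# Illustrative assumptions only, not real format data.
pool = 10_000    # distinct legal cards in a format (assumed)
slots = 60       # minimum MTG deck size

# Crude upper bound on distinct decks: 60 cards drawn from the pool
# with repetition allowed. This ignores the real four-copies-per-card
# rule, which only shrinks the count.
decks = comb(pool + slots - 1, slots)
print(f"deck space <= about 10^{len(str(decks)) - 1}")

# The play-side space is worse in a different way: with ~10 meaningful
# choices per turn over ~20 turns, a single matchup between two FIXED
# decks already has on the order of 10^20 possible lines of play.
lines_of_play = 10 ** 20
print(f"lines of play per matchup ~ 10^20")
```

The point is that evaluating even one candidate deck against one opponent means navigating an enormous game tree, so deck search is gated on play skill rather than on enumerating decks.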

I’m not sure what other games out there are like this (apart from other CCGs), but I’m curious.
