Considerations for a Space Opera Setting: Artificial Intelligence

23:48 Sun 16 Oct 2011

The effect of AI on a setting is similar to the effect of sentient alien beings, in that it helps to define the limits of “humanity”. By AI here I mean strong AI, the ability to create sentient machines, and particularly sentient machines of vastly greater intelligence than humans.

While it’s certainly possible to include AI created by non-human civilizations, that’s really the realm of “sentient aliens” rather than what I have in mind here, which is strong AI created by the human race. The interplay and tension between those two groups is critical to a lot of space opera, e.g. Iain M. Banks’ Culture series and Dan Simmons’ Hyperion Cantos—not to mention Battlestar Galactica and critical aspects of the background of the Dune setting.

Other classic space opera settings ignore it or treat it inconsistently: in the mainstream Star Wars works, for example, AI is present only in the form of individual sentient beings (e.g. C-3PO), with no exploration of what the ability to create sentient beings at that scale would mean for creating them at a much larger one.

The most compelling treatments are those that acknowledge the tremendous changes that would occur if we could create these intelligences, and the extent to which strong AI would come to dominate society. The previously-mentioned Culture and Hyperion works are among my favorites.

That being said, for this setting I’m inclined not to follow that path, because the relationship between AI and the race that created it isn’t something I want to explore. The advanced capabilities of such AIs make them seem like djinns or gods, incomprehensible forces operating at a much higher level of understanding than humans can manage. As fun as that is to play with, I don’t want this work to be about humans as lesser adjuncts to their creations, nor do I want the ultimate decision-makers to be less-fallible machines—I want it to be about humans, about flawed human decision-making, and about their struggles to outwit each other.

The question of whether strong AI is possible, and when it might arrive, is deeply controversial. For this setting, I have to come down on the conservative side if I assert that there’s been no huge shift after several thousand more years of work. In this setting, that’s what’s happened—despite colossal advances in processing power, it still hasn’t been possible to create sentient beings out of hardware. The setting doesn’t take a position that it’s ultimately impossible, just that it’s still too complicated for the technology available. It could be that strong AI is only a couple of decades away—as on contemporary Earth, where it’s been 20 years away for quite some time, and will remain so for the foreseeable future.

However, I find it difficult to believe that no advances have been made at all in related fields in that time, and in my setting artificially-created biological intelligence is possible. It’s not the godlike AI promised by digital processing, but rather breakthroughs in the interfaces between specially-bred lifeforms and digital devices that lead to the possibility of different, specialized intelligences that can be manufactured. These “bioAIs”, much less impressive than their digital counterparts but more effective due to not being entirely theoretical, take over many tasks where human decision-making is provably weaker—such as starship combat, where the vast distances involved magnify the importance of reaction time. Thus, combat ships (and many others) have bioAIs to run specific combat tasks, guided at a much slower level by human commanders. These bioAIs have sentience and language, but cannot override their human commanders except in extreme cases, and are kept narrowly focused to prevent them from trying to tackle problems outside the domains they were created to handle.

This allows for some interesting situations and personalities, while not opening the Pandora’s box of pure hardware strong AI.
