Androids in Alien and Prometheus

23:55 Sun 24 Jun 2012. Updated: 01:25 25 Jun 2012

(Spoilers for Alien and Prometheus, clearly.)

What is an android? The first Wiktionary definition is “a robot that is designed to look and act like a human (usually male)”. Looking like a human is the easier of the two components, particularly when not in motion, despite the potential difficulties in artificially replicating skin in a convincing manner. The real difficulty is acting like a human. Our stories are full of creatures (doppelgängers, aliens, golems[1]) that look like us but are not us, and this familiarity with the concept may mask how difficult accomplishing such a thing would be—an oversight that forms a core weakness in Prometheus.

In order to “act human”, some rather impressive artificial intelligence must be involved. We don’t even have chat routines that can regularly fool people in a text-only environment[2], never mind seem even vaguely human in face-to-face interaction. We’re enormously complicated beings, and creating facsimiles of ourselves is tremendously difficult—much more difficult than most science fiction seems to assume.

One of the reasons for this is that the idea of a “like us but not us” being is a fascinating one. Such a being, just by existing, brings into focus science fiction’s key question—“what does it mean to be human?”—and as such can be irresistible to storytellers, regardless of the actual difficulty involved.

Prometheus is set in 2093, and presents an extremely aggressive timeline for technological progress from now until then: if we match it, then at that point we’ll have faster-than-light travel, stasis sleep, and artificial intelligence/androids. My personal bet would be that we won’t have any of these by then (and two of the three might simply be impossible). The first two, however, are relatively simple things to add to stories. It’s not hard to see the parameters of each, even if the details aren’t well understood. It’s better if such technologies aren’t able to do much outside of their intended purpose, otherwise writing problems in the vein of Star Trek arise[3].

Androids, however, are different. It’s extremely difficult to grasp their parameters in any way, and further it would be similarly hard for the other characters to do so. Any of us, presented with an FTL drive or a stasis sleep chamber, would be able (with some questions) to accept them as technological tools and artifacts. It’s not at all clear that this would be true for an android. We would be dealing with a creature that we would naturally try to treat as if it were human, while being acutely aware that it’s not, and a large number of legitimate fears would come with that.

Apart from not knowing what its physical capabilities are, we would be extremely curious and nervous about its morality. We don’t understand morality that well, and none of the popular models to explain why people act morally (from “fear of God and/or an unpleasant afterlife” to “the genetics of primate tribal cooperation”) seem likely to convincingly apply to an android. Nor would we be convinced that an android would fear “death”, and hence have various brakes on its actions that we would regard as natural. We would regard it as inherently unpredictable—as, indeed, it would be to anyone except its programmers[4].

Given all of this, it’s highly problematic for a putatively “hard” science fiction film like Prometheus to place an android at the center of its plot.

The first problem is with how it shows the others relating to the android. For any of those interactions to be realistic, the characters must have a far greater familiarity with androids than the film suggests—and since David acts in ways that are utterly unforeseen by them, it’s clear that their assumptions are false, yet somehow they never seem to fear that this might be the case. The only way this could work is if the “David” model were in fact extremely common, predictable, and reliable, but this particular David were actually some kind of experimental model with vastly greater capacities—something that would need to be called out at some point in the movie[5].

The second problem is that without parameters known to the audience guiding android behavior and capabilities, the writers are free to use the android to make whatever plot maneuvers they like, without any real constraints. Which is indeed what happens in the film, as David acts in bizarre ways that seem to serve no purpose other than taking the movie where the writers want it to go.

Alien is set about 30 years later, in 2122, and also features an android whose actions are central to the plot. However, Ash is far more believable, partly because he’s not known to be an android by the rest of the crew[6], something that’s in some ways easier to pull off than being convincingly human when you’re known to be an android. Further, his actions have a very clear motive behind them: the company has instructed him to bring back an alien sample no matter what. There’s no question of his independently deciding on that course of action, no faux-Oedipal complexes somehow influencing his programming, no signs of mysterious ego development. He’s given an instruction, and proceeds to do what he can—because he has no choice but to obey the instruction, something that does seem quite plausible for an android.

Ash does a number of things that allow the alien access to the ship—things that are clearly not good decisions for the crew[7], and could be seen as “stupid”, but which fit with his actual motives. His actions, particularly in letting the three crew back aboard the ship, are critical to the alien’s journey. David does similar things, but from a different point of view: we know nothing about his character, no motives are ever made clear, and we have no clue why he does what he does, from opening the barricade to taking the black liquid to infecting Charlie, even though those things are totally critical to what happens later. He appears to know more than he reveals—just like Ash—but unlike with Ash, we’re left wondering about the meaning and motives behind his keeping that knowledge hidden.

[1] A functional golem seems like it would meet the criteria for classification as an android.

[2] Although some might point to the prevalence and apparent success of online chatbots promoting various sex-related services and disagree…

[3] E.g. “engage the FTL drive at some bizarre setting to project its effect at some distance, thus using it as a weapon”, “change the frequency on the flux capacitor to send us 10 minutes back in time”, or “put the bomb in the stasis chamber and then remove the chamber from the ship at leisure”, or various other solutions for the protagonists that the audience can’t really see coming and which the writers can lazily use by relying on the fact that we lack the ability to point out that they’re impossible in the setting, precisely because the parameters haven’t been laid out.

[4] And even they would have trouble, given the complexity necessarily present in its hardware and software.

[5] If this was the case, though, the very first sign of anything out of the ordinary should have prompted a lot of scrutiny from the humans.

[6] And presumably shows up on ship with fake papers and background provided by the company.

[7] Another contrast with David is that Ash does appear conflicted about what he’s doing—the “freakout” he has while trying to kill Ripley directly, and the fact that he clearly doesn’t take the most efficient steps towards killing the crew, suggest that at the least his orders to bring the alien home regardless of the fate of the crew are clashing with his other directives, and perhaps that he’s struggling as a sentient being with orders he must follow but does not want to. These struggles cause his failure, and his speech about admiring the alien should be seen in this light: since he has failed and been reduced to a head separated from its body, it makes sense that he would admire the alien and its single-mindedness at that particular moment. The “delusions of morality” he speaks of could be morality in general, or his own “delusions”—given that he’s not human and they’re programmed into him, they are delusions indeed—and in either case he could not help but see them as inefficient obstructions at that particular point.
