I, Asimov
by Ed Owens

In his essay The Robot Chronicles, Isaac Asimov traces the evolution of robots in fiction, beginning with The Iliad (referring to Hephaistus' (sic) assistants: “...a couple of maids...made of gold...”) and moving through Rabbi Loew's Golem (given life when its creator invokes the sacred name of God). The point is to illustrate how artificial humans have inevitably been tied to the powers of the Deity. In this schema, Frankenstein becomes the ultimate example of literature's tendency to couch such narratives as cautionary tales, moral fables that condemn man's attempts at playing God.

Certainly, Asimov found such religious foundations both superstitious and simplistic, ignoring more nuanced possibilities in favor of straightforward allegory. In Asimov's own words, “Only one robot-plot seemed available to the average author: the mechanical man that proved to be a menace, the creature that turned against its creator, the robot that became a threat to humanity. And almost all stories of this sort were heavily surcharged, either explicitly or implicitly, with the weary moral that ‘there are some things that mankind must never seek to learn'...My own viewpoint was that robots were story material, not as blasphemous imitations of life, but merely as advanced machines” (from the essay Robots I Have Known). Asimov is even more explicit in My Robots: “...I was determined not to make my robots symbols. They were not to be symbols of man's overweening arrogance. They were not to be examples of human ambitions trespassing on the domain of the Almighty.”

Asimov's objection is not to the notion of the robot as potential villain, but to the simplistic religious context in which the device had been used. In fact, Asimov had his own concerns about the potential uses of such machines: he opens his essay The Laws of Robotics with the observation, “It isn't easy to think about computers without wondering if they will ever ‘take over.'” In The Robot as Enemy, Asimov discusses the problems of using machines without adequate safeguards, pondering security robots with questions such as, “What would happen...when the chairman of the board found he had left his identification card in his other pants and was too upset to leave the building fast enough to suit the robot? Or what if a child wandered into the building without the proper clearance?” Asimov thus concluded that there would need to be safety factors, or safeguards, to protect humans: “The safety factors might be faulty or inadequate or might fail under unexpected types of stresses; but such failures could always yield experience that could be used to improve the models” (The Robot Chronicles).

It was these concerns that led to his introduction of the Three Laws of Robotics in Runaround. But even then, his mind was already occupied by the potential conflicts among the three laws (the story concerns a mining robot on Mercury trapped in an endless loop created by a conflict between the second and third laws, a loop that can only be broken when one of the miners puts himself directly in harm's way). In fact, two stories later, Asimov introduced his first robotic “villain” in Little Lost Robot. In the introduction to Robot Visions, Asimov says of the story, “My robots tend to be benign entities...Nevertheless, I had no intention of limiting myself to robots as saviors...The seamy side of robots...has been a constant concern of mine all through my career.” Perhaps the ultimate expression of the “unexpected stresses” coupled with the role of robots as “intelligent machines” comes in Robots and Empire, which introduces the question of the collective good versus harm to the individual (those who have seen the film I, Robot will recognize this logical conundrum in the reasoning of VIKI). Asimov even went so far as to dub this logical dilemma the “Zeroth Law of Robotics.”

The film's climactic “revolution” is neither malignant nor an example of creations turning on their creators. VIKI explicitly states that the first law, when carried to its logical extreme, all but requires that robotic intelligence assume control, thereby protecting humanity from its own inhumanity. As mentioned above, Asimov himself dealt with this very issue in both his fictional and non-fictional writings: the health and safety of the individual is secondary to the health and safety of the collective. This is very much in line with Asimov's views as previously outlined. While the exact meaning of the film's final moments is certainly open to interpretation, the final image suggests not the regression of man into a pre-robotic state, but the sort of learning experience that Asimov himself lays out.

Asimov's vision was not without bumps in the road, and it was certainly no idyllic Eden where humans and robots lived together in perfect harmony (an allusion I'm sure he would have hated). It was instead a vision of a world where mechanical malfunctions were not a divine punishment bestowed upon humanity for daring to build a technological Tower of Babel, not a sign that humanity had “trespassed on the domain of the Almighty,” but a natural step in the evolution of intelligence, both robotic and human, that would “yield experience that could be used to improve the models.” The climax of I, Robot is one of optimism and hope rather than fear, and that is Asimov at his core.

©2004 Ed Owens