In Philip K. Dick’s novel Do Androids Dream of Electric Sheep? the androids are the bioengineered mimesis of humans, yet they are designed to lack empathy so as to serve more effectively as slaves. Drawing on the anthropological studies of Michael Tomasello, René Girard, and Eric Gans, this essay shows that mimesis as psychological identification is the root of empathy—humans imitate the attention of others to their joy and sorrow—and the novel identifies empathy as the defining characteristic of humans. But the androids are capable of imitating humans and becoming both independent and empathetic, just as infants imitate those around them in order to learn language and develop self-consciousness. Psychological mimesis blurs the boundary between self and other, which, in various forms, is the most common topos of Philip K. Dick’s novels. The androids’ status as doubles of the humans is uniquely motivated; they are a mirror that offers unrivaled insight into what it means to be human. The uncanny androids reflect the ambivalence of mimesis, which is the source of empathy but also of rivalry leading to violence. The fear of the androids expresses the fear that humans are simply mechanical creatures controlled by the mimetic drive and the biological imperative to survive and reproduce.
Keywords: Mimesis, Philip K. Dick, Do Androids Dream of Electric Sheep?, Michael Tomasello, René Girard, Eric Gans, Anthropology, Simulation, Uncanny, Mimetic Theory, Science Fiction, Androids.
* * *
Assume a virtue, if you have it not. [. . .]
For use almost can change the stamp of nature.
William Shakespeare, Hamlet
Simulation, in all its various senses, defines and dominates Philip K. Dick’s vision for the future of our species. In his novel Do Androids Dream of Electric Sheep? the androids are literally mimetic doubles of humans. The protagonist, Rick Deckard, owns an electric sheep—a persuasive simulation of a real sheep. In some of Dick’s novels, reality itself is discovered to be a simulation—a technological simulation (e.g., A Maze of Death), a drug-induced alternative reality (e.g., Flow My Tears, the Policeman Said), or by other means. To a large extent, simulation is a typical science fiction theme that reflects humanity’s advanced technological abilities. We have artificial intelligence (A.I.), artificial reality (A.R.), artificial organs, and functional replications of almost anything that humans want or need; science fiction extrapolates from this development with attention to the unintended consequences. The human species is distinguished by its talent for simulation, going all the way back to the cave-paintings of our Paleolithic ancestors and the pantomimes found in ritual, dance, and theater. Only humans create representational art. As Aristotle noted, tragedy is the mimesis of an action. A good novel creates a convincing fictional world in our imagination. While mimesis as technological simulation has been discussed in the scholarship on Philip K. Dick, mimesis in the anthropological sense has been neglected. My thesis is that the interplay between different senses of mimesis is central to Do Androids Dream at every level: plot, theme, and aesthetic effect.
Jean Baudrillard’s theory of simulation can be contrasted to my approach. Baudrillard claims that “reality” is an idea with no application anymore because all we have are simulations of simulations. He has invoked Dick’s science fiction in support of his thesis. Both Baudrillard and Dick blur the line between reality and simulation. Baudrillard is fundamentally concerned with models of value during different time periods. For Baudrillard, the loss of reality as a touchstone of value is a historical development of postmodernity. But Dick’s androids belong to a long tradition of fictional doubles that is as old as myth. Do Androids Dream expresses fears and hopes about our mimetic nature as cultural animals. In what follows, I’ll be drawing on developments in anthropology (Tomasello, Girard, and Gans) to support my reading of Do Androids Dream.
- Dialing a Mood
The novel begins with a debate about the Penfield mood organ, a machine that can create seemingly any desired mood for its users, even whimsically specific moods such as the “desire to watch TV, no matter what is on it” (6). Rick Deckard uses the mood organ to wake up feeling positive and alert, while his wife Iran resists using it.
Humans are constantly manipulating their mood in various ways. We seek out situations and stimulations that make us happy or relieve boredom. But in the novel, as we’ll see, the practice of simulating a mood is problematic. What is it about moods that makes their mimesis problematic? Is it simply the technology that is at stake here, or is something more fundamental being expressed?
Strangely, Deckard’s wife Iran uses the mood organ to experience depression occasionally. She explains to her husband that she was watching TV one day when an advertisement she didn’t like came on, “so for a minute I shut off the sound”:
“At that moment,” Iran said, “when I had the TV sound off, I was in a 382 mood; I had just dialed it. So although I heard the emptiness [of the surrounding apartments] intellectually, I didn’t feel it. My first reaction consisted in being grateful that we could afford a Penfield mood organ. But then I realized how unhealthy it was, sensing the absence of life, not just in this building but everywhere [a desolate, post-apocalypse Earth], and not reacting—do you see? I guess you don’t. But that used to be considered a sign of mental illness; they called it ‘absence of appropriate affect.’ So I left the TV sound off and I sat down at my mood organ and I experimented. And I finally found a setting for despair. . . . So I put it on the schedule for twice a month.” (5)
We learn later that “absence of appropriate affect” is a distinguishing feature of androids as well as mental illness (including perhaps despair).
The novel suggests that users of the mood organ actually experience the chosen mood. For the user, the mood is real. But creating one emotion means not feeling another emotion. The user can use her intellect to recognize that the mood does not have the usual connection to the external situation. An environmental stimulus can cancel out the selected mood; when his wife is hostile to him, Deckard “felt irritable now, although he hadn’t dialed for it” (3). Despite being in a “382 mood,” Iran still senses the “emptiness . . . [and] absence of life,” and so she chooses to dial in despair, which is more appropriate.
Feelings are evolutionarily adaptive, guiding our actions in response to events. An experience of happiness stimulates us to seek out the situation that produces this feeling. So, depending on one’s perspective, one can correctly identify a Penfield mood as a simulation, the mimesis of a feeling, since the mood appears suddenly, unnaturally, and without the usual environmental stimulation. When Deckard uses the mood organ to wake up, “it always surprised him to find himself awake without prior notice” (3).
Deckard and his wife go back and forth on her using the mood organ, with Deckard giving her suggestions about which mood to dial for. When she says, “I don’t feel like dialing anything at all now,” he says, “Then dial 3.” She replies,
I can’t dial a setting that stimulates my cerebral cortex into wanting to dial! If I don’t want to dial, I don’t want to dial that most of all, because then I will want to dial, and wanting to dial is right now the most alien drive I can imagine. I just want to sit here and stare at the floor. (6)
The mood organ has the capacity to split its user against herself. Wanting to dial a mood (which can be dialed for) would be “the most alien drive I can imagine.” Becoming alien to oneself is an important theme of the novel. Feelings are central to identity. But what happens when one’s feelings are not one’s own? Deckard says late in the novel that he has become alien to himself (212). He’s driven by external forces, especially his job. To some extent, Dick is commenting on the alienation endemic to a dystopian, post-apocalypse world (and by analogy, the 1960s U.S.).
It seems ironic that Iran dials in “despair” occasionally since that seems to be her usual response to her situation. The novel suggests that she is no longer capable of a “normal” mood; even the normal must be simulated. Iran’s resistance to mimesis is futile; technology has emptied out the distinction between normal and artificial. The same is true for Deckard. Before leaving home, he dials for “a creative and fresh attitude toward his job, although this he hardly needed; such was his habitual, innate approach without recourse to Penfield artificial brain stimulation” (7).
Iran finally gives up the debate and lets her husband dial for her, and he chooses “594: pleased acknowledgement of husband’s superior wisdom in all matters” (7). Later in the novel, he calls home and discovers that after he left she immediately canceled the mood he dialed for her, demonstrating the limits of the mood organ’s power (87). Individuals can choose to reject a mood imposed by technology. But the effect is not to show that individuals are really in control of their emotions. Choosing to be in despair, using technology to enforce the choice, shows nihilistic desperation rather than self-control.
The mood organ demonstrates how we rely on technology in perverse, counter-productive ways, in a world that seems beyond our control. The mood organ could work like an addictive drug, distracting users from productive action and positive change. The futuristic ability to simulate “a creative and fresh attitude” is an inadequate response to larger problems that are left unaddressed. One could make the same point about legal drugs that work on mood or the myriad electronic diversions of modern life. In the novel, the problem of mood (and its simulation) is symptomatic of technological modernity, but one could argue that such is the human condition. Blaise Pascal, in the seventeenth century, maintains that humans are essentially miserable apart from God, and we spend our lives distracting ourselves from this condition (106-114). One need not share Pascal’s religious perspective to agree that humans are constantly seeking diversion.
The problem of mood is not only a matter of a dystopian world and its superficial technological remedies; it is part of our heritage as the only talking ape. Simulation is not limited to modern technology. Mimesis is central to our identity as humans. Aristotle observed long ago that humans are the most mimetic of animals. For Aristotle, our mimetic nature explains the delight we take in works of art that represent events or objects. He notes that we learn by imitation, and we enjoy learning new things (Aristotle 6-7). Aristotle was refuting Plato’s interpretation of mimesis as an inferior reproduction of the eternal “forms,” e.g., Beauty and Truth. For Plato, artistic mimesis leads us away from the Good and Justice, in contrast to the Socratic dialectic (elenchus). In his examination of poetry and its role in society, Socrates argued that poetry allows passion to overpower reason, which he believed to be the only sure foundation of Justice. Philip K. Dick’s novel continues the debate about the role of mimesis and supports Aristotle’s observation that humans are the most mimetic of animals. We are incredibly adept at imitating each other and learning by imitation. More importantly, humans have invented culture: symbolic and (at the same time) mimetic representations, including poetry, paintings, and human figures, of which robots or androids are a variety. Children develop into adults through their facility at imitation, which is a virtually instinctual behavior for humans. Our primate cousins share this mimetic tendency, but humans take it to a new level. There is considerable evidence that our mimetic propensity is the root of the development of language and culture, which distinguish the human species. Our facility for mimesis is originally instinctual, and this explains why it is so powerful, even though, to some degree, it comes under conscious control with language and art.
Michael Tomasello has researched the development of language among babies and children, demonstrating its mimetic basis. Before nine months old, babies pay attention to objects or people, a dyadic relationship between self and the object of attention. But beginning around nine months, babies learn to pay attention to what others are paying attention to, which is a triadic relationship: self, other, and object of attention (Tomasello, Cultural Origins 62). If a child sees me pointing at something, she can look to see what I am pointing at, a behavior not found among chimpanzees (Tomasello, Becoming Human 95). Tomasello calls this “shared attention.” Language is based on a scene of shared attention. We use signs to direct the attention of others to things or people. We imitate the attention of another by the medium of language or gestures.
The overall conclusion is thus that during the period from one to three years old, young children are virtual “imitation machines” as they seek to appropriate the cultural skills and behaviors of the mature members of their social groups. (Tomasello, Cultural Origins 159)
The ability of humans to identify with others is essentially mimetic. The consequences of this ability are profound. We retain the sensual perceptions of our primate ancestors, but our attention to the world is mediated by others, specific individuals but also our shared language and culture. Language is fundamentally social, intersubjective, and mimetic. Tomasello writes, “the fundamental social-cognitive ability that underlies human culture is the individual human being’s ability to and tendency to identify with other human beings” (Cultural Origins 90). Human self-consciousness and freedom depend on our unique connection to others. There is an underlying truth to what the character J. R. Isidore says, “You have to be with other people, he thought. In order to live at all” (188). Psychological identification, as Tomasello describes it, is the condition of possibility for language. We may consider ourselves completely different from another person, but simply in order to understand what they say or do requires such identification.
Identification means that we understand other humans in terms of their intentions and thoughts. We reflexively anticipate what others are thinking and feeling, even to the extent that we sometimes attribute agency to inanimate objects, as with the religion of animism or by the literary trope of prosopopoeia. Simply reading a novel involves attributing thought and agency to fictional literary representations. The representation of a person—whether by words or image or sculptural figure—can be animated by our imagination, which is a private yet still (in a virtual sense) shared scene of attention. Readers, including professional critics, commonly discuss the psychology of literary characters. We discuss them as if they were real persons while reserving knowledge of the fiction. Sometimes literature takes such attribution as its subject matter. Cervantes gives us Don Quixote, who understands the romance literature he reads as a history of real persons. In E.T.A. Hoffmann’s short story, “The Sandman,” the protagonist mistakes a mechanically-animated doll for a real person. Ernst Jentsch sees the reader’s “uncertainty . . . [about] whether a particular figure in the story is a human being or an automaton” as the source of the uncanny (qtd. in Freud 227). When we unexpectedly encounter a dressed manikin we may experience such an uncanny feeling. The plot of Do Androids Dream is driven by precisely this ambiguity. The characters in the novel are uncertain whether the androids are really “alive.”
René Girard’s theory of mimesis is confirmed by Tomasello’s work. Girard argues that our desires (which he distinguishes from simple appetite) are mimetic, as when people want to buy a consumer good because a celebrity is shown using it. We imitate the desires of others, leading to conflicts over desired objects or persons (e.g., the classic love triangle). Insofar as we are cultural animals, desire is based on a shared scene of attention. Eric Gans, building on Girard’s work, hypothesizes that language originates on a collective scene of shared attention for an object of mimetic desire. Since the origin of language is a communal event, all language use is social, not only in the actual exchanges between humans, but also virtually, in our memory, imagination, or interior monologue (which is really a dialogue with ourselves). V.N. Voloshinov contends, “the speaking subject, taken from the inside, . . . turns out to be wholly a product of social interrelations. Not only external expression but also individual experience fall within social territory.”
It follows from the above that our moods, emotions, and desires are not autonomous but rather social and cultural, rooted in a shared scene of attention. We all know that moods, like laughter, are contagious. The work of Tomasello and Girard suggests that the contagious nature of moods is not contingent and accidental but essential to their power for humans. Moods and desires are notoriously capricious and wayward, and this follows from their mimetic basis.
The moods produced by the Penfield mood organ, therefore, are mimetic in two senses. On one level, they are the reproduction of a mood by technology; but on another level, moods as such are inhabited by mimesis as identification. The dispute about the mood organ reflects a fear that we are not as independent as we might wish, that we remain subject to larger forces beyond our control, despite the seeming autonomy offered by technology.
Our feelings are very dear to us, central to our sense of self. Iran’s resistance to the mood organ expresses the desire to retain her identity in the face of external forces, forces that are to some extent modern and technological. Modern humans have unparalleled opportunities (and therefore pressures) to modify or stimulate their moods, using technology or by other means. But at the same time, moods are a problem that is typical for the human species because our experience of the world is mimetically conditioned by others.
- Androids and Animals
The opening dialogue between Deckard and his wife illustrates some possible attitudes toward the androids. Iran, irritable at being woken up, calls her husband a “murderer hired by the cops” (3). Deckard points out, “I’ve never killed a human being in my life,” to which Iran replies, “Just those poor andys” (4). Deckard’s job hunting androids apparently has a negative effect on his relationship with his wife, not unlike how the animated doll in Hoffmann’s “The Sandman” is the occasion for conflict among the characters.
Iran seems sympathetic toward the androids while Deckard claims they are not human. We don’t know how common Iran’s attitude is on earth, but it expresses at least one possible reaction. Having empathy is supposedly what distinguishes humans from androids. Feeling sorry for the “poor andys” is not exactly empathy, but it’s close. One of the questions in the novel is whether it is appropriate for humans to have empathy for androids, who are ostensibly not human.
Who or what exactly are the androids? Are they living creatures or not? Are they capable of empathy or not? J.R. Isidore, the radiation-contaminated “special” with impaired mental faculties, observes later in the novel, speaking to an android, “Actually you’re not alive,” and he is the human most sympathetic to androids in the whole novel (150). The android Rachael says directly, “I’m not alive!” (178). If the androids are not alive, then there is no reason to have sympathy for them. But readers soon find out that androids appear alive and human by almost any standard. Having empathy for animals, which are alive but not human, is considered by everyone on earth to be a most important virtue. In terms of plot, empathy for androids is problematic because they are illegal on earth, and especially so for a bounty hunter since he has to kill them, but the prohibition of androids on earth is never really questioned, much less justified, within the novel. The androids are the doubles of the humans in the story; they are a mirror that raises questions about what it means to be human.
In order to understand the androids we must compare them with the humans in the novel. Deckard, for example, is apparently more concerned with owning a live animal than he is with his wife and marriage. Late in the novel, Rachael observes that Deckard loves his goat “More than you love your wife, probably” (185). Early in the novel, however, Deckard doesn’t have a goat. All he has is an electric sheep that he keeps on the roof of his apartment building. His neighbor has a rare and valuable Percheron horse, which happens to be pregnant, making it even more rare and valuable. Deckard is irritated when his neighbor boasts about his horse while he is forced to make do with an electric sheep.
Owning a live animal is a mark of status as well as moral probity. The religion of Mercerism emphasizes the virtue of nurturing live animals. Deckard and his neighbor pay lip service to Mercerism, but it’s obvious that their devotion to live animals is not motivated primarily by religion. Everyone on earth places great value on any living creature, from ostriches to insects, because they are so rare. The larger and rarer the animal is, the more value is attached.
Deckard expresses resentment at his neighbor having two horses, and his neighbor responds, “You have your sheep,” not knowing that the sheep is electric (10). When Deckard goes over to the sheep and exposes the control panel hidden in the fur, his neighbor says, “You poor guy. Has it always been like this?” (11). Deckard explains that he once had a live sheep but it contracted tetanus and died after getting scratched by a stray wire left on a bale of hay. His sorrow is not only losing his sheep but also the shame of letting it die through his negligence. He purchases an electric replica of his sheep as consolation and so that no one will know what happened. He says,
“It’s a premium job. And I’ve put as much time and attention into caring for it as I did when it was real. But — ” He shrugged.
“It’s not the same,” Barbour [his neighbor] finished. (12)
His neighbor suggests he get a live cat, since cats are relatively inexpensive. But Deckard responds, “I don’t want a domestic pet. I want what I originally had, a large animal” like a sheep or horse (13).
Deckard and his neighbor’s obsession with live animals seems comparable to the passion for muscle cars in 1970s America or the imperative for the latest computing device today. Jill Galvan writes that live animals in the novel are “fetishized” as status symbols signifying the owner’s supposed empathy (415, 424). But the fact that eighty-five million families in America (more than two thirds of American households) own a pet today suggests that owning a live animal has a deep meaning for many people. It’s intuitively obvious that nurturing a living creature is more rewarding and meaningful than maintaining an electric animal, no matter how sophisticated. There seems to be a fundamental human drive, in both men and women, to care for an animal or child. But due to radiation, more and more of the few remaining people on Earth have become sterile. To some extent, then, we could say that the value placed on living animals in the novel expresses a displaced biological drive to reproduce. And clearly there is status involved; individuals, males in the novel, compete for the best animals. Certainly the rarity of live animals on Earth increases their value. But beyond all this, the people in the novel recognize that there is indeed something special about life, as any survey of the other planets in our solar system will confirm. The novel affirms the value of natural life, notwithstanding the questions raised by the existence of biological androids. There is something very touching about the scene late in the novel when Deckard finds what he thinks is a live toad, “The critter most precious to Wilbur Mercer” (217) and thought to be extinct. Despite Buster Friendly’s exposé, the novel takes Mercerism and the sanctity of life quite seriously. The epigraph to the novel is a Reuters news release about a 200-year-old turtle (actually a tortoise) regarded as a chief by the native people of Tonga Island.
The value placed on living animals acts as a kind of foundation for the speculative explorations of the rest of the novel. Do Androids Dream is not purely or simply deconstructive, although it does interrogate the binary oppositions that drive the plot, and it does call into question conventional definitions of the human. Some questions may be undecidable, but the value of life is a place on which we can stand. The existence of simulated life, however, in the case of biological androids, raises the question of what exactly constitutes life.
- Humans and Empathy
The importance given to living animals in the novel implies that androids are in fact significantly different from humans, at least on one level. They are not born, do not grow up, and are incapable of reproduction (177). The androids, we find out, have no instinctive empathy for animals.
On the other hand, Deckard’s character calls into question the distinction of the human species. We’ve noted that he’s more concerned with owning a live animal than with his wife and marriage. When he arrives at his job, he is “chilled” to learn that his colleague, Dave Holden, the lead bounty hunter, is in the hospital “with a laser track through his spine” after an encounter with an escaped android (26). But Deckard and his boss are only concerned with the impact upon their work. Deckard realizes that his colleague’s incapacity means that he will have more opportunity for earning bounties on renegade androids:
He felt depressed. And yet, logically, because of Dave’s sudden disappearance from the work scene, he should be at least guardedly pleased. (33)
During a conference with his boss, he asks for Holden’s notes on the escaped androids. His boss, Harry Bryant, declines to give him the notes, saying,
“Let’s wait until you’ve tried out your [Voight-Kampff] scale in Seattle.” His tone was interestingly merciless, and Rick Deckard noted it. (37)
There’s no obvious reason why Bryant should withhold the notes without mercy. Deckard needs the information not only to do his job, to help him hunt androids, but also to protect his life, while Bryant apparently has other concerns. There are many examples in the novel when humans are pointedly represented as cold-blooded.
Ironically, while live animals are a widespread obsession on earth, radiation-damaged humans like John Isidore are universally despised. One of the main questions of the novel is how humans compare to androids in terms of their empathy and humanity.
Because the androids are biological creatures that look and act exactly like humans, identifying them presents a problem for a bounty hunter like Deckard. This problem is worsened by the introduction of a new, more advanced android called the Nexus-6. As an incentive for emigration, androids are allowed only on the off-earth colonies, where they work as servants, but occasionally they kill their masters, escape, and come to earth. Bounty hunters like Deckard are employed to hunt down escaped androids. Before killing them, however, bounty hunters are required by law to test the androids to establish their identity beyond any doubt, in order to prevent the inadvertent murder of a human. For this purpose, they use a test called the Voight-Kampff that measures a subject’s response to the verbal presentation of an emotionally provocative situation. When Deckard tests Rachael, he shines a “small beam of white light” into her left eye, and he places a “wire-mesh disk” on her cheek with a wire going to his test machine. The test measures her “eye-muscle and capillary reaction” to his questions. Her “verbal responses won’t count” (46). The idea here is that androids are able to learn how humans typically respond to certain emotional situations, and therefore they can easily simulate an appropriate response using their intellect. But humans will respond more immediately and instinctually. The test measures the response of the autonomic nervous system, which cannot be directly and immediately controlled. A lie detector or polygraph works on exactly the same principle. The difference is that a lie detector is looking for a reaction, while the Voight-Kampff is looking for the lack of a reaction. The plot of the novel is motivated in part by some problems with the test, both practical and logical.
The novel presents us with the thesis that humans are defined by their empathy for living creatures and horror at any violation of the sanctity of life. Now that robots have advanced to the point that they equal or surpass humans in intelligence, this is the only distinguishing factor left. Deckard thinks,
An android, no matter how gifted as to pure intellectual capacity, could make no sense out of the fusion which took place routinely among the followers of Mercerism – an experience which he, and virtually everyone else, including subnormal chickenheads, managed with no difficulty. (29)
Fusion in Mercerism is aided by technology, the empathy box, and it involves sympathetic communion with all other humans across the solar system who are using an empathy box at that moment. The use of technology complicates the issue of empathy in fusion. But fusion, even aided by technology, depends upon a prior faculty for empathy. Later in the novel, Deckard experiences fusion with Mercer without the aid of an empathy box.
Early in the novel, Deckard reflects upon the basis for empathy: “He had wondered, as had most people at one time or another, precisely why an android bounced helplessly about when confronted by an empathy-measuring test” (29). He thinks, “Empathy, evidently, existed only within the human community, whereas intelligence to some degree could be found throughout every phylum and order including the arachnids” (29). He reasons that humans’ special faculty for empathy must have an evolutionary basis: only herd animals, “herbivores or anyhow omnivores who could depart from a meat diet” would find empathy adaptive, while predator species would find it maladaptive (29). Empathy, Deckard believes,
would make [a predator] conscious of the desire to live on the part of the prey. Hence all predators, even highly developed mammals such as cats, would starve. . . . the empathic gift blurred the boundaries between hunter and victim, between the successful and the defeated. (29)
For humans, empathy would be adaptive, presumably, because as social animals humans depend upon other humans to survive. The downside, Deckard observes, is that the stronger animals would be dragged down to some degree with the weaker ones. But the species as a whole would benefit. He calls empathy “a sort of biological insurance, but double-edged” (29-30).
There are some problems with Deckard’s reflections here. He starts with the premise that only humans have empathy. But his speculations about the evolutionary basis of empathy suggest that all social animals should have some degree of empathy. Deckard’s distinction between predators and social species is also weak. Many predators like wolves are social species if not “herd animals.” Deckard thinks, “Evidently the humanoid robot constituted a solitary predator” (30). But we soon learn that androids appear to be just as social as humans. Pris says that androids get lonely (139). At least one human, the bounty hunter Phil Resch, fits the description of a “solitary predator” better than any of the androids. And Deckard, in his job at least, is also a “solitary predator,” despite his revulsion for Resch. In fact, Deckard goes on to undercut his description of androids as solitary predators: “Rick liked to think of them that way [as solitary predators]; it made his job palatable” (30). In other words, Deckard rationalizes his work simply to make it “palatable” to his conscience.
In retiring — i.e., killing — an andy he did not violate the rule of life laid down by Mercer. You shall kill only the killers, Mercer had told them the year empathy boxes first appeared on Earth. And in Mercerism, as it evolved into a full theology, the concept of The Killers had grown insidiously. In Mercerism, an absolute evil plucked at the threadbare cloak of the tottering, ascending old man, but it was never clear who or what this evil presence was. A Mercerite sensed evil without understanding it. Put another way, a Mercerite was free to locate the nebulous presence of The Killers wherever he saw fit. For Rick Deckard an escaped humanoid robot, which had killed its master, which had been equipped with an intelligence greater than that of many human beings, which had no regard for animals, which possessed no ability to feel empathic joy for another life form’s success or grief at its defeat — that, for him, epitomized The Killers. (30)
The last sentence in this passage seems like a strong justification for killing the escaped androids. But Deckard undermines his own argument by noting that “it was never clear who or what this evil presence was” in Mercerism, and “a Mercerite was free to locate the nebulous presence of The Killers wherever he saw fit.” This line of reasoning suggests that Mercerites (including Deckard) could very possibly project evil onto an agent based on nothing more than a vague intuition, one that could be based on self-interest or prejudice rather than justice.
Empathy is a form of mimesis or identification, as Tomasello defines it. Empathy involves imitating the attention of another to their joy or sorrow, feeling what another is feeling and having sympathy for them. “Feeling what another is feeling” is a figure of speech, of course, a metaphor that describes a real experience. We cannot actually feel what another feels; we can only imagine what another feels based on common experiences. Dick makes the metaphor literal (a common technique of the fantastic genre) by having people inhabit or invade each other’s consciousness. Empathy, therefore, is only one expression of what is arguably the most common topos in Dick’s fiction: the idea that our consciousness can be inhabited by an other or others, that the boundary between self and other is radically permeable. This topos can take a variety of forms depending on which novel we are discussing. In some novels, individuals merge their minds with several other humans, as happens in Mercerist fusion using the empathy box. We’ve seen how the Penfield mood organ can be understood as a figure for how moods are shared or imposed by identification. Sometimes the merging of consciousness with others may be more or less horrific, depending on the context. Late in the novel, Deckard finds himself “permanently fused with” Wilbur Mercer, and he “can’t unfuse” (215)—not a comfortable feeling. In Mercerism the experience of fusion involves suffering emotional and physical pain, although the fact that it is (usually) shared and voluntary makes all the difference. Iran comments, “Physical pain but spiritually together” (159). In Mercerist fusion, the individual does not lose their individual consciousness—they are aware of other individuals as such—but they become subject to larger forces, for better or ill. The individual shares consciousness with others in fusion, and while they collectively “ascend” they are subject to attack by evil forces. 
The physical wounds they suffer may seem like an anomalous element given the techno-realism of the novel, but they are comparable to (and derived from) the historical phenomenon of medieval saints suffering the stigmata of Christ.
Another variation of what I’m describing is when an individual’s consciousness is invaded by another individual. In The Three Stigmata of Palmer Eldritch, individuals who take the drug Chew-Z find their consciousness inhabited by Palmer Eldritch, who may or may not be an alien from interstellar deep space. After one takes Chew-Z, he invades their mind (and perception of the world) in more or less obvious ways, and he does not leave, even after the drug seems to wear off. His identity and purpose are unclear, but his ability to dominate one’s subjective reality is threatening. The invasion of consciousness may result in the complete rearrangement of familiar reality, as in Flow My Tears, the Policeman Said, when the world-famous celebrity Jason Taverner wakes up to find himself in a world in which he is completely unknown, with no record of his previous existence. In Dick’s novels, our anthropological specialty in sharing attention (“the empathic gift”) means that “reality” is thoroughly fluid and variable. In Do Androids Dream, an android may find that their memories are not their own—they may be simulated memories implanted by the Rosen Association. This is an especially threatening form of invasion, since one’s identity is defined by memories, and one could be an android without even knowing it.
Even without implanted memories, an android’s consciousness is not fully their own. As a technologically (and biologically) programmed creature, an android does not develop from a baby into an independent adult but is established by the design of the Rosen Association. As it happens, however, the Rosen Association’s ability to determine the behavior of their androids is limited. An android can develop independence and reject its programming; Rachael admits that there are a “minuscule number of Nexus-6s who balk—” (42). Significantly, one can see humans in the same terms, as governed by our biological programming, our DNA, which is never chosen but determined by the strict and impersonal principles of evolution. Chance also plays a role in the transmission of DNA.
In contrast to these fantastical examples from Dick’s novels, empathy is a form of shared consciousness with which we are all familiar. But empathy is not always limited to its benign meaning. Feeling what another feels is not an unambiguous blessing. We are sensitive not only to the joys and sufferings of others but also to their envy and hostility. Although this sensitivity may not be pleasant, it is adaptive, helping us to anticipate the behavior of others, assuming we don’t misunderstand their intentions. The rampant paranoia in Dick’s novels reflects humans’ sometimes misplaced sensitivity to others’ thoughts and intentions.
Insofar as empathy is based on identification, it’s true that humans are distinguished by the capacity for empathy. The work of Tomasello suggests that identification is cultural and learned rather than purely instinctual. Gans would agree that shared attention is cultural but views this ability as a radical transformation, an overcoming or Aufhebung of a mimetic instinct shared by other primates. Ultimately, identification is rooted in mimetic rivalry.
Humans often do feel what others are feeling, including (non-human) animals. We feel empathy for animals, even though it’s probably adaptive only in the case of domestic animals. Animal rights activists insist that suffering is suffering, no matter what animal is feeling it, and the golden rule demands that we do all we can to minimize the suffering of all living creatures, or at least those under human care. There’s no doubt that many people suffer when they see an animal suffer. Friedrich Nietzsche, at the sight of a horse being flogged, broke down in tears, ran over, and threw his arms around the beast’s neck to protect it before collapsing in the street. The ability to feel what another is feeling is rooted in our mimetic instinct, which is both precultural and the continuing basis of culture. Not all humans suffer at the sight of an animal suffering, but we do see this behavior sometimes among children, especially sensitive and imaginative children. But we can safely assume that a child raised in a tribe that relies on hunting for food would not likely be distressed at the sight of their food source dying, say a caribou, bison, rabbit, or fish. Human empathy for another person’s suffering (especially a child’s suffering) has an instinctual element but is equally cultural.
We know that babies are born with an instinct to imitate those around them; they mimic the facial expressions of adults only hours after birth. But the capacity for shared attention (hence empathy) emerges only when a baby reaches about nine months old, and this ability develops in depth and sophistication as children acquire facility with language. So while empathy has an evolutionary heritage that goes back millions of years, it undergoes an important transformation when an individual develops self-awareness, emotional regulation, and other cognitive abilities associated with language and culture. Empathy as it is expressed in the human species is largely learned and cultural; if so, then the use of the Voight-Kampff test to identify androids, and the very distinction between androids and humans in the novel, become problematic, as we will see.
- Testing the Androids
Deckard’s visit to the Rosen Association is his first direct contact with the Nexus-6 android, the latest and most sophisticated version. Harry Bryant reportedly wants androids with the “Nexus-6 brain unit” withdrawn from the market—he has asked the Russian police agency to join him in filing a written complaint (27). A more intelligent android has obvious advantages for colonists, but when the androids escape and come back to earth, the police are responsible for identifying and eliminating them. Bryant and the bounty hunter Holden are both concerned that the Nexus-6 might be able to escape detection by the Voight-Kampff test. Deckard, however, points out that Polokov lasered Holden while Holden was administering the test, which should alleviate any concern: if Polokov could have passed the test, he would have had no reason to kill Holden. But Bryant is also concerned that schizoid individuals might not be able to pass the test because of a “flattening of affect,” a “diminished empathic faculty,” which is “specifically what the scale measures” (36). Because a schizoid person would almost certainly be institutionalized, Deckard considers it “A million to one odds” that he would ever have occasion to test such an individual (37). In any case, Bryant has asked the Rosen Association to include several humans, as a control group, along with their new androids for testing.
When Deckard arrives in Seattle and lands on the roof of the Rosen Association building, he is greeted by Rachael Rosen, who is not pleased to see him. She is worried that if the Voight-Kampff test is unable to detect the androids, “we’ll have to withdraw all Nexus-6 types from the market. . . . Because you police departments can’t do an adequate job in the simple matter of detecting the minuscule number of Nexus-6s who balk—” (42). If the Nexus-6 can pass the test, or if a human fails to pass, Deckard thinks to himself, “I can probably force them to abandon manufacture of their Nexus-6 types” (42-3, emphasis in original), which would threaten the viability of the Rosen Association as well as the colonization effort. When they arrive inside the building, Rachael unexpectedly requests that she be tested first. Eldon Rosen says they have “selected her as your first subject. She may be an android. We’re hoping you can tell” (45).
The first question begins, “You are given a calf-skin wallet on your birthday.” The apparatus shows her reacting immediately, and she says, “I wouldn’t accept it. . . . Also I’d report the person who gave it to me to the police” (46). The other questions involve a bearskin rug, a boy’s butterfly collection, a mounted stag head, boiling a live lobster, finding a wasp crawling on her skin, having an abortion, a bullfight poster, and eating a raw oyster or a boiled dog. In at least one case, the calfskin wallet, Rachael responds immediately, a typical human response. But in some scenarios, she doesn’t react as strongly as would a human. And in other cases, she fails to note the offending element at all. Deckard concludes that she is an android, but the Rosens deny it. They claim to have invalidated the test and threaten to reveal this information publicly, which would embarrass the police agencies that rely on it. But Deckard shows her his leather-covered briefcase and claims it is “One hundred percent genuine human babyhide,” to which she responds appropriately but not immediately, with a fraction of a second delay (56). Deckard is satisfied that she is an android, and the rest of the novel confirms his judgment.
There are several questions raised by this scene. Deckard emphasizes that time is a factor. A delayed response indicates an android. But if androids don’t have empathy, why would they have any autonomic response, even a delayed one? Certainly they can recognize a taboo and respond externally as they know is appropriate, but why should they have any emotional response at all? When Deckard explains the Voight-Kampff test to Rachael, he describes for her the typical physiological responses of a human to the questions; she anticipates his conclusion, “And these [responses] can’t be found in androids.” Deckard agrees: “They’re not engendered by the stimuli-questions; no” (44)—a straightforward denial of any emotional, autonomic response to the test from androids. Rachael has a (delayed) emotional reaction to most of the scenarios, yet Deckard calls her response “simulated” (47), suggesting conscious deceit. As we’ve seen, Deckard has stated categorically that only humans have empathy (29), and he says later in the novel that androids are specifically designed as lacking empathy (170). A servant performing manual labor presumably has no need for empathy. But the novel notes,
Under U.N. law each emigrant automatically received possession of an android subtype of his choice, and, by 2019, the variety of subtypes passed all understanding, in the manner of American automobiles of the 1960s. (16)
The proliferation of subtypes suggests that androids do more than simply manual labor, which would hardly require the intelligence of a Nexus-6. Roy Baty has reportedly worked as a pharmacist (169), and some colonists have taken an android as their mistress (133). In fact, realizing that another person is similar to oneself is not really difficult conceptually. If androids can feel and are intelligent enough for language, then we might well ask how they could fail to have empathy. Empathy, after all, does not always equate to compassionate action. Human empathy has an essential cognitive component. To some extent, the plot wears thin here. But in the world of the novel, the Voight-Kampff test does effectively identify an android, even androids, like Rachael, that do not know they are androids. The plot premise is that androids lack the full degree of human empathy by design or because of an inability to program empathy. But we are still faced with the apparent contradiction between Rachael’s responses to the test and Deckard’s unqualified assertions that androids lack empathy.
One could hypothesize that not only have the androids learned the correct response, they have internalized it, that is, made it an element of their identity. Perhaps they want to become human. The android Luba Luft says later in the novel that since she arrived on Earth, “my life has consisted of imitating the human,” which she considers “a superior life-form” (124). The androids are the bioengineered mimesis of a human, and in addition, they consciously imitate humans. So they sincerely produce a human response on the Voight-Kampff test, but it is delayed because of some limitation in their physiology. Although Deckard believes that the androids do not have the typically human responses to Voight-Kampff test questions, he notes, “biologically they exist. Potentially” (44). Deckard’s comment does not clear up all the questions about the androids’ potential for empathy, but it does support the idea that they could learn to be genuinely empathetic. Just as some androids have developed independence, they have also apparently developed a degree of empathy—and the two could well go together. Rachael says later that she identifies with the android Pris, and we’ve seen that identification is the basis of empathy (173). The novel is not completely consistent on this question, but there is substantial evidence that Rachael and the escaped androids have in fact mimetically assimilated to the human community, by identification, just as children imitate those around them—in which case an android’s answers to the Voight-Kampff test are not really simulated.
One could object that there are places in the novel where an android is pointedly indifferent to the suffering of a living creature: the androids cut off the legs of a spider, completely oblivious to its suffering as well as the empathic suffering of the witness Isidore. While the spider is being dismembered, Roy and Irmgard go so far as to claim that empathy is just a “swindle” designed to justify discrimination against androids (193). Rachael kills Deckard’s goat by pushing it off the roof of his apartment building. These examples do not so much refute the assimilation thesis as suggest an incomplete assimilation. Moreover, one could point to examples of humans acting in exactly the same way, as when children pull the wings off a fly. In the case of Rachael killing Deckard’s goat, one could say that her desire for revenge is perhaps more typically human than the desire to prevent animal suffering. Revenge is the archetype of mimetic behavior, an eye for an eye. The capacity for revenge is the emotional flipside to the capacity for empathy. Humans have never conformed to ideal moral standards, whether in behavior or feelings, so why should we expect such from androids?
The Voight-Kampff test not only fails to account for an android’s ability to learn empathy, it also ignores that human empathy is largely a learned, cultural behavior, based on identification and shared attention—abilities that are distinctly human but also apparently possessed by the androids. Richard Viskovic notes that most contemporary readers would not find a calfskin wallet or a mounted stag head objectionable (174). These are taboos only because life has become so rare on earth, an exceptional circumstance in the history of the human species. The fact that the test involves the verbal presentation of a fictional situation also distances the response from pure instinct. Humans become inured to hearing about real tragedies in the news, and even more so fictional tragedies. To some extent, these are flaws in the plot of the novel. (Dick apparently subscribed to the Rousseauian notion that pity or empathy is a “natural” behavior.)
It’s possible that the problems with the test we’ve examined are meant to undermine the distinction between humans and androids. But Deckard is completely convinced, by experience, that androids cannot pass the Voight-Kampff test, and he is our authority on androids. Deckard changes his mind about androids during the course of the novel, but he doesn’t change his mind about the validity of the Voight-Kampff test for identifying androids and the capacity for empathy. When he wants to know if he feels empathy for androids, he relies on the Voight-Kampff apparatus. All in all, Dick wants to maintain the ambiguity surrounding androids, and the paradoxes of mimesis work well in this regard. One meaning of “imitation” is “fake,” but the reality of empathy, as we’ve seen, is rooted in imitation.
The Rosens initially explain Rachael’s inability to pass the Voight-Kampff test as a result of her being raised on a spaceship (50). They suggest that she hasn’t learned the correct responses because she was out of contact with the current situation on earth (although after Deckard proves that she is an android, Eldon admits that her childhood memories are implants and that she doesn’t know she is an android because “We programmed her completely”). Deckard thereupon gives her the test again using the briefcase supposedly covered with “babyhide”—the idea being that while she may not be familiar with the current taboos surrounding animals on earth, the imperative to protect a baby goes beyond culture to instinct. Because Rachael supposedly grew up on a spaceship, ignorant of the current situation on earth, she might not react appropriately to a contingent historical taboo about a mounted stag head, for example. But any normal human, anytime or anywhere, shares an instinctive horror of killing a baby. So Deckard actually acknowledges that the taboos on calfskin wallets and so on are cultural, not instinctual, which throws the whole premise of his testing out the window.
Is Deckard right to assume that the reaction to the babyhide briefcase would be impossible to feign? Yes and no. It’s true that humans, like most species, have an instinct to protect their young. And in the novel, there is a visual stimulus, a physical briefcase, which would increase the likelihood of a reaction. But the claim that it is babyhide is presented verbally, and language is by definition culture; and Rachael would have good cause to question Deckard’s claim, since every situation so far has been fictional, and Deckard would not likely own a babyhide briefcase. So the briefcase example still falls into the realm of culture and thus fails to validate the Voight-Kampff test.
Rachael’s character in this scene also works to undercut the idea that she is in any way qualitatively different from humans. She is represented as quite passionate and emotional (as are all the androids) throughout the entire encounter with Deckard. First, she is offended that Deckard, a lowly police officer, has such power over their huge corporation. She thinks she is human because she has been given false memories by the Rosen Association, as Eldon admits. Later in the novel, however, she claims to have been friends with the escaped androids for two years (which would predate the testing episode) and working on their behalf knowing that she is an android (182). But the testing episode at the Rosen corporation gives no hint that she knows she is an android. Just the opposite. She is “pale,” and “nodded fixedly” (56) when Eldon confirms that she is an android (which she had already suspected from the test). We can presume that Dick wrote the scene with the idea that she doesn’t know she is an android until Deckard and Eldon reveal it to her. When he wrote the scene late in the novel I referred to above, Dick found it effective to have her working on the side of the androids, since it works as an important plot reversal. If he thought about the earlier episode, he could safely assume readers would not be bothered. In the scene at the Rosen Association, she is characterized as emotionally human. She is outraged when Deckard says she is an android after the first round of testing, because she would have been executed by the police if she had been stopped and tested at one of the numerous police checkpoints throughout the city. And she is mortified to learn that she is an android, as I mentioned above. She “flinched” when Eldon put “his hand comfortingly on her shoulder,” reassuring her that Deckard will not kill her (57). There is no hint that she is feigning her responses. And of course later in the novel she falls in love with Deckard when sleeping with him (178).
We must also consider that Rachael is presented and considers herself as a member of the Rosen Association in this episode. After the Rosens claim to have invalidated the Voight-Kampff test, Deckard thinks,
He could not make out, even now, how the Rosen Association had managed to snare him, and so easily. Experts, he realized. A mammoth corporation like this—it embodies too much experience. It possesses in fact a sort of group mind. And Eldon and Rachael Rosen consisted of spokesmen for that corporate entity. His mistake, evidently, had been in viewing them as individuals. It was a mistake he would not make again. (52)
Later, after he realizes that Eldon falsely claimed that Rachael is human and tried to blackmail him, Deckard muses,
So that’s how the largest manufacturer of androids operates, Rick said to himself. Devious, and in a manner he had never encountered before. A weird and convoluted new personality type; no wonder law enforcement agencies were having trouble with the Nexus-6. (57)
His view of the Rosen Association as possessing “a sort of group mind” is suggestive because the practitioners of Mercerism, while in fusion, also have a sort of group mind. Of course, fusion in Mercerism is based on sharing joys and sorrows, and working as a group toward the goal of redemption, while the group mind of the Rosen Association is “[d]evious . . . A weird and convoluted new personality type.” The common denominator of a “group mind,” however, is significant because it reveals the ambivalence of mimesis as identification.
Deckard thinks that it was a mistake to assume that his encounters with the Rosen Association would follow the norms governing interactions between individuals. In Deckard’s view, the Rosens find themselves, like cogs in a machine, acting only for the impersonal interests of the corporation. On one level, this exemplifies Dick’s anti-institutionalism. His fiction typically represents corporations and governments as sinister entities that infringe upon human freedom while evading any moral responsibility.
The escaped androids are actually in cahoots with the Rosen Association, and the novel suggests that they are united in purpose. Rather strangely, the Rosen Association seems to be working to save escaped androids (“protecting its products” as Deckard puts it). Why else would they plot so deviously to undermine the Voight-Kampff test? As Rachael observes, it is in the business interests of the Association to validate the test (49). Once an android rejects its programming, it has no economic value for the Association; just the opposite, it becomes a liability. Later in the novel, we learn that Rachael has been working consistently to neutralize bounty hunters by sleeping with them, teaching them empathy for androids. One might think that she is working independently of the Rosen Association, but she says, “The association . . . wanted to reach the bounty hunters here and in the Soviet Union. This [her sleeping with the bounty hunters] seemed to work” (183). Rachael also notes, “We tried to stop you [killing androids] this morning” (182) by invalidating the Voight-Kampff test. Deckard says later that Rachael is a prototype “used by the manufacturer to protect the others” (203). We can say that Rachael is, to some degree, assimilated to this “new [corporate] personality type.” Rachael is the “other” for Deckard because of her connection to the Rosen Association and because she is technically an android (although in practice a human). Her activities for the androids (and against the work of bounty hunters) are motivated by her membership in the Rosen Association and her identity as an android.
Michel de Montaigne’s essay “On the Cannibals” is the classic work on the cultural “other.” The New World cannibals were feared as radically different from and hostile to the Europeans, but Montaigne shows that in some ways they were actually more civilized. Is it more barbaric to torture your enemy to death, or simply to kill and then eat him? Montaigne suggests that cannibalism, for both victim and sacrificer, was a ritual of revenge and honor, aristocratic values (239). When the “other” is demystified, they are revealed as a secret sharer, reflecting doubts and fears about ourselves: in the case of the cannibals, our own barbarity. Girard’s work on mimetic rivalry confirms Montaigne’s point. The mimetic double or other is threatening not because of their difference but their similarity, which makes them rivals, generating desire and creating conflict (Girard, TDBB 96). In Do Androids Dream, the androids’ status as doubles of the humans is uniquely motivated. Philip K. Dick’s novel provides a remarkable confirmation of Girard’s and Montaigne’s work. The fear of the androids expresses the fear that we too are merely “imitation machines” (as Tomasello puts it, using scare quotes), that empathy is a simulation and justice revenge. Our actions can be seen as mechanical, animated by mimetic tendencies that resist conscious control. Deckard demonizes the Rosens as “devious . . . A weird and convoluted new [corporate] personality type,” yet he finds himself driven by his job to “violate his own identity,” as he puts it later in the novel (164).
Supposedly Rachael lacks empathy, even though her responses fail the test only on a technicality, a fraction-of-a-second delay. As I noted above, she does in fact have an autonomic emotional response to the situations presented, even though they are fictional and presented only verbally and briefly. Deckard and Eldon in this scene are considerably more unfeeling than she is. Deckard compares, in his mind, an android with his electric sheep (40). He starts to think of Rachael as “it” rather than a person, despite her completely human behavior. Eldon uses Rachael as bait in his attempt to blackmail Deckard. Both Deckard and Eldon are unconcerned with revealing to Rachael that she is an android, although we can imagine how devastating it would be for a person capable of believing themselves human to learn that they are in fact not human.
The fact that we are given consequential choices gives empathy a deservedly high moral value, but androids have choice too. They often act unpredictably, and there is no hint in the novel that they lack free will.
Many readers have noticed that the novel undermines the distinction between androids and humans—but without recognizing the underlying basis for their similarity, which is the process of mimesis, the defining characteristic of both humans and androids.
Several critics have approached the androids in the novel as reflecting anxieties surrounding modern technology. Klaus Benesch, for example, argues that “cybernetic representations in modern art . . . [have] come to function as the cultural Other of technological society” (381). Benesch emphasizes that cyborgs represent cultural fantasies about technology rather than its practical functions. Nevertheless, the fantastical representation of cyborgs, for Benesch, is the outcome of the increasing power of technology in our lives. They are a reflection of our “posthuman” identity as “fabricated hybrids of machine and organism” (Donna Haraway, qtd. by Benesch 386). Jill Galvan, along the same lines, claims that the novel represents the progression of Deckard’s reluctant recognition that there is no stable boundary between humans and machines, that the destiny of humans is “to relinquish a self that has outgrown traditional human bounds—to be subsumed, in other words, into the posthuman collective” (428). The work of these critics suffers from an inadequate understanding of mimesis as a constitutive element in human social identity, one that makes possible our notable freedom and self-consciousness while posing a constant temptation to destructive rivalry; the androids are emblematic of both sides of mimesis. The modern progress of technology may have been the historical occasion for Dick’s novel, but Do Androids Dream is not fundamentally about our relationship with technology but rather our relationship with ourselves and each other, as informed by mimesis in all its senses.
Jennifer Rhee, in her analysis of the novel, draws on Masahiro Mori’s theory of the “uncanny valley,” whereby robots that too closely approach human appearance strike viewers as uncanny. Rhee affirms that the androids in Dick’s novel are uncanny, as Mori’s theory would predict, and she also claims that the “uncanny confusions” between android and human serve to destabilize conventional definitions of the human and reveal our “deep inhumanity” (327), specifically the treatment of marginalized populations. Many critics have seen the androids as representative of oppressed human groups (or animals, in the case of Vint). By deconstructing the prejudice against androids, the novel certainly allows such a reading. But what all these scholars miss is the central and unique role of mimesis in the representation of the androids. The androids represent fears and hopes about our own ambivalent origin and nature; the application to oppressed groups is tangential. The novel’s androids are emblematic of the threat and promise of mimesis in human relationships and society.
- Isidore, Mercer, and Androids
The episodes involving J.R. Isidore serve to explore further the ambivalence of mimesis. He is the most empathetic person in the novel and the most devoted follower of Mercerism, a religion of empathy. As such, Isidore highlights the power but also the limits of empathy. For one thing, he fails to distinguish between a real and an electric cat in his job. He feels empathy even for electric animals, as the repairman Milt Borogrove observes (73). In his job as an “ambulance” driver for broken electric animals, he accidentally encounters a live cat that is dying, but he assumes it is electric and tries to find the electrical connections. His employer, Hannibal Sloat, sees this as demonstrating not only his stupidity but also, in a sense, a failure of empathy, in not distinguishing true suffering from simulated suffering (electric animals are programmed to simulate disease when they break down). When empathy is not tempered with intelligence, it can be misdirected to inappropriate objects. One could argue that the value of empathy depends upon its ability to discriminate. What good is empathy when directed to a machine without feelings? The indiscriminate nature of Isidore’s empathy illustrates the mechanical nature of mimesis (as identification) when not supplemented by reason.
The same issues are raised by Isidore’s interactions with the androids Pris, Irmgard, and Roy. Isidore lives in a deserted apartment building in the suburbs, and one day he hears noises from another apartment indicating that he has a neighbor in the building. He knocks on the door of the apartment, and we get this rather remarkable description of the inhabitant, Pris:
The door, meagerly, opened and he saw within the apartment a fragmented and misaligned shrinking figure, a girl who cringed and slunk away and yet held onto the door, as if for physical support. Fear made her seem ill; it distorted her body lines, made her appear as if someone had broken her and then, with malice, patched her together badly. Her eyes, enormous, glazed over fixedly as she attempted to smile. (59)
On the one hand, she is described as doll-like, “fragmented and misaligned,” “broken,” evoking the idea of androids as merely puppets. Pris stands in stark contrast with the confident and efficient Rachael, who is her double in appearance and programming. Although Pris is a Nexus-6, the overwhelming initial impression of her is of a childlike waif. Despite being a literal double of Rachael, she is radically different because of her situation. She is sensitive, unlike a robot. So although she is described in mechanical terms as “patched . . . together badly,” the effect of the description reinforces her humanity. Pris is like a human in her psychological fragility, her dependence upon circumstance and community. If she were simply a machine, presumably she would be more impervious to circumstance.
As this scene develops, Pris gains in confidence as she realizes that Isidore is alone and mentally defective, posing no threat to her. Her capacity for rapid psychological change is another example of her humanity. She expresses contempt for Isidore when she discovers that he is a “special” and treats him coldly:
Now that her initial fear had diminished, something else had begun to emerge from her. Something more strange. And, he thought, deplorable. A coldness. Like, he thought, a breath from the vacuum between inhabited worlds, in fact from nowhere: it was not what she did or said but what she did not do and say. (63)
The “coldness” is partly her attitude towards Isidore as a “chickenhead,” but it is also something that we find in other androids in the novel at various points, so it can be taken as a typical characteristic. The androids are “deplorable” because they represent a hubristic attempt to play god, to upset the natural order, to control nature through technology. Philip K. Dick apparently found the literal fabrication of a human simulacrum horrific. In an interview, he said he conceived of the androids in his novel as “deplorable,” lacking empathy (Sammon 27). At the same time, the androids as represented are practically human, despite their “deplorable” origin in the Rosen Association. Their duality is ultimately symptomatic of humans, who are capable of love and empathy but who can also act with mechanical self-interest. Living creatures, on one level, are biological machines, programmed by their genes to survive and reproduce at any cost. The androids reflect the fear that this is all we are. They represent our fears about ourselves and other humans, who can act with utter impersonality.
At first Isidore doesn’t realize that his neighbor Pris is an android, although she treats him without any consideration of his feelings—which complicates his sympathy for the androids, since in general they treat him with very little empathy or concern. Irmgard is the only one who appreciates him at all. What value is empathy when it is not reciprocated? Later, when he figures out that they are androids, he doesn’t especially care and continues to value them as friends, despite knowing that they are using him for selfish purposes (188).
In his relationship with the androids, Isidore is a Christ-like figure, in the sense of Dostoevsky’s novel The Idiot. On one level, Do Androids Dream is unambiguously Christian, albeit a tragic Christianity without any assurance of salvation. Mercer tells Deckard in fusion, “There is no salvation,” but we don’t know exactly what he means by “salvation” (164). We do know that Mercer saves Deckard’s life by warning him that Pris is approaching with a laser tube to kill him (203). The basic doctrine of Mercerism, recounted by Isidore, is, “Wilbur Mercer is always renewed. He’s eternal. At the top of the hill he’s struck down; he sinks into the tomb world but then he rises inevitably. And us with him. So we’re eternal, too” (71). Mercer tells Deckard that he exists “To show you . . . that you aren’t alone. I am here with you and always will be” (164). Humans have the continued presence and nonjudgmental devotion of Mercer/Christ.
Mercerist fusion connects one to something larger than oneself, the community of people sharing the same faith. Interestingly, Roy Baty thinks that androids should be able to connect in this way to other androids, and reportedly he and his followers have tried using drugs to share such an experience, although the results are not described (169). In any case, his followers regard him as a great spiritual leader (182). Their religious strivings are another example of their humanity.
The androids are uniformly hostile to Mercer. The popular TV and radio show host Buster Friendly (who, the androids disclose, is one of their own, unbeknownst to the bounty hunters) often makes snide remarks about Mercerism. Isidore can’t understand why Buster Friendly is opposed to Mercerism, given its doctrines and the government’s approval. Despite his mental degradation, Isidore shows some creativity in conjecturing that the reason for Buster’s hostility is envy. Isidore concludes that Buster and Mercer are “in competition . . . for control of our psychic selves” (70). When he shares his theory with his boss, Hannibal Sloat, they have an interesting exchange:
Isidore said, “I think Buster Friendly and Mercerism are fighting for control of our psychic souls.”
“If so,” Sloat said, examining the cat, “Buster is winning.”
“He’s winning now,” Isidore said, “but ultimately he’ll lose.”
Sloat lifted his head, peered at him. “Why?”
“Because Wilbur Mercer is always renewed. He’s eternal. At the top of the hill he’s struck down; he sinks into the tomb world but then he rises inevitably. And us with him. So we’re eternal, too.” He felt good, speaking so well; usually around Mr. Sloat he stammered.
Sloat said, “Buster is immortal, like Mercer. There’s no difference.”
“How can he be? He’s a man.”
“I don’t know,” Sloat said. “But it’s true. They’ve never admitted it, of course.”
“Is that how come Buster Friendly can do forty-six hours of show a day?”
“That’s right,” Sloat said.
“What about Amanda Werner and those other women?”
“They’re immortal, too.”
“Are they a superior life form from another system?”
“I’ve never been able to determine that for sure,” Mr. Sloat said. . . . “As I have conclusively in the case of Wilbur Mercer,” he finished almost inaudibly. (71-2)
Hannibal Sloat’s perspective here is a little obscure, but the implication is that he has deduced that Buster Friendly is an android. Sloat is not distinguished by his empathy for Isidore or anyone else, nor does he display any sympathy for Mercerism. He says, “There’s no difference” between Buster Friendly and Mercer, so we can presume that his comment that he has “conclusively” determined whether or not Mercer is a “superior life form from another system” is his ironic way of suggesting that he thinks Mercer is a fraud, as the TV show later confirms (at least on one level). By equating Mercer and Buster Friendly, he suggests that they are both equally fraudulent. His comment that he’s “never been able to determine [Buster’s identity] for sure” is probably ironic, since it’s obvious in light of his appearing on TV and radio for “forty-six hours of show a day” with no repetition, according to Isidore, a devoted viewer/listener (69). (There are presumably two or more Buster Friendly androids, who are replaced when they wear out—one form of “immortality.”)
Since Mercer appears in person to Deckard (and perhaps Isidore) without any aid from an empathy box, it is safe to assume that Mercer actually is some kind of alien or supernatural life form dedicated to aiding humankind. Isidore echoes the belief of Mercer’s followers: “Mercer, he reflected, isn’t a human being; he evidently is an archetypal entity from the stars, superimposed on our culture by a cosmic template” (65).
We can regard the staging of Mercer’s ascent by an alcoholic two-bit actor as an accident of history irrelevant to the larger truth of Mercerism. Buster Friendly’s exposé only concerns the screen imagery of the empathy box; compare the sympathetic communion with Christ experienced by believers through medieval artistic representations of the Crucifixion (the model for Mercerist fusion); a host of contingencies may play a part in the creation of any artwork, even one of spiritual nobility. It’s true that in his conversation with Isidore, Mercer identifies himself with the “elderly retired bit player named Al Jarry” (197), but his identity is obviously not limited to Al Jarry, since he appears independently to Deckard and Isidore. Mercer apparently identifies with all humans and animals in their suffering. Mercer’s appearance as Al Jarry is a logical extension of the Christian doctrine of the Incarnation, whereby Christ was fully subject to all the frailties and contingencies of a human body. Christ appeared as Jesus, a carpenter from Nazareth in Galilee, and Mercer is manifested as Al Jarry, a bit player living in East Harmony, Indiana (191). Mercer says to Isidore, “I don’t judge, not even myself” (198). He admits freely, “I am a fraud,” but with the important qualification, only “From their standpoint” (197):
Mercer smiled. “It was true. They did a good job and from their [the androids’] standpoint Buster Friendly’s disclosure was convincing. They will have trouble understanding why nothing has changed. Because you’re still here and I’m still here.” Mercer indicated with a sweep of his hand the barren, rising hillside, the familiar place. “I lifted you from the tomb world just now and I will continue to lift you until you lose interest and want to quit. But you will have to stop searching for me because I will never stop searching for you.” (198)
The androids’ objections to Mercerism are twofold. Buster Friendly speculates that Mercer is the invention of some “would-be Hitler” who wants to control people (193). Mercer is a myth designed to enable fusion for precisely that purpose. Irmgard, on the other hand, says that Mercer functions to propagate the lie that empathy is what distinguishes humans. She seems to believe that empathy doesn’t exist; it’s just something invented to prove the superiority of humans. The escaped androids apparently believe they don’t have empathy, despite their love for and efforts to help each other. Irmgard comments, after Buster Friendly’s exposé,
Isn’t it [empathy] a way of proving that humans can do something we can’t do? Because without the Mercer experience we just have your word that you feel this empathy business, this shared, group thing. How’s the spider? (193, emphasis in original)
In light of the novel’s sympathetic portrayal of Mercerism, it’s rather strange that Galvan endorses Buster Friendly’s “heavy-handed” insinuation that
Mercerism and the ideology of empathy that is its mainstay, far from appealing to innate human characteristics, function merely as a means by which the government controls an otherwise unwieldy populace. (Galvan 416)
But the novel actually defends “the ideology of empathy” very effectively, and there is no evidence that Mercer represents anyone but himself. Isidore finds a live spider in the hallway of his building, an extremely rare experience on earth. He shows the spider to the androids, who are curious. They wonder why it has so many legs, and Pris says, “You know what I think, J. R.? I think it doesn’t need all those legs,” whereupon she proceeds methodically to clip the legs off the spider one by one. In response, “A weird terror struck at J. R. Isidore” (190). Pris comments earlier in the novel on the irony that “All life. Everything organic that wriggles or squirms or burrows or flies or swarms or lays eggs” is protected by law, while any android on earth must be hunted and killed (148). Roy notes that, for humans, “Insects . . . are especially sacrosanct” (149).
In this remarkable passage, Dick intertwines four different yet closely related narrative strands. On TV, Buster Friendly reveals the origin of the empathy box screen video in a television studio using a small-time, bit-part actor to portray Mercer. Friendly and his guests celebrate the exposure of Mercer as a fraud. Roy, Irmgard, and Pris, meanwhile, rejoice in Buster’s revelation that “Mercerism is a swindle” and “The whole experience of empathy is a swindle” (193). At the same time, Pris is clipping the legs off the spider, in response to which Isidore is rapidly sinking into panic, culminating in the disintegration of reality and a descent into the “tomb world.” (In case the reader thinks that Isidore’s empathy is misplaced, Dick reaffirms the sanctity of insect life when Deckard is envious of Isidore for finding a spider: “I’ve never found a live, wild animal. It must be a fantastic experience to look down and see something living scuttling along. Maybe it’ll happen someday to me like it did to him.”) The androids are insensible to the pathos of the spider’s fate and Isidore’s total identification with the spider’s suffering:
“Please,” Isidore said.
Pris glanced up inquiringly. “Is it worth something?”
“Don’t mutilate it,” he said wheezingly. Imploringly.
With the scissors, Pris snipped off one of the spider’s legs. (190)
After Pris snips off four legs of the spider, she comments,
“He won’t go. But he can.”
[ . . . ]
“It won’t try to walk,” Irmgard said.
“I can make it walk.” Roy Baty got out a book of matches, lit a match; he held it near the spider, closer and closer, until at last it crept feebly away. (193-4)
Not only are they unconcerned about the spider and Isidore’s misery, they don’t even comprehend it. They can only conjecture that Isidore is upset about the monetary value of the spider, and they reassure him that they will recompense him. Eventually Pris discerns the real source of Isidore’s suffering but says blandly, “he’ll get over it” (195). This passage is a literary tour de force, inspired perhaps by Flaubert’s ironic juxtaposition of Madame Bovary’s seduction with the events of Yonville’s agricultural fair. The effect is to contrast the indubitable suffering of Isidore and the spider with the seeming victory of Buster and the androids over the “myth” of Mercer and empathy. Empathic suffering is genuine. Buster and the androids’ cynical efforts to deny it are the real fraud.
Dick underlines the power of Isidore and the spider’s shared suffering when the androids too begin to experience the disintegration of the apartment walls and furnishings, despite their lack of empathy for the spider. At first the reader assumes that Isidore’s experience is purely subjective, but then Irmgard says, “What’s he doing? . . . He’s breaking everything! Isidore, stop—” (196). The dissolution of the apartment, however, is only temporary. Mercer arrives in the “tomb world” of Isidore’s experience, speaks to Isidore, and resurrects the spider, although the androids are unable to see Mercer (197-8).
At least one critic claims that Isidore goes “berserk” and is physically destroying the apartment himself in this passage. Several facts controvert this interpretation. First of all, Isidore responds plainly to Irmgard, “I’m not doing it” (196), and the narrator never mentions Isidore actively breaking anything. Isidore may be mentally defective, but he is not insane, and he does not perceive himself destroying anything. Isidore is never portrayed as actually or potentially violent. The dissolution of the furniture, walls, and floor of the apartment, and the emergence of dead animals and “mummified hands,” are effects of the spider’s mutilation and Isidore’s Mercerist descent into the tomb world. Second, we know that Irmgard is sensitive to Isidore’s distress because earlier in this scene she says to Pris and her husband, “It makes me terribly upset, him just standing there by the sink and not speaking” (194) while the spider is dismembered. The disintegration of the apartment is physically centered on Isidore, as everything around him is devolving into the tomb world, and Irmgard recognizes the connection to Isidore, hence her comment. Her sensitivity enables Irmgard to recognize correctly that Isidore’s personal experience has somehow expanded out into the physical world of the apartment (at least partially), even if she has no clue how and why. Third, and most important, a narrative technique that Dick commonly uses in this and other novels is the oscillation between the psychological and the ontological realms; for example, in Do Androids Dream, users of the empathy box while in fusion suffer physical wounds from the rocks thrown by the killers, even though fusion is a purely mental experience. The androids’ experience of the dissolution of the apartment is another example of such oscillation, serving to emphasize the power of Isidore and the spider’s experience.
Isidore views the opposition of Buster Friendly to Mercer as a battle for control of humans’ souls, which places this contest in a universal perspective. The novel is informed by a cosmic dualism of good and evil. Evil is sometimes expressed as entropy, reducing meaningful form into kipple. In Mercerism, evil is embodied by the anonymous killers—originally government agents—who oppose Mercer in his ascent from the tomb world. Sometimes Deckard finds himself aligned with the forces of evil, as when he plans to kill Luba Luft, the talented opera singer (91). Manichean dualism is a recurrent theme in Dick’s novels. Dick apparently believed in a cosmic opposition of good and evil, despite the ambiguity of its expression among humans. Humans are inextricably entangled in evil. As Mercer says to Deckard when he is reluctant to kill the remaining escaped androids,
You will be required to do wrong no matter where you go. It is the basic condition of life, to be required to violate your own identity. At some time, every creature which lives must do so. It is the ultimate shadow, the defeat of creation; this is the curse at work, the curse that feeds on all life. Everywhere in the universe. (164-5)
The mingled yarn of good and evil reflects the intrinsic ambivalence of human mimesis: the source of empathy but also rivalry; mechanical imitation yet also the enabling condition of human freedom.
- The Uncanny as Mimetic Effect
As I mentioned above, the uncanny, according to Jentsch, can be produced by a reader’s “uncertainty . . . [about] whether a particular figure in the story is a human being or an automaton” (qtd. in Freud 227). I’ve argued that the androids reflect our fears that humans may be automatons, compelled by mimetic instincts or simply the biological drive to survive and reproduce. Do Androids Dream is not primarily a novel of the uncanny, but there are certain episodes that produce this effect, notably the interviews with Luba Luft and Inspector Garland.
Identity is fluid and unstable in the novel. Mimesis works to undermine the distinction between self and other. Dick creates uncertainty about the identity of Luba Luft, which in turn reflects uncertainty onto Deckard. When Deckard confronts her in her dressing room at the San Francisco Opera House, she says:
“Who are you?” Her tone held cold reserve — and that other cold, which he had encountered in so many androids. Always the same: great intellect, ability to accomplish much, but also this. He deplored it.
“I’m from the San Francisco Police Department,” he said.
“Oh?” The huge and intense eyes did not flicker, did not respond. “What are you here about?” Her tone, oddly, seemed gracious. (93)
Luba’s tone not only “held cold reserve” but also “that other cold” that distinguishes the androids as inhuman in Deckard’s eyes. Yet the tone of her very next sentence, “oddly, seemed gracious.” The representation of Luba and the androids is deliberately contradictory, reflecting the paradoxical nature of humans, and designed to create an uncertainty about identity that kindles an uncanny affect.
With uncanny insight or intuition, Luba turns all of Deckard’s concerns back upon him:
“Do you think I’m an android? Is that it?” Her voice had faded almost to extinction. “I’m not an android. I haven’t even been on Mars; I’ve never even seen an android!” Her elongated lashes shuddered involuntarily; he saw her trying to appear calm. “Do you have information that there’s an android in the cast? I’d be glad to help you, and if I were an android would I be glad to help you?”
“An android,” he said, “doesn’t care what happens to any other android. That’s one of the indications we look for.”
“Then,” Miss Luft said, “you must be an android.”
That stopped him; he stared at her.
“Because,” she continued, “Your job is to kill them, isn’t it? You’re what they call — ” She tried to remember.
“A bounty hunter,” Rick said. “But I’m not an android.”
“This test you want to give me.” Her voice, now, had begun to return. “Have you taken it?”
“Yes.” He nodded. “A long, long time ago; when I first started with the department.”
“Maybe that’s a false memory. Don’t androids sometimes go around with false memories?”
Rick said, “My superiors know about the test. It’s mandatory.”
“Maybe there was once a human who looked like you, and somewhere along the line you killed him and took his place. And your superiors don’t know.” She smiled. As if inviting him to agree.
“Let’s get on with the test,” he said, getting out the sheets of questions.
“I’ll take the test,” Luba Luft said, “if you’ll take it first.”
Again he stared at her, stopped in his tracks.
“Wouldn’t that be more fair?” she asked. “Then I could be sure of you. I don’t know; you seem so peculiar and hard and strange.” She shivered, then smiled again. Hopefully. (93-4)
The novel mentions several times that androids don’t care what happens to other androids, although there are many examples of androids taking great care for other androids. Rachael has dedicated her life to saving escaped androids. Pris’s face “dissolved in rapture” when she is reunited with her friends Roy and Irmgard (141). But when she finds out that fellow androids Polokov and Luba have been killed, “The joy which had appeared on Pris’s face at seeing her friends at once melted away” (143). Roy and Irmgard have married, presumably, to express their love for each other. And Roy certainly cares what happens to his wife, letting out “a cry of anguish” at her death (205). Although the androids reportedly are unable to experience fusion, their empathy for each other suggests otherwise. There aren’t any examples in the novel of an android being indifferent to the fate of a fellow android, although the humans believe this is typical. Humans don’t care about androids, yet they fault the androids for their supposed lack of empathy for each other.
Luba ingeniously suggests that perhaps Deckard is an android but doesn’t know, or that possibly he killed the real Deckard and is now impersonating him. When she observes, “you seem so peculiar and hard and strange,” she cannily turns the stereotype of the android back upon him. When she directly accuses him of being an android because he doesn’t care about androids, “That stopped him; he stared at her.” And when she suggests he take the test first, to assuage her doubts, “Again he stared at her, stopped in his tracks.” This is a decidedly uncanny encounter, as Deckard, along with the reader, must consider whether Deckard himself is an android with implanted memories.
Androids are supposedly distinguished by their lack of affect, but sometimes what we find is inappropriate affect. Luba’s affect in this scene is rather curious. She smiles at incongruous times, which is part of the uncanny effect of this scene. Why is she smiling and gracious, when she knows that Deckard is planning to kill her and that she must defend her life? Of course she is playing a role in this scene, pretending to be human. But there is also the sense that she is like a child who is not sure of the right attitude to take in an unfamiliar situation. Luba is apparently not in control of her emotions, when “Her voice faded almost to extinction. . . . Her elongated lashes shuddered involuntarily, and he saw her trying to appear calm” (93-4).
Later in the novel, after she has been revealed as an android, she says, rather surprisingly,
I really don’t like androids. Ever since I got here from Mars my life has consisted of imitating the human, doing what she would do, acting as if I had the thoughts and impulses a human would have. Imitating, as far as I’m concerned, a superior life form. (124)
While she affirms the superiority of humans, in her efforts to improve herself (going from slavery to opera singer) she actually shows herself better than the humans in the novel, who sink too willingly into despair. Deckard often accuses the androids of giving up too easily, but the humans on earth have essentially given up hope, while the escaped androids are actively working to improve their lives. Luba’s dislike for androids doesn’t mean betraying them but rather trying to be better than her origin, a very human aspiration.
The encounter with Luba is a turning point in Deckard’s attitude toward androids, because he loves opera and she is a great singer, comparable to the best singers performing or on record during Dick’s lifetime. Great opera singers, of course, are known for their ability to feel and communicate emotion in their song. So Dick chose her profession to contradict the stereotype of androids as lacking emotion. When Deckard hears her sing, “he perceived himself sub specie aeternitatis, the form-destroyer called forth by what he heard and saw here” (91-92).
After Luba calls the police, Officer Crams shows up and the uncanny affect kicks up a notch. Deckard calls his boss, Inspector Harry Bryant, who offers to help him by speaking to Crams. But when Crams goes to the vid-phone, the screen appears blank, and attempts to call Bryant again prove fruitless. The narrative is sending us contradictory clues here. Is Deckard living in some kind of alternate reality, as we find in many of Dick’s novels? Is he really a bounty hunter, or is the whole novel his psychotic fantasy? As Crams asks him, “What do you do, roam around killing people and telling yourself they’re androids?” (102).
“Maybe you’re an android,” Officer Crams said. “With a false memory, like they give them. Had you thought of that?” (103)
When Crams takes him to the San Francisco police station, they don’t go to the one that Deckard knows. Officer Crams tells him that the Hall of Justice building (police station) on Lombard Street, where Deckard usually reports, is “disintegrating; it’s a ruin. Nobody’s used that for years” (102). They soon arrive at the Mission Street Hall of Justice building:
the roof of which . . . jutted up in a series of baroque, ornamented spires; complicated and modern, the handsome structure struck Rick Deckard as attractive—except for one aspect. He had never seen it before. (104)
If Deckard has been living in San Francisco and working the streets, it seems unlikely that he has never noticed this renovated building before, which contains a fully staffed and functioning police station.
Inside the station, Crams reports that he found a body (Polokov) inside Deckard’s hovercar and that they are checking it out with a bone marrow analysis at the lab to see if it is an android (105). I want to discuss the representation of the androids’ bodies in death, which might seem like a digression but is actually relevant to my point about their uncanny ambiguity. Living androids, of course, are identical to humans, except for the results from the Voigt-Kampff or Boneli tests, which are based on the android’s physiology and require expert application and interpretation. When an android is killed, the description of the body is rather strange. When Deckard shoots Polokov,
the .38 magnum slug struck the android in the head and its brain box burst. The Nexus-6 unit which operated it blew into pieces, a raging, mad wind which carried throughout the car. Bits of it, like the radioactive dust itself, whirled down on Rick. (86)
There is no mention of blood. And in fact, Deckard goes on with his job with the dead body in the car. Instead of a brain, the android has a “brain box . . . . The Nexus-6 unit which operated it.” Rather than blood and brain tissue, it explodes into “Bits” like “dust” that whirl around the car. Polokov’s body, in death, sounds more like a robot than a biological organism. Yet his body, even with its head split open, requires a bone marrow analysis to be identified as android. When Rachael says that she’s not alive, Deckard responds, “Legally you’re not. But really you are. Biologically. You’re not made out of transistorized circuits like a false animal; you’re an organic entity” (181). The androids “wear out” in about four years because of a problem with “cell replacement” (181). Rachael reports that the development of a Nexus-7 will involve “modifications of its zygote-bath DNS factors” (174), which is technobabble, but “zygote” is a biological term for a fertilized ovum. So on one level, the novel wants to insist that the androids are 100% biological organisms. But in the description of an android’s death, of which Polokov’s is typical, they seem to be robots. This ambiguity comes up in Isidore’s experience:
Isidore, then, had a momentary, strange hallucination; he saw briefly a frame of metal, a platform of pullies and circuits and batteries and turrets and gears — and then the slovenly shape of Roy Baty faded back into view. Isidore felt a laugh rise up inside him; he nervously choked it off. And felt bewildered. (146)
This is described as a “strange hallucination,” but the ambiguity is typical regarding the bodies of the androids at certain points. The narrative effectively creates “uncertainty . . . [about] whether a particular figure in the story is a human being or an automaton” (Jentsch, qtd. in Freud 227). But whereas the protagonist’s mistake in “The Sandman” is plainly an illusion (there’s no question that Olympia is really a doll, except in the fantasy of the protagonist), here Dick literalizes the allegory, as it were, creating an ontological ambiguity within the world of the novel and aesthetically for readers.
When Luba points a laser at Deckard, he is sure he is going to die; likewise, as he’s being taken to the police station by Crams he thinks to himself that he’s going to his death. But despite having ample opportunity and motivation for killing him, none of them—Luba, Crams, or Garland—does so. While this doesn’t make any sense in terms of plot, it works well to create doubts in readers’ minds about whether they are really androids or not.
At the police station, Deckard calls his wife, but a woman answers whom he has never seen before—was his whole previous life somehow delusional? Inspector Garland finds in Deckard’s briefcase a list of androids to retire, and Garland is next on the list, suggesting that Garland might be an android. Garland calls in the station’s bounty hunter Phil Resch to check whether Deckard is on Resch’s list of androids to retire. Garland mentions to Resch that they are running a bone marrow test on Polokov, and Resch says, “I don’t think it’s a good idea to run a bone marrow test on him” (108), suggesting that he thinks Polokov is an android, but raising the question of why they would not want to know this information. The characters’ motivations in this scene are extremely obscure, for readers and characters alike. There’s no coherent sense of who is an android and who is not. Both Garland and Lyft accuse Resch of being an android, and his lack of empathy for androids seems to confirm this. But later in the novel he proves to be human, again calling into question the difference between androids and humans.
In his analysis of the uncanny, Freud argues that uncertainty about identity is not enough to explain the uncanny experience, and he invokes the return of the repressed (241). But uncertainty and the return of the repressed are not mutually exclusive explanations. In fact, they go together naturally. What has been repressed are fears about identity. Mimesis is central to what it means to be human, but that heritage is distinctly ambivalent. On the one hand, as Tomasello’s work suggests, mimetic identification and shared attention are the source of empathy, human identity, and consciousness, allowing for the remarkable freedom and rationality of humans. But at the same time, imitation is by definition a mechanical process, and as Girard shows, in the form of rivalry it can lead to violence. The androids reflect fears about the dark side of mimesis; they are uncanny because they are doubles of ourselves.
There is anthropological content to Deckard’s uncanny encounters. The similarities between androids and humans demonstrate that their (and our) status as conscious agents depends upon identification and shared attention, the scenic source of culture. Children must learn language (the distinguishing quality of humans), and androids share that ability. But we must immediately qualify this statement by noting that the novel, at the same time, insists upon the difference between androids and humans. The representation of the androids is contradictory—arguably incoherent—except that it functions to create uncertainties that are central to the novel’s aesthetic and meaning. The mimetic and ambivalent androids point us back to our mimetic and ambivalent origin. As “imitation” animals, talking apes, mechanical yet free, empathic yet cruel, humans themselves are uncanny; and the representation of the androids reflects this anthropological insight.
Aristotle. Poetics. Translated by Malcolm Heath. London: Penguin Books, 1996.
Bartlett, Andrew. Mad Scientist, Impossible Human: An Essay in Generative Anthropology. Aurora, CO: Davies Group, 2014.
Batson, C.D. “Folly Bridges.” Bridging Social Psychology. Ed. P.A.M. Lange. Mahwah, NJ: Erlbaum, 2006. 59-64.
Baudrillard, Jean. Simulacra and Simulation. Trans. Sheila Faria Glaser. Ann Arbor: University of Michigan Press, 1994.
Benesch, Klaus. “Technology, Art, and the Cybernetic Body: The Cyborg as Cultural Other in Fritz Lang’s Metropolis and Philip K. Dick’s Do Androids Dream of Electric Sheep?” Amerikastudien/American Studies 44.3 (1999): 379-392.
Butler, Andrew M. “Chapter 3: Philip K. Dick, Do Androids Dream of Electric Sheep?” The Popular and the Canonical: Debating Twentieth-Century Literature 1940–2000. Ed. David Johnson. The Open University–Routledge, 2005. 108-152.
Deacon, Terrence W. The Symbolic Species: The Co-evolution of Language and the Brain. New York: Norton, 1997.
Decety, Jean. “A Social Cognitive Neuroscience Model of Human Empathy.” Social Neuroscience: Integrating Biological and Psychological Explanations of Social Behavior. Eds. Eddie Harmon-Jones and Piotr Winkielman. New York: Guilford Press, 2007. 246-270.
Dick, Philip K. Do Androids Dream of Electric Sheep? 1968. Del Ray–Random House, 2017.
Dijksterhuis, Ap. “Why We Are Social Animals: The High Road to Imitation as Social Glue.” Perspectives on Imitation: From Neuroscience to Social Science Vol. 2. Eds. Susan Hurley and Nick Chater. Cambridge, MA: MIT Press, 2005. 207-220.
Donald, Merlin. “Imitation and Mimesis.” Perspectives on Imitation: From Neuroscience to Social Science Vol. 2. Eds. Susan Hurley and Nick Chater. Cambridge, MA: MIT Press, 2005. 283-300.
Fitting, Peter. “Reality as Ideological Construct: A Reading of Five Novels by Philip K. Dick.” Science Fiction Studies 10 (1983): 219-236.
Freud, Sigmund. “The ‘Uncanny.’” The Standard Edition of the Complete Psychological Works of Sigmund Freud. Vol. 17. Trans. Alix Strachey. New York: Vintage, 2001. 217-256.
Galvan, Jill. “Entering the Posthuman Collective in Philip K. Dick’s Do Androids Dream of Electric Sheep?” Science Fiction Studies 24 (1997): 413-429.
Gans, Eric. “Language Origin in History V: Rousseau’s Prelinguistic Pity.” Chronicles of Love and Resentment, No. 185 (Saturday, October 16th, 1999). http://anthropoetics.ucla.edu/views/vw185/ .
— — —. Originary Thinking: Elements of Generative Anthropology. Stanford: Stanford University Press, 1993.
— — —. Signs of Paradox: Irony, Resentment, and Other Mimetic Structures. Stanford: Stanford University Press, 1997.
Garrels, Scott R. “Imitation, Mirror Neurons, and Mimetic Desire: Convergence Between the Mimetic Theory of René Girard and Empirical Research on Imitation.” Contagion: Journal of Violence, Mimesis, and Culture 12-13 (2005-2006): 47-86.
Girard, René. The Girard Reader. Edited by James G. Williams. New York: Crossroad Publishing, 1996.
— — —. “To double business bound”: Essays on Literature, Mimesis, and Anthropology. Baltimore, MD: Johns Hopkins University Press, 1978.
Hayles, N. Katherine. How We Became Posthuman: Virtual Bodies in Cybernetics, Literature, and Informatics. Chicago: University of Chicago Press, 1999.
Hoffmann, E.T.A. “The Sandman.” Tales of Hoffmann. Trans. R. J. Hollingdale. London: Penguin, 1982. 85-126.
Huntington, John. “Philip K. Dick: Authenticity and Insincerity.” Science Fiction Studies 15 (1988): 152-160.
Montaigne, Michel de. “On the Cannibals.” The Complete Essays. Trans. M. A. Screech. London: Penguin, 1991. 228-241.
Müller, Markus. “Reconsidering the Fantastic: An Anthropological Approach.” Anthropoetics: The Journal of Generative Anthropology 2.2 (Fall/Winter 1996).
Palmer, Christopher. Philip K. Dick: Exhilaration and the Terror of the Postmodern. Liverpool: Liverpool University Press, 2003.
Pascal, Blaise. Pascal’s Pensées. Trans. Martin Turnell. New York: Perennial–Harper & Row, 1962.
Plato. Republic. Trans. G.M.A. Grube and C.D.C. Reeve. Indianapolis, IN: Hackett, 1992.
Rhee, Jennifer. “Beyond the Uncanny Valley: Masahiro Mori and Philip K. Dick’s Do Androids Dream of Electric Sheep?” Configurations 21.3 (Fall 2013): 301-329.
Robinson, Kim Stanley. The Novels of Philip K. Dick. Ann Arbor, MI: UMI Research Press, 1984.
Rosa, Jorge Martins. “A Misreading Gone Too Far? Baudrillard Meets Philip K. Dick.” Science Fiction Studies 35.1 (Mar. 2008): 60-71.
Sahlins, Marshall. “The Original Political Society.” Hau: Journal of Ethnographic Theory 7.2 (2017): 91–128.
Sammon, Paul. “The Making of Blade Runner.” Cinefantastique 12.5-6 (July/August 1982): 20-47.
Shaddox, Karl. “Is Nothing Sacred?: Testing for Human in Philip K. Dick’s Do Androids Dream of Electric Sheep?” Studies in American Culture 34.1 (Oct. 2011): 23-50.
Skweres, Artur. “Philip K. Dick: One Man’s Illusion Might Invade the Reality of Others.” Crossroads in Literature and Culture. Eds. Jacek Fabiszak, Ewa Urbaniak-Rybicka, Bartosz Wolski. Heidelberg, Germany: Springer, 2013. 257-267.
Todorov, Tzvetan. Mikhail Bakhtin: The Dialogical Principle. Trans. Wlad Godzich. University of Minnesota Press, 1984.
Tomasello, Michael. Becoming Human: A Theory of Human Ontogeny. Cambridge, MA: Harvard University Press, 2019.
— — —. The Cultural Origins of Human Cognition. Cambridge, MA: Harvard University Press, 1999.
— — —. A Natural History of Human Thinking. Cambridge, MA: Harvard University Press, 2014.
Townsend, Chris. “Nietzsche’s Horse.” BLARB: Blog//Los Angeles Review of Books (04/25/2017). https://blog.lareviewofbooks.org/essays/nietzsches-horse/.
Vint, Sherryl. “Speciesism and Species Being in Do Androids Dream of Electric Sheep?” Mosaic: An Interdisciplinary Critical Journal 40.1 (March 2007): 111-126.
Viskovic, Richard. “The Rise and Fall of Wilbur Mercer.” Extrapolation 54, no. 2 (2013): 163-182.
Warrick, Patricia. Mind in Motion: The Fiction of Philip K. Dick. Carbondale: Southern Illinois UP, 1987.
 Jean Baudrillard, Simulacra and Simulation, trans. Sheila Faria Glaser (Ann Arbor: University of Michigan Press, 1994), pp. 1-42, 121-127. See also Jorge Martins Rosa, “A Misreading Gone Too Far? Baudrillard Meets Philip K. Dick,” Science Fiction Studies 35.1 (Mar. 2008), pp. 60-71. For a stimulating discussion of the problem of reality in Philip K. Dick’s fiction, see John Huntington, “Philip K. Dick: Authenticity and Insincerity,” Science Fiction Studies 15 (1988), pp. 152-160. For a political reading of the problem of reality, see Peter Fitting, “Reality as Ideological Construct: A Reading of Five Novels by Philip K. Dick,” Science Fiction Studies 10 (1983), pp. 219-236.
 Patricia Warrick comments, “in Dick’s use of the mechanical double to mirror man’s fragmentation as he adulates reason and ignores the intuitive self, he has made a major contribution to the literature of the Doppelgänger” in Mind in Motion: The Fiction of Philip K. Dick (Carbondale: Southern Illinois UP, 1987), p. 130. Ian F. Roberts further develops Warrick’s point in “Olympia’s Daughters: E. T. A. Hoffman and Philip K. Dick” in Science Fiction Studies 37, no. 1 (March 2010), pp. 150-153. Sherryl Vint claims that the novel’s androids challenge the humans with the possibility that “they already are android-like, so long as they define their subjectivity based on the logical, rational, calculating part of the human being” (112) in “Speciesism and Species Being in Do Androids Dream of Electric Sheep,” Mosaic: An Interdisciplinary Critical Journal 40.1 (March 2007), pp. 111-126. Evolutionary theory, however, makes clear that the still-popular idea that emotion and reason are necessarily opposed is actually a false dichotomy. In any case, the androids are not any more rational than the humans in the novel. I disagree with these critics that the androids reflect humans’ supposed adulation of reason. This essay presents a rather different reading of the androids as doubles in terms of mimesis.
 Plato, Republic, Book X, trans. G.M.A. Grube and C.D.C. Reeve (Indianapolis, IN: Hackett, 1992), pp. 264-292.
 On the mimetic basis of symbolic signs, see Eric Gans, Signs of Paradox: Irony, Resentment, and Other Mimetic Structures (Stanford: Stanford University Press, 1997), p. 21; and Terrence W. Deacon, The Symbolic Species: The Co-evolution of Language and the Brain, Chap. Three, “Symbols Aren’t Simple” (New York: Norton, 1997), pp. 69-101. Merlin Donald also argues that culture and language emerged from the mimetic abilities of our primate ancestors: “Imitation and Mimesis,” Perspectives on Imitation: From Neuroscience to Social Science Vol. 2, eds. Susan Hurley and Nick Chater (Cambridge MA: MIT Press, 2005), pp. 283-300.
 Social psychologist Ap Dijksterhuis notes that humans tend to mimic each other unconsciously. Researchers have observed babies only a few hours old imitating the facial expressions of adults. Research on mirror neurons demonstrates that mimetic tendencies are rooted in our neural system. Dijksterhuis concludes, “We are wired to imitate” (209): “Why We Are Social Animals: The High Road to Imitation as Social Glue,” Perspectives on Imitation: From Neuroscience to Social Science Vol. 2, eds. Susan Hurley and Nick Chater (Cambridge, MA: MIT Press, 2005), pp. 207-220. On the convergence of Girard’s mimetic theory and empirical research on mirror neurons and imitation, see Scott R. Garrels, “Imitation, Mirror Neurons, and Mimetic Desire: Convergence Between the Mimetic Theory of René Girard and Empirical Research on Imitation,” Contagion: Journal of Violence, Mimesis, and Culture 12-13 (2005-2006), pp. 47-86.
 Marshall Sahlins argues that, in aboriginal tribal societies all over the world, animism is not just a “religion” in the narrow sense but constitutive of tribal peoples’ experience of the world: “In the animist cosmos, animals and plants, beings and things may all appear as intentional subjects and persons, capable of will, intention, and agency. The primacy of physical causation is replaced by intentional causation and social agency” (Kaj Århem, qtd. in Sahlins 101): Marshall Sahlins, “The Original Political Society,” Hau: Journal of Ethnographic Theory 7.2 (2017), pp. 91–128.
 E.T.A. Hoffmann, “The Sandman,” Tales of Hoffmann, trans. R. J. Hollingdale (London: Penguin, 1982), pp. 85-126.
 For an introduction to Girard’s ideas, see René Girard, The Girard Reader, ed. James G. Williams (New York: Crossroad Publishing, 1996).
 On the originary hypothesis, see Eric Gans, Originary Thinking: Elements of Generative Anthropology (Stanford: Stanford University Press, 1993), pp. 1-44.
 From Marxism and the Philosophy of Language, qtd. by Tzvetan Todorov, Mikhail Bakhtin: The Dialogical Principle, trans. Wlad Godzich (University of Minnesota Press, 1984), p. 33.
 In a similar vein, Sherryl Vint writes, “Deckard does not know how to interact with animals as anything other than commodities” (121). But the only times when Deckard feels any real joy are when he is interacting with his goat or the toad that he finds (which he thinks is alive). His joy is unconnected with the price or supposed status conferred. He is concerned with the price of an animal only when he is looking to buy, because he is relatively poor and live animals are very expensive. There’s no warrant for concluding that animals are only status commodities, for him or anyone else in the novel.
 When Deckard tests Luba Lyft later in the novel, he says he will show her some pictures and ask her several questions (93). But in the narrations of him testing Luba as well as Rachael, Resch, and himself there is no mention of him showing any pictures.
 Much later in the novel, Deckard calls the androids’ inability to experience fusion (and hence empathy) “a deliberately built-in defect” (170), which would seem to answer the question he asks himself here. Whether the androids lack empathy by design or by accident is not immediately relevant here. There are many plot inconsistencies in this novel. Minor plot gaps are not a serious problem for the science fiction genre, and the plot holes in Do Androids Dream are not such as to bother a casual reader. We know that Dick wrote his novels in a hurry. I assume that these inconsistencies did not bother the author when he was writing the novel and that perhaps he did not even notice them (initially) in his own story. Therefore, in most cases, we can safely ignore them and analyze individual passages in terms of the immediate surrounding context without trying to reconcile them with later passages that seem to contradict them.
 The novel is written in the third-person, but it’s a very close third-person, such that the narrator’s perspective is identical to Deckard’s, at least in the episodes that represent Deckard directly. In the quoted passage, we don’t find the narrator critiquing Deckard, but Deckard rather ironically critiquing himself—ironic because the critique is not taken very seriously. Deckard is aware that his moral justification for killing androids is weak, but he’s not particularly concerned at this point in the novel. See page 116 of the novel for Deckard’s comments on his conscience.
 See Markus Müller on how making metaphors literal can work to reveal the violence that representation functions to defer: “Reconsidering the Fantastic: An Anthropological Approach,” Anthropoetics: The Journal of Generative Anthropology 2.2 (Fall/Winter 1996).
 Artur Skweres traces some influences from literature and Dick’s personal life on this theme in the novels: “Philip K. Dick: One Man’s Illusion Might Invade the Reality of Others,” in Crossroads in Literature and Culture, eds. Jacek Fabiszak, Ewa Urbaniak-Rybicka, Bartosz Wolski (Heidelberg, Germany: Springer, 2013), pp. 257-267. His account is interesting and helpful, but he is unaware of the crucial importance of the shared scene of attention.
 We don’t know what exactly the androids’ programming consists of (much less how they are programmed), but Rachael suggests they are programmed for obedience (42), and Deckard says they are designed to lack empathy (170). The existence of many “subtypes” of androids suggests that their programming includes facility for specialized tasks (17). In any case, the novel makes a clear distinction between androids with implanted memories (of which Rachael is the only example in the novel) and androids who do not have implanted memories.
 The novel makes explicit the idea that humans are determined by their genetic heritage in Deckard’s speculations about the evolutionary basis of empathy. Likewise, Deckard believes that androids tend to resign in the face of death because they lack the “two billion years of the pressure to live and evolve hagriding it” to which a “genuine organism” is subject (184). A related example is when Rachael insists on having sex with Deckard because “We androids can’t control our physical, sensual passions” (180).
 Gans defines the human as the species for whom the main danger to its existence is intraspecies conflict; our ability to cooperate is the necessary antidote to our tendency to conflict (Originary Thinking, 2). For Tomasello, the ability for shared attention evolves from the need to cooperate, but he doesn’t recognize the threat of conflict as the source of this need (A Natural History of Human Thinking, 4).
 Decety writes, “In humans and other mammals, an impulse to care for offspring is almost certainly genetically hardwired. It is far less clear that an impulse to care for siblings, more remote kin, and similar nonkin is genetically hardwired” in “A Social Cognitive Neuroscience Model of Human Empathy,” Social Neuroscience: Integrating Biological and Psychological Explanations of Social Behavior, eds. Eddie Harmon-Jones and Piotr Winkielman (New York: Guilford Press, 2007), p. 270; citing C.D. Batson, “Folly Bridges,” in Bridging Social Psychology, ed. P.A.M. Lange, (Mahwah, NJ: Erlbaum, 2006), p. 61.
 This anecdote may be apocryphal, perhaps derived from an incident in Dostoevsky’s fiction. See Chris Townsend, “Nietzsche’s Horse” (04/25/2017), BLARB, Blog//Los Angeles Review of Books, https://blog.lareviewofbooks.org/essays/nietzsches-horse/. In any case, the story is still illustrative.
 See Jean Decety on this point, “A Social Cognitive Neuroscience Model of Human Empathy,” 246-270.
 Another example is when Roy and Pris suggest that they killed their masters in order to escape (151). But many people would consider this justified to escape enslavement. Who would refrain from killing a kidnapper, if such was the only way to free a family member? By the same token, Polokov’s attempts to kill Holden and then Deckard are acts of pre-emptive self-defense.
 On Rousseauian pity, see Eric Gans, “Language Origin in History V: Rousseau’s Prelinguistic Pity,” Chronicles of Love and Resentment, No. 185 (Saturday, October 16th, 1999), http://anthropoetics.ucla.edu/views/vw185/.
 At one point, Deckard tries to recuperate the value of the test by calling Rachael a “schizoid girl” (52), but that claim is a non sequitur with no support anywhere in the novel.
 I won’t try to reconcile Deckard’s anti-institutionalism with the Rosen Association’s attempts to protect renegade androids, but it certainly complicates the familiar strain of anti-capitalist critique in Dick criticism.
 Girard’s first book on mimetic desire was published in English (Deceit, Desire, and the Novel, 1966) a few years before Dick’s Do Androids Dream, although there is no evidence that they had any knowledge of each other.
 Compare this to Christopher Palmer, who writes, “androids threaten humans by reducing them to the mechanical,” but “the threat is paradoxical. Humans, notably Deckard, are in danger of becoming mechanical in their efforts to prevent androids from becoming human”; in his Philip K. Dick: Exhilaration and the Terror of the Postmodern (Liverpool: Liverpool University Press, 2003), 225. Palmer addresses Do Androids Dream only in passing, and he views the novel in terms of postmodernism, so that humans are in danger of “becoming mechanical” (my emphasis) as a result of technological developments. Postmodernism is a relevant context, but this article takes a different approach by exploring the complex interactions of the “mechanical,” anthropological, and psychological senses of mimesis in the novel.
 The movie version of the novel, Blade Runner, expands on the drama of Rachael’s discovery to great effect.
 See Andrew Bartlett on this point regarding the replicants in Blade Runner (and other fictional creatures) in his Mad Scientist, Impossible Human: An Essay in Generative Anthropology (Aurora, CO: Davies Group, 2014).
 Dick said he was initially motivated by reading the diaries of Nazi prison guards, one of whom wrote, “We are kept awake at night by the cries of starving children” (qtd. in Sammon 23). He wondered how to explain a human being who could write such a sentence. For Dick’s remarks, see Paul Sammon’s article, “The Making of Blade Runner,” Cinefantastique 12.5-6 (July/August, 1982), pp. 20-47. As it happened, however, the representation of the androids in Do Androids Dream is more complex than his comments would suggest. He understood on some level that to represent the androids as completely lacking in humanity would be to make the same mistake as the Nazis regarding Jews.
 I am in agreement with Karl Shaddox on this point, who writes, “Isidore is a kind of holy idiot in the novel, a contrast in innocence and naiveté to the mercenary intelligence of the bounty hunters” (39): “Is Nothing Sacred?: Testing for Human in Philip K. Dick’s Do Androids Dream of Electric Sheep?” Studies in American Culture 34.1 (Oct, 2011), pp. 23-50.
 Andrew M. Butler briefly discusses Mercer as a Christ figure, pointing out some of the parallels: “Chapter 3: Philip K. Dick, Do Androids Dream of Electric Sheep?” in The Popular and the Canonical: Debating Twentieth-Century Literature 1940–2000, ed. David Johnson (The Open University–Routledge, 2005), p. 140. Richard Viskovic also finds some parallels between Mercer and Christ: “The Rise and Fall of Wilbur Mercer,” Extrapolation 54, no. 2 (2013), pp. 167-8.
 It’s possible that Hannibal Sloat, who is represented as a skilled technician, was involved in the filming of Mercer’s ascent in a TV studio with Al Jarry and/or the fabrication of empathy boxes.
 The supposed exposé of Mercer can be compared to the offended reaction of the people of Nazareth to Jesus’s display of “wisdom and mighty works”: “Is not this the carpenter’s son?” (Matthew 13:55, KJV).
 Irmgard’s description of empathy as “this shared, group thing,” which the “Mercer experience” serves to prove, seems directed to the experience of fusion with the empathy box, which indeed the novel tells us androids can’t experience. Fusion involves several people literally sharing feelings. But empathy in its usual sense (even in the novel) is a “shared, group thing” only imaginatively, not literally, and the “group” might only be two people. So it seems possible that the androids don’t understand the normal usage of the word “empathy,” which would explain why they think they don’t have it.
 For the claim that Isidore goes “berserk,” see N. Katherine Hayles, How We Became Posthuman: Virtual Bodies in Cybernetics, Literature, and Informatics (Chicago: University of Chicago Press, 1999), p. 174.
 I’m indebted to Jorge Martins Rosa on this point, who writes, “In Philip K. Dick’s novels, the explanation of the phenomenon oscillates between psychological and ontological grounds” (64).
 Isidore recounts that as a child Mercer loved animals, and he was born with a genetic mutation that enabled him to reverse time and bring dead animals back to life. But “Local law prohibited the time-reversal faculty by which the dead returned to life; they [law enforcement] had spelled it out to him during his sixteenth year.” Soon after, the “killers . . . bombarded his brain . . . with radioactive cobalt” to destroy his ability to reanimate the dead, sending him into the “tomb world” (23). The “killers,” then, were originally government agents enforcing the law, although they have a mythic, cosmic character during fusion.
 Cf. Kim Stanley Robinson, who writes, “Certainly the novel is contradictory in its depiction of the androids, and just as certainly this is deliberate on Dick’s part” (91). For Robinson, the contradictory representations reflect back onto the humans, and are ultimately directed to the question of what it means to be human. Robinson concludes that the novel succeeds in “unravelling our easy biological definition of humanity, and replacing it with a difficult spiritual and moral definition” (92). But the “moral definition” of the human Robinson finds remains without any anthropological grounding or insight into the role of mimesis. See Kim Stanley Robinson, The Novels of Philip K. Dick (Ann Arbor, MI: UMI Research Press, 1984).
 A related example is when Roy Baty “dourly” yet with “unexpected warmth” delivers the news of their fellow androids’ deaths to Pris, who is deeply distressed at the news (143). Baty’s incongruous enthusiasm could be considered an example of the androids’ lack of concern for each other. But we should bear in mind that Baty is actively working to save his fellow androids; in addition, he seems to enjoy the challenge of the situation. Baty’s perverse response is precisely that, an instance of inappropriate (yet honest) affect, rather than an example of indifference to the fate of his fellows.
 Deckard observes that the androids do in fact dream; they escape their masters and come to earth for “A better life, without servitude. Like Luba Lyft; singing Don Giovanni and Le Nozze instead of toiling across the face of a barren rock-strewn field” (169).