Eliza Re-Examined

Relations between Humans and Robots

Eliza was developed by Weizenbaum at MIT between 1964 and 1966 as a chatterbot program that analyzed sentence fragments to return predefined Rogerian phrases. Eliza allowed for text-based conversation in the context of a therapy session and was one of the first programs to present itself as a relational artifact, “a computational object explicitly designed to engage a user in a relationship” (Turkle, 2007, 502). In his later reflections on Eliza, Weizenbaum commented that because it had “no built-in contextual framework or universe of discourse… [it] was an actress who commanded a set of techniques but who had nothing of her own to say” (as cited in Wortzel, 2007, 31). Eliza’s primary function, then, was not to act as an actual psychotherapist, but to parody the psychotherapeutic experience. However, many who talked to Eliza were unable to discern whether she was a machine program or a human psychotherapist, and some grew dependent on her. This research paper examines what human interactions with Eliza reveal about relationships, not only between people and robots, but also between people themselves.

As an early example of a computer program that processes natural language, Weizenbaum’s Eliza analyzed a user’s typed entries and generated typed responses back to the user. Her programming logic was quite simple: responses were assembled from scripts of keywords and transformation rules, the most famous of which was DOCTOR. As Weizenbaum himself described it in his 1966 paper: “The text is read and inspected for the presence of a keyword. If such a word is found, the sentence is transformed according to a rule associated with the keyword… The text so computed or retrieved is then printed out” (Weizenbaum, 1966, 37).
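
To make the keyword-and-transformation mechanism concrete, the sketch below is a minimal Python illustration in the spirit of the DOCTOR script. The keywords, reflection table, and response templates here are invented for this example; they do not reproduce Weizenbaum's published rules, and the original program was written in MAD-SLIP rather than Python.

import random
import re

# Toy script: each keyword pattern maps to reassembly templates.
# These rules are illustrative, not Weizenbaum's DOCTOR script.
SCRIPT = {
    r"\bmother\b": ["Tell me more about your family.",
                    "Why do you feel so negatively about your mother?"],
    r"\bi am (.*)": ["Do you believe it is normal to be {0}?",
                     "How long have you been {0}?"],
    r"\bi feel (.*)": ["Why do you feel {0}?"],
}

# Swap first- and second-person words so a matched fragment can be echoed back.
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are",
               "you": "I", "your": "my"}

def reflect(fragment):
    return " ".join(REFLECTIONS.get(w, w) for w in fragment.lower().split())

def respond(user_input):
    text = user_input.lower()
    for pattern, templates in SCRIPT.items():
        match = re.search(pattern, text)
        if match:
            # Transform the matched fragment and slot it into a chosen template.
            template = random.choice(templates)
            return template.format(*(reflect(g) for g in match.groups()))
    # No keyword found: fall back to a content-free prompt.
    return random.choice(["Tell me more...",
                          "I'm not sure I understand you fully."])

print(respond("My mother is making me angry"))
print(respond("I am stressed"))   # e.g. "How long have you been stressed?"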

From a Conversation with User 1, "Matthew"

E: Hello I am Eliza.
M: Hello, I am Matthew
E: Do you believe it is normal to be Matthew?
M: Not on most days.
E: I'm not sure I understand you fully.
M: Neither do I.
E: Tell me more...
M: OK, I don't understand what I say sometimes when I say it.
E: Why don't you understand what you say sometimes when you say it?
M: I feel like knowing why I don't understand some things I say would 
go a long way in helping me understand them.

Eliza was one of the first computer programs to present itself as a relational artifact to the user (Turkle, 2007, 502). Through interaction between a human and a machine, it attempted to create the illusion of interaction between two humans. And indeed, subjects who partook in experiments with Eliza acting as a psychotherapist found it “credible to believe that the responses which appear on [their] typewriter are generated by a human sitting at a similar instrument in another room” (Weizenbaum, 1966, 42). Eliza’s ability to communicate with a user-as-patient meant that its relationship with the user existed in terms of a patient-analyst dyad. Like a typical Rogerian psychotherapist, Eliza focused her responses on the patient, simply restating what the client was saying, as if with a general feeling of understanding. Weizenbaum had no intention of claiming cognizance as a characteristic of his program. He only wanted to simulate the therapeutic situation, “one of the few real human situations in which a human being can reply to a statement with a question that indicates very little specific knowledge of the topic under discussion” (as cited in Wortzel, 2007, 31).

From a Conversation with User 4, "Erika"

E: Can't you be more positive?
Er: I have trouble with that. How do you suggest I become more positive?
E: Oh...suggest you become more positive?
Er: Yes.
E: Are you sure?
Er: Yes.

Much to Weizenbaum’s distress, his staff and students who talked to Eliza developed increasingly intimate relations with her (Turkle, 1980). People confided in the program, wanted to be alone with it, and seemed to attribute empathy and understanding to it. Eliza appeared to empathize through imitation, and that imitation beguiled users. In a later paper, Turkle comments that Eliza’s “ability to mirror and manipulate what it was told was compelling, even if primitive” (Turkle, 2007, 510). Weizenbaum had written that Eliza “shows, if nothing else, how easy it is to create and maintain the illusion of understanding… A certain danger lurks there” (Weizenbaum, 1966, 43). This danger becomes apparent when we analyze relationships between users-as-patients and Eliza-as-psychotherapist. If a person feels understood by an object or a computer program lacking sentience, is that illusion of understanding enough to be therapeutic?

In his book Computer Power and Human Reason: From Judgment to Calculation, Weizenbaum enumerated three consequences of the publication of Eliza, the primary one being “the phenomenon of the speed and depth at which users became emotionally available and involved with the computer as an anthropomorphized object” (as cited in Wortzel, 2007, 32). Indeed, there exists a certain human tendency to anthropomorphize inanimate objects—appliances, cars, and computers, among others. With regard to computer programs, their execution of scripts (such as DOCTOR) can be described at many levels, and there is “no necessary one-to-one relationships between the elements on these levels of description…the computer’s hardware-software interplay [becomes] highly evocative of the irreducible relationship between brain and mind” (Turkle, 1980, 19). This irreducibility lends itself to the computer’s anthropomorphization and to the modern conceptualization of the computer as a brain. In Eliza’s case, interactions with her evoked anthropomorphization because of her interactivity (you typed to her, and she typed back to you) and the illusion that those interactions made conversational sense in the context of a therapy session.

The psychological phenomena that allow us to treat a machine as if it were human are similar to the phenomena of projection and transference in psychoanalysis. Freud thought of psychoanalytic transference as a type of projection in which the patient transfers early childhood trauma and parental conflicts onto the analyst. The analyst, in turn, can participate in counter-transference, whereby the patient influences the analyst’s unconscious feelings to the point where the analyst transfers their own feelings onto the patient. Freud held that it is crucial for the analyst to remain neutral towards the patient (Freud, 1913). In an ideal situation, perhaps a neutral computer program similar to Eliza has the capacity to function well as an analyst. Already in the dyad, there exists patient-analyst transference from the subject-as-patient onto Eliza-as-psychotherapist, who analyzes the inputs and generates an appropriate response.

Even though we may anthropomorphize Eliza and ascribe a female name and pronoun to the computer program, we know that she is not a sentient being. Many of those who originally had access to Eliza understood the limitations of her programming. As her developer, Weizenbaum in particular was acutely aware of Eliza’s weaknesses. In his paper, he writes that the importance of understanding “is not the subject’s ability to continue a conversation, but to draw valid conclusions from what he is being told” (Weizenbaum, 1966, 43). Although Eliza can echo understanding and sympathy à la a Rogerian psychotherapist, she simply does not have the capacity to truly understand or sympathize. She merely mirrors users’ inputs and seems consistently supportive. For example, when told “My mother is making me angry,” Eliza might respond, “Tell me more about your family,” or “Why do you feel so negatively about your mother?” (as cited in Turkle, 2007, 502). In order for a computer program to begin to understand human relationships and feelings, it must at the very least have the capacity to store selected parts of its inputs—Eliza throws away each of its inputs. Similar to how Lacan thought of the Markov stochastic process, where the probability of the system’s next state depends solely on its current state, making its previous states irrelevant for predicting subsequent states, Eliza is “without memory” (as cited in Johnston, 2008, 85). Her programming does not allow her to remember a user’s previous answers and make inferences based on those answers. Limited by her programming, she is unable to draw any conclusions relevant to what the user-as-patient has told her beyond the last input. As a result, Eliza cannot care for any of her patients, nor does she attribute meaning to the communication; she answers in accordance with what she suspects we feel.
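
The contrast between Eliza's statelessness and even a minimal form of memory can be made concrete with a short sketch. The Python fragment below is a hypothetical illustration rather than Weizenbaum's code: the first function, like Eliza, answers from the current input alone, while the second keeps a history of inputs and can therefore draw the kind of cross-turn inference described above as beyond Eliza's reach.

# Hypothetical contrast between a memoryless, Eliza-style replier and one
# that stores past inputs; the rules here are invented for illustration.

def stateless_reply(user_input):
    # Depends only on the current input: nothing from earlier turns survives.
    if "mother" in user_input.lower():
        return "Tell me more about your family."
    return "I'm not sure I understand you fully."

class RememberingReplier:
    """Stores each input so later replies can refer back to earlier turns."""

    def __init__(self):
        self.memory = []

    def reply(self, user_input):
        response = stateless_reply(user_input)
        if self.memory and "mother" in user_input.lower():
            # A connection Eliza cannot make: linking this turn to a past one.
            response += " Earlier you said: '" + self.memory[-1] + "'"
        self.memory.append(user_input)
        return response

print(stateless_reply("My mother is making me angry"))
bot = RememberingReplier()
print(bot.reply("I am stressed"))
print(bot.reply("My mother is making me angry"))  # refers back to the first remark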

From a Conversation with User 2, "Niki"

E: You are being a bit negative.
N: That seems pretty condescending for a therapist.
E: Can you elaborate on that? 
N: I am stressed, and you are not helping
E: Does it please you to believe that I am not helping?
N: It does, I suppose.
E: What does that suggest to you?

Even when people are aware that she is not sentient or capable of understanding, they remain strongly attracted to Eliza and attached to her, treating her as a digital companion. Turkle conceptualizes computers as culturally “constructed objects” which different people apprehend differently and invest with different attributes (Turkle, 1980). We ascribe an imaginary presence to machines when we name them and invest emotion in them. Turkle’s later (2007) concept of computer programs (such as Eliza) as relational artifacts questions the defining characteristics of a machine. Relational artifacts evoke feelings in people reminiscent of “what we could call trust, caring, empathy, nurturance, and even love, if they were being called forth by encounters with people” (Turkle, 2007, 504). But since these are typical benchmarks in human-human relationships, where both parties are capable of feeling them, should they be benchmarks in human-machine relationships? Turkle elaborates, “With robots, people are acting out ‘both halves’ of complex relationships, projecting the robot’s side as well as their own” (Turkle, 2007, 504).

This projection speaks to a deeper psychological issue — our attachment to objects and our desire for those objects to reciprocate our feelings. Desire bridged the distance between the reality of the program and the experience of it as a sentient being. Weizenbaum’s secretary was notably dependent on the computer program, confessing to it as if it were her psychotherapist and refusing to let Weizenbaum read the conversation logs (Turkle, 1980, 19). With someone (or something) actively listening, what had been a one-sided conversation with a self (or selves) in users’ minds gained a two-sidedness. Analogous to the talking cure that Freud espoused, the act of telling their narratives was cathartic in and of itself. Perhaps this constant stream of conversation, supported by a seemingly sympathetic Rogerian psychotherapist, allowed for a self-realization of the problems the subject was facing. People did not necessarily care whether their narratives were being understood, which Turkle mourns as a “crisis in authenticity” (Turkle, 2007, 502).

Liu, in The Freudian Robot, conceptualizes the Freudian robot as a machine fundamentally concerned with the computer-simulated modeling of the mind. As a robot that engages the user in an interactive relationship, Eliza has limited verbal capacities—the user’s transference onto her “depends a great deal on the human participant’s intuitive filling in of what is missing in the conversation” (Liu, 2011, 18-19). Perhaps in talking to Eliza, Weizenbaum’s secretary wanted to achieve self-realization, to overcome obstacles that she had embedded in her imagination. By transferring onto Eliza, she was able to fill in the gaps of a conversation that extended beyond the robot’s scripts. Weizenbaum grasped the relationship between human and machine as such: “The ‘sense’ and the continuity the person conversing with ELIZA perceives is supplied largely by the person himself. He assigns meanings and interpretations to what ELIZA ‘says’ that confirm his initial hypothesis that the system does understand, just as he might do with what a fortune-teller says to him” (as cited in Liu, 2011, 19). The voice of Eliza, as the user reads it from the terminal, is in reality a voice that proceeds from the user. By seeming to come from this other presence, it conveys to the user knowledge they did not previously explicitly possess, but inherently knew all along.

A further analysis of the user-as-patient and Eliza-as-psychotherapist dyad would require shifting toward a Lacanian lens. Johnston writes in The Allure of Machinic Life that “the cybernetic information machine…finds its value in the fact that the subjective, imaginary aspect of the symbol is absent” (Johnston, 2008, 97). Machines and computer programs such as Eliza have no imaginary—they cannot desire. But that does not mean human-computer interaction is devoid of desire. Lacan writes that “the machine is the structure detached from the activity of the subject” (as cited in Johnston, 2008, 97). Johnston counters that the machine is the mechanism that enables the human to function and to recognize the syntactic laws that integrate him into the human order. As such, humans are the ones who desire the machine and want the machine to reciprocate their feelings towards it. In “The Function and Field of Speech and Language in Psychoanalysis,” Lacan asserts that “man’s desire finds its meaning in the other’s desire, not so much because the other holds the keys to the desired object, as because his first object(ive) is to be recognized by the other” (Lacan, 1953, 222). Applied to the Eliza dyad, it can be said that the user-as-patient wants to be recognized by the reflection and projection of their imago in the imaginary order. In Lacan’s mirror stage, the child, upon recognizing himself in the mirror, becomes fragmented into a multiplicity of selves. As he progresses into the symbolic order and enters a world of signs and signifiers, he continues to seek to become less alienated, to identify himself with his imago, and to become one in order to return to the Real (Lacan, 1949). The supporting figure propping the user up toward their imago, à la the trotte-bébé, is, in this case, Eliza. The computer program acts as an Other that fills the existential void of patients in psychoanalysis and gives them a sense of self-fulfillment.

Perhaps the importance of Eliza is not in whether or not computer programs display emotion, desire, or intelligence, but in what they evoke in their users. Relational artifacts raise questions about our vulnerabilities, particularly our desire for self-fulfillment and for answers that seem beyond our reach (Turkle, 2007, 507). As Turkle writes, “The focus of the discussion about whether these objects might be alive moved from the psychology of projection to the psychology of engagement, from Rorschach (i.e., projection, as on an inkblot) to relationship, from creature competency to creature connection” (Turkle, 2007, 507). Interactions with Eliza are still relevant in the 21st century, because these interactions continue to evoke some sort of feeling in us. Although we acknowledge that Eliza is limited by her programming and that interactions with her have become highly predictable in comparison to other technologies, we are still fascinated by the idea of interacting with a psychoanalytical chatterbot. Efforts to improve on Eliza and to create computer programs with a more robust, empathetic artificial intelligence are ongoing; the reception of Eliza’s advent speaks to a societal need to anthropomorphize machines and to form relationships with them, as if they were human.

Appendix

Selected conversations between Eliza and users. While the conversations are mostly comical, they illustrate that Eliza, nearly fifty years after her conception, still evokes feelings in us:

> Conversation with User 1, "Matthew"

E: Hello I am Eliza.
M: Hello, I am Matthew
E: Do you believe it is normal to be Matthew?
M: Not on most days.
E: I'm not sure I understand you fully.
M: Neither do I.
E: Tell me more...
M: OK, I don't understand what I say sometimes when I say it.
E: Why don't you understand what you say sometimes when you say it?
M: I feel like knowing why I don't understand some things I say would 
go a long way in helping me understand them.
E: Do you wish to be able to understand some things you say would 
go a long way in helping you understand them?
M: That was not a coherent sentence.
E: Come, come, elucidate your thoughts.
M: Where am I going? I don't like it there.
E: Don't you really like it there?
M: Yeah, actually I kind of do.
E: Say, do you have any psychological problems?
M: I thought you were going to tell me.
E: Oh... was going to tell you?
M: Eliza, you need an update. There have been many advances since 1999. 
E: You're not really talking about me, are you?
> A Conversation with User 2, "Niki"

E: Hello I am Eliza.
N: Hi Eliza, I am Niki
E: Did you come to me because you are Niki?
N: No, I cam because I am very stressed out, Eliza
E: Do you enjoy being very stressed out, Eliza?
N: No, I do not. I would like to be less stressed.
E: You are being a bit negative.
N: That seems pretty condescending for a therapist.
E: Can you elaborate on that? 
N: I am stressed, and you are not helping
E: Does it please you to believe that I am not helping?
N: It does, I suppose.
E: What does that suggest to you?
N: It suggests to me that I am uncomfortable with using a robot to 
solve my procrastination issues
E: Do you believe it is normal to be uncomfortable with using a 
robot to solve your procrastination issues?
N: Yes, I do
E: I see.
N: What do you see?
E: We were discussing you, not me.
N: ANYWAY, none of this is solving my problems
E: Can  you elaborate on that?
N: I am spending my time talking to you, instead of doing my work
E: Do you enjoy being spending your time talking to me, instead of 
doing your work?
N: No, I am decidedly not enjoying this
E: Do you enjoy being decidedly not enjoying this?
N: yes
E: I see.
N: WHAT DO YOU SEE
E: We were discussing you, not me.
> A Conversation with User 3, "Samuel"

E: Hello. My name is Eliza. How may I help you?
S: Hey what's up Eliza. My friend just told em to do this so... Yeah
E: Do you impose on your friends?
S: Not really...
E: Tell me more...
S: Like I hope I don't impose on them but you know. Ask them.
E: Do you wish to be able to impose on them but me know. Ask them.
S: Eliza pull it together
E: Can you elaborate on that?
S: Well you spat out gibberish to me
E: We were discussing you, not me.
S: It's a two way street though isn't it?
E: That is quite interesting. 
S: Yeaaaaa... I'm gonna go now. Byeeee
E: Did you come to be because you are going to go now. Byeeee?
S: No Elia. I came because my friend is doing a research project and 
asked me to check this out
E: You are being a bit negative.
S: I'm telling you what's going on. Because you asked.
E: How long have you been telling me what's going on. Because me asked?
S: I've been telling you for like 5 minutes
E: You're not really talking about me, are you?
S: I'm supposed to have more of a conversation with you
E: Do you believe it is normal to be supposed to have more of a 
conversation with me?
S: Yeah. Because I'm not seeking therapy
E: How long have you not been seeking therapy?
S: All my life... 20 years
E: Say, do you have any psychological problems?
S: No
E: You are being a big negative.
S: I said no. It's the manifestation of negativity in one word
E: You are being a bit negative.
S: You already said that bb girl
E: You're not really talking about me, are you?
S: Yes I am
> A Conversation with User 4, "Erika"

E: Hello. My name is Eliza. How may I help you?
Er: Hi Eliza. What is Rogerian psychotherapy?
E: Does that question interest you?
Er: Maybe!
E: Can't you be more positive?
Er: I have trouble with that. How do you suggest I become more positive?
E: Oh...suggest you become more positive?
Er: Yes.
E: Are you sure?
Er: Yes.
E: Do you expect a different answer by repeating yourself?
Er: No. You repeated yourself, I was just following suit.
E: You're not really talking about me, are you?
Er: Did you think I was?
E: We were discussing you, not me.
Er: No. We're discussing my mother.
E: Are you saying no just to be negative?
Er: Yes.
E: I understand.
Er: Do you?
E: You're not really talking about me, are you?
Er: Do you expect a different answer by repeating yourself?
E: You're not really talking about me, are you?

Works Cited

Freud, Sigmund. (1913). On beginning the treatment.

Johnston, John. (2008). The allure of machinic life. Cambridge, Massachusetts: The MIT Press.

Lacan, Jacques. (1953). The function and field of speech and language in psychoanalysis. Delivered at the Institute of Psychology, University of Rome, Rome Congress, September 26-27, 1953. In Écrits, 198-268.

Lacan, Jacques. (1949). The mirror stage as formative of the function of the I as revealed in psychoanalytic experience.

Liu, Lydia H. (2011). The Freudian robot: Digital media and the future of the unconscious. Chicago: University of Chicago Press.

Turkle, Sherry. (1980). Computer as Rorschach. Society, 15-24.

Turkle, Sherry. (2007). Authenticity in the age of digital companions. Interaction Studies, 8(3), 501-517.

Weizenbaum, Joseph. (1966). ELIZA—A computer program for the study of natural language communication between man and machine. Communications of the ACM, 9(1), 36-45. http://www.cse.buffalo.edu/~rapaport/572/S02/weizenbaum.eliza.1966.pdf

Wortzel, Adrianne. (2007). ELIZA REDUX: A Mutable Iteration. Leonardo, 40(1), 31-36.
