“Are you a good person?”


Robots, Humanity, and Morality in Ex Machina

Logic would indicate that if the same story were constantly recycled, audiences would tire of experiencing the same tale over and over again. When it comes to stories exhibiting the Frankenstein complex, however, this is not the case. Literary scholar Lisa Zunshine argues this is because when we come across a character who walks the line between being a human and being an artifact, our sense of that character’s ontology is pulled in two different directions. The uncertainty viewers feel toward the character’s categorization is what makes these stories initially appealing. Even so, Zunshine maintains that viewers eventually try their best to resolve this confusion before the story ends by placing the hybrid character as either a living being or an artifact (79). While it is true that we are drawn to these stories because they create cognitive uncertainty by playing with the ontological categories foundational to our thinking, the audience is not always able to resolve the hybrid’s ontological ambiguity. Alex Garland’s Ex Machina (2015) directly contests Zunshine’s stance by delivering a resolution in which the main character’s hybridity is difficult to resolve. In this film, the humanoid robot Ava possesses unquestionable artificial intelligence, but because viewers tend to equate humanity with morality, and Ava commits immoral actions, audiences find it difficult to grant Ava her humanity. This occurs because of what Zunshine calls our tendency to essentialize other entities. She defines essentialism as “the way we perceive various entities in the world and not necessarily the way they really are” (6). In the case of Ex Machina, because viewers regard morality as an essential part of being human, when an ambiguous character such as Ava displays a lack of morality, her ontological status becomes difficult to resolve.

The premise of Ex Machina is that Nathan, the creator of Ava, enlists the help of Caleb, a programmer at Nathan’s company, Bluebook. It is eventually revealed that Caleb is to determine the validity of Ava’s artificial intelligence by conducting a Turing test. Notably, we see Caleb jump from a hyper-modern office to a secluded home surrounded by outdoor greenery. This juxtaposition of nature with extreme technological advancement forces audiences to think about the ontological categories available to them. Zunshine states that viewers tend to place entities into one of two such categories: human being or artifact. She defines an artifact as something “made to fulfill a certain function” (63). The problem is that when an artifact’s function is to display artificial intelligence, as is the case with Ava, the artifact straddles the line between the characteristics of a human being and those of an artifact. Zunshine deems such characters “counterontological” because of their resistance to categorization (67).

Although viewers of Ex Machina see Ava as a counterontological entity, they are reassured that their cognitive uncertainty will be resolved through the Turing test conducted on Ava. Essentially, viewers can take a back seat in discovering Ava’s categorization because they believe Caleb will do it for them. When Caleb begins the Turing test, he is shown tentatively walking into a glass box; a smashed area hints that perhaps earlier tests were conducted and turned violent. Ava first walks in to the sound of unthreatening background music, her inner robotic mechanisms on full display. Ava’s human face and robotic body externalize the ambiguity that both Caleb and viewers experience at this point in the film. At first, Ava’s non-threatening demeanor makes viewers more accepting of her, but later, when Caleb watches her on his screen as she instigates a power outage, she becomes more threatening: the lighting turns red, the music grows alarming, and the security system goes off. Moments like these, when viewers become uncomfortable with Ava and her technological abilities, make it difficult to grant her the category of human.

At the same time, viewers feel inclined to grant Ava her humanity because, for much of the Turing test, Ava appears to be more in control than Caleb, an actual human, is. In one of the early sessions she even circles him in the glass box where he sits, calling attention to how the balance of power in their relationship continually shifts. In another session, Caleb talks openly about his life, including the death of his parents. Ava’s ability to establish this intimacy leads viewers to believe that, like a human, she is perceptive of other people’s emotions. In yet another session, Ava reverses the roles completely: she throws questions at Caleb as though she is the one performing the Turing test on him. One of the questions she asks is whether or not he likes Nathan, and at this moment the shot pans to a camera pointing at the two of them. Another alarming power outage occurs and the camera points away. Ava seizes this opportunity to tell Caleb he should not trust Nathan, which pushes her toward the human ontological category, not only because she appears to care for Caleb but also because she shows she is capable of reaching her own opinions through her analytical abilities. However, Ava’s ontological category remains ambiguous because her intelligence is constantly pitted against Caleb’s. Her ability to interact intelligently with Caleb makes her appear human, yet because she is a humanoid robot, some of her capabilities are far superior to a human’s. One moment that stands out as an instance of Ava’s superior intelligence is when she acts as a lie detector. She asks Caleb, “Are you a good person?” and when he answers yes, she knows he is telling the truth. This creates ontological ambiguity because Ava is not only taking on the computational function of a lie detector; she is also personally concerned with Caleb’s morality and able to interpret his sincerity, which makes her appear human. At this point in the film, Ava herself seems to practice Zunshine’s essentialist thinking by considering morality one of the characteristics humans should possess, which makes her immoral actions at the end of the film all the more surprising.

The character in Ex Machina who is hands-down the most immoral is Nathan, a human. Various times throughout the film, Ava calls attention to Nathan’s manipulation of Caleb in order to convince Caleb that Nathan is a bad person. When told that he will be performing the Turing test on Ava, Caleb immediately realizes that this is not exactly the original Turing test, since he is explicitly told that Ava is a humanoid robot, but he fails to realize all the ways that Nathan is manipulating the test. In actuality, Nathan is conducting a Turing test on both Ava and Caleb by seeing whether Ava can outwit Caleb. This eerie feeling of being watched is established from the moment Caleb first steps into Nathan’s living room, where the camera angle comes from above and Caleb looks up into the corner of the gray, modern-styled room. In this moment Caleb meets the viewer’s gaze, and it is as though the audience has been placed in the position of a surveillance camera. Upon first meeting Caleb, Nathan immediately feels the need to tell Caleb how he is supposed to be feeling: “You’re freaked out to be meeting me.” He then makes Caleb sign a nondisclosure agreement, telling him, “When you discover what you’ve missed out on in a year, you’re gonna regret it for the rest of your life.” Even from the beginning of the movie, Nathan’s actions show his manipulation of Caleb. The scene in which Nathan pressures Caleb into signing the contract is framed with Nathan literally talking down to him: Caleb sits in a chair while Nathan perches on the table right in front of him, telling him what to do. In a way, Nathan could not have faith in his own creation; he felt the need to manipulate the test because he was eager to see someone other than himself accept Ava’s artificial intelligence.

Nathan’s treatment of the other characters in Ex Machina perpetuates the audience’s cognitive uncertainty because Nathan seems to view everyone else as an artifact. Although Nathan does not tell Caleb that Kyoko is a humanoid robot, he still talks about her as though she is less than human. When Kyoko very deliberately walks into Caleb’s room to wake him up, Nathan, as her creator, places her in the artifact ontological category: “She’s some alarm clock, huh? Gets you right up in the morning.” Nathan creepily sees everyone in the movie as an artifact that serves his purposes. Toward the end of Caleb’s week at Nathan’s house, once he realizes Kyoko is a robot, Caleb becomes confused about his own ontological category. He tries to peel his own skin off in front of the mirror and slits his own arm. The blood flowing from his arm, combined with the threatening, drum-heavy music, makes the audience concerned about Caleb’s state of mind. He goes on to smear his blood on the mirror and then punch his own image. Eventually, Caleb discovers that he himself is just a chess piece in Nathan’s plans. When Nathan reveals the possibility that Ava has been faking her emotions toward Caleb, Caleb responds, “So my only function was to be someone she could use to escape.” Sadly, Caleb finally realizes his status as an artifact, which makes it even more difficult for viewers to resolve their cognitive uncertainty regarding Ava, since the audience had been counting on Caleb to resolve it for them. After all, if Caleb questions his own ontological categorization, how is the audience supposed to trust his judgment about Ava’s? In addition, the audience now questions the usefulness of the ontological categories available to them, since the obvious human in the film no longer seems human.

At this point in the film, where even Caleb, a human, is looked upon as an artifact, one of the only ways to resolve the ontological ambiguity is for the audience to rely on what they believe to be the essence of humans. Zunshine notes that “essences of natural kinds seem elusive,” but even so, the possession of morality seems to be one of the core traits associated with the essence of a human (7). In fact, even the dictionary definition of humanity implies this association. Merriam-Webster defines humanity, of course, as “the quality or state of being human,” but its second definition is “the quality or state of being kind to other people or to animals.” In “Why It’s Bad to Be Bad,” Paul Bloomfield discusses why morality is associated with humanity: “The harm of being immoral is that it keeps one from seeing the value of human life, and if one is human, then one is kept from seeing the value of one’s own life” (16). Essentially, Bloomfield makes the point that immorality conflicts with humanity because being immoral shows not only a lack of respect toward other humans but also an ironic lack of respect for the defining category of one’s own ontology.

In making this argument, Bloomfield is obviously not saying that humans are incapable of being immoral. Nathan’s orchestration and manipulation of the entire Turing test, and his objectification of all the other characters in Ex Machina, show that humans themselves are capable of exhibiting selfish traits. Yet when the possibility arises that Ava has selfishly used Caleb to escape, viewers find it difficult to categorize her as human; because she is not actually a human, the audience is inclined to hold her to a higher standard. However, just as Nathan acted immorally to serve his own purposes, Ava’s justification was that she no longer wanted to be confined to Nathan’s secluded home. Her inherent need to go outside is made clear throughout the film: one of the first discernible pictures she draws is of outdoor scenery, and when Caleb asks her what she would do in the outside world, she says she would go to a busy traffic intersection in order to observe human life. She also adds, “We could go together” and “I’d like us to go on a date.” While these lines clear up Ava’s motivations, they also cause the audience to question whether she was flirting with Caleb with the ulterior motive of escaping, especially since later, after Ava kills Nathan, shots of her walking out to her freedom are intercut with shots of Caleb repeatedly banging on the glass walls that trap him, walls that incidentally mirror the glass box he was stuck in when he first interviewed Ava.

Ava’s abandonment of Caleb makes it difficult for audiences to view her as human because he was her ally in the film. Even though Ava also technically acts immorally by killing Nathan, viewers deem this action forgivable because he was her oppressor. But why would she abandon someone who helped her escape? When Ava finally reaches the stairs to leave Nathan’s house, she shows emotion by smiling in the moment she at last gains her freedom. The triumph of Ava’s success, juxtaposed with Caleb’s heartbreaking abandonment, leaves audiences conflicted as to whether they can grant Ava her humanity, because they sympathize with both Caleb and Ava. Down to the very last moments of the film, the viewer’s cognitive uncertainty remains unresolved. Zunshine is right that Frankenstein complex stories are initially appealing because of the uncertainty viewers feel toward a character’s categorization. However, Alex Garland’s Ex Machina shows that, contrary to what Zunshine says, viewers may not always be able to resolve their ontological uncertainty.

 

Works Cited

Bloomfield, Paul. “Why It’s Bad to Be Bad.” Morality and Self-Interest (2007): 251-71. Web.

“Humanity.” Merriam-Webster.com. Merriam-Webster, n.d. Web. 17 Dec. 2015.

Zunshine, Lisa. Strange Concepts and the Stories They Make Possible: Cognition, Culture, Narrative. Baltimore: Johns Hopkins UP, 2008. Print.
