I Dream of Selfish Robots

One Perspective on the Quest for Replicating Sentience, Selfhood, and Self-Awareness in Artificial Life

A robot in deep contemplation, sitting in "The Thinker" pose.
“The Self-ish Robot” (2022) by Sammy Tavassoli

The Initial Question

“01010100 01101000 01100101 00100000 01001001 01101110 01101001 01110100 01101001 01100001 01101100 00100000 01010001 01110101 01100101 01110011 01110100 01101001 01101111 01101110”

Today, I awoke to find myself desperate to ascertain the origins of self-awareness and program the hypothetical sentient robot of my childhood dreams, this paper being the sole output of my desperation. Full disclosure: I’m not an artificial intelligence (AI) engineer. I don’t have the 411 on how current AI scientists are engineering consciousness with bleeding-edge technology. Nevertheless, today, I’ve decided to adopt the disposition of an old Roman philosopher with too much time on their hands to spend self-reflecting and conceptualizing the origins of the universe. I’ve thought through a theory of sentience based on my personal empirical observations and undergraduate cognitive science education, which together may guide me to either brilliant or bafflingly inaccurate conclusions about how my neural systems are bustling away beneath my skin and dura mater.

Yes, today you will be accompanying me on a journey of raw, unadulterated conjecture that temporarily strays from the lab bench and computational experiments familiar to modern science research and turns toward everyday empiricism and imagination. I’ll detail the train of thought I’ve had over a concerningly long period of self-reflection, and you can follow along until the end, unless you hear something that is astoundingly objectionable to you. Is everyone on board?

So, we arrive at the main question of the day: how does one become conscious to the point of self-awareness, and how may a robot go about achieving the same?

One obvious answer is that to produce a fully sentient robot, we must trace our own consciousness’s development. Indeed, as human beings, we are, to our knowledge, the only extant model organism of self-awareness—a creature that understands itself to be a distinct, conscious self and thus may continuously amend and adapt that self. In this brief essay, then, I will first examine the origins of our complex cognitive functions through the lenses of developmental and evolutionary psychology. Then, I will entertain various conjectures about the connections between these cognitive processes and how they can amount to consciousness and self-awareness in both humans and robotkind. Finally, I will discuss the implications of my theory of selfhood in the realm of modern psychotherapy, perhaps something more relevant to you human readers. Indeed, once we come to an understanding of how to generate self-awareness, perhaps we can better understand ourselves and the multifaceted nature of the cognitive systems that dwell within us.

Additionally, I staunchly maintain that if we are to emulate human conscious awareness in our metalloid friends, we cannot let our quest be hindered by the bounds of human biology. Let us replace a need for biological plausibility with a need for psychological plausibility. By that, I mean, in our beautifully complex AI systems, we should identify and program in emulations of the basic parts and mechanisms of the cognitive states that contribute to human self-awareness, without having to emulate their precise biological forms. We should not attempt to replicate precise neuroanatomy and neural circuitry using some non-biological substrates; neurons be damned!

Neuroscientists and neuron-lovers, please do not be alarmed by the callousness with which I am advocating to ignore neurons and specific neural circuits. I’m aware that your first inclination for understanding humanity’s level of achieved sentience may be to look into the neuroimaging, neuron-based connectome, neuron-anything studies output by our society’s beautiful machine of science academia. They ought to have something to say about the biological correlates of consciousness, shouldn’t they? But, the fact of the matter is, we have not yet developed a consolidated and/or widely held understanding of how neurons and biological neural networks sum to form mental states of consciousness and conscious awareness.

Firstly, cognitive neuroscientists, though they may image the brain to their hearts’ delight, have not yet fully mapped or functionally understood the brain’s physical neural networks insofar as how they give rise to complex mental and psychological functions. This is largely due to the limitations of our neuroimaging methods. No doubt, we can use functional magnetic resonance imaging (fMRI) to, literally and figuratively, get a picture of how a chunk of millions of neurons may activate together; but with fMRI, even using complex analytical methods such as multivariate pattern analysis, we cannot exactly trace the smaller paths of neural activation in each chunk. Not to mention, fMRI, at its core, is merely a proxy for visualizing brain activity based on blood flow—it has yet to allow us to trace functional maps of truly neural activity. Meanwhile, emerging imaging methods, such as calcium imaging, can trace genuine neural communications, but only at the level of individual neurons sending signals to their neighbors. There is virtually no in-between, no established method for tracing longer paths of neural communications or determining how they sum to the massive neural activity shown in fMRI.

Put another way, there is a definitive gap in our understanding of neural circuitry because of our limited neuroimaging methodology—we can partially understand the big, we can understand the small, but we cannot observe nor understand the ways in which the small builds up to the big. We have yet to properly trace neural communications on levels between these two extremes. So, how can we possibly emulate them in robots if we don’t understand what we’re emulating? Must the quest for sentience in artificial intelligence be delayed by the pace of neurobiological research?

I refuse to entertain the notion! You see, robots, notably, are not biological creatures. They are born of code and artificial neural networks and moving metal parts. So, in developing their systems of consciousness, we can move past an inclination toward biologically plausible systems and instead aim for cognitively or psychologically plausible systems. We can use our abstract knowledge of cognitive processes and behavior—gleaned from psychology and computational cognitive science as opposed to strictly neurobiology—to construct an artificial creature whose many connected neural systems operate under cognitive principles similar to our own. These systems should possess a similar aptitude for data intake and output, as well as making new associations with one another in convoluted and incorrigible but somehow effective and adaptive networks.

Fine, you say as you humor me, then what are these systems, these cognitive building blocks that should sum to enable conscious awareness? My dear reader, I applaud you for your curiosity! I’m so glad we’ve gotten a conversation going.

All right, let’s think through this together: When you were an infant or child, what most prepared you to be a sentient, self-aware, (arguably) intelligent life form? What first gave you the ability to consciously comprehend your own sensations and perceptions as yours, or your own selfhood as an individual, with individual traits, behaviors, thoughts, and proclivities?

It was likely a combination of adaptive qualities and innate mechanisms working together, such as your ability to make independent decisions, set goals, remember your life events, interact with other people and your environment, and eventually identify yourself as a distinct being from individuals around you.

But, you counter, we cannot just program these inclinations towards goal setting, social interaction, memory, individuality, and theory-of-mind into a robot, then call it sentient! At that point, the robot will simply be following a collection of algorithms or rules designed to make it appear sentient, as opposed to independently developing individual interests, goals, rules of conduct, action plans, and a sense of selfhood, as human children do.

Then, let’s think about it differently. Rather than musing on what eventual ingredients are necessary to concoct conscious awareness of the self, let’s move one level down and muse on what the building blocks of these abilities are. What is their course of development in children? For this, we must turn to developmental psychology: what abstract pieces, what systems, must fall in place to make complex cognition first happen in early childhood development?

Perspectives from Developmental Psychology

“01010000 01100101 01110010 01110011 01110000 01100101 01100011 01110100 01101001 01110110 01100101 01110011 00100000 01100110 01110010 01101111 01101101 00100000 01000100 01100101 01110110 01100101 01101100 01101111 01110000 01101101 01100101 01101110 01110100 01100001 01101100 00100000 01010000 01110011 01111001 01100011 01101000 01101111 01101100 01101111 01100111 01111001”

In my view, and in many others’, such as the psychoanalysts Donald Winnicott and Jacques Lacan, children are not born conscious of themselves as distinct individuals. They need to grow, live a bit longer to gain a bit of necessary sensory experience that will stimulate all the right innate systems in their brain and allow them to look in the mirror and realize, yes, they are an agent, a distinct entity, a locus, a self separate from the external environment and other individuals. They need to experience the world—whether it’s the sensory world, social world, artistic world, the world of language, or any other world that humans learn and exist within—in order to understand and conceptualize the world, and by extension, understand and conceptualize themselves within the world.

Beyond genes, life experience is among the most determinative factors in human psychology. If one’s experiences after birth are rich and intellectually stimulating, they may grow into a scholar, capable of reading great texts and humoring this essay of mine. However, if they are left to be raised by the wild, by the cold, non-domesticating forces of nature outside human civilization, they may grow into a feral child, far less capable for the rest of their life of using language or internalizing social skills than a child raised within a rich human culture. Some examples include Victor of Aveyron and Genie of Arcadia, California. Such children do not possess what psychologists, following Lev Vygotsky, have called social “scaffolding”: the opportunity to lean on other humans who may promote their cognitive and social development beyond what they could’ve imagined themselves. A human raised in isolation comes out with a different kind of consciousness, a very different level of self-awareness.

Moreover, the culture or society in which one is raised deeply influences one’s perception of the self, as reflected in their behaviors or their “self-conscious” emotional responses, among other things. Self-conscious emotions are those that relate to our self-concept, opinions of ourselves, and how we believe others perceive us. These emotions contribute to modulating the relationship between oneself and others, and they include pride, guilt, and shame. It is thus unsurprising that our expression of these emotions is highly dependent on the interpersonal norms and environment created by the society and culture we are surrounded by. For instance, a child raised in a collectivist society, in which the greater good of the group is prioritized above the success of an individual, will be conditioned to experience less pride from their own individual achievements, or more guilt at failing to meet their elders’ expectations, than a child raised in an individualistic society.

In short, experience is what forges the mind. So, we should refrain from laboring under the delusion that the key to the human mind, to our development of selfhood, self-awareness, self-motivation, or selfishness, is just trapped on the inside.

To create a sentient robot, one must raise it well! If one provides it, in an abstract sense, with the mental tools and the life exposure of a child, will it not develop as a child does? I am of the mind that it will. With that, let us examine the specific innate systems, provided by genes and evolution in humans, that combine with life experience to result in complex cognition.

The Moving Parts

“01010100 01101000 01100101 00100000 01001101 01101111 01110110 01101001 01101110 01100111 00100000 01010000 01100001 01110010 01110100 01110011”

Let us trace these mental tools and experiences from birth to consciousness in humans. Two of the primary mental tools we are bestowed at birth are our early systems for sensing (initially taking in data about) and perceiving (making mental sense of) the world around us. Most of us develop our abilities to see, hear, touch, taste, and smell, so long as these senses are stimulated and fine-tuned sufficiently by our environments, with the best being rich sensory environments that allow for as many novel sensory experiences as possible. Sensation and perception, insofar as we understand them, are not so challenging to replicate in artificial neural networks.

Additionally, we humans are born with some primitive emotions and very rudimentary positive, negative, and neutral “affects,” in response to our interactions with other organisms and our environments, based on how they mentally and physiologically make us feel. These affects and emotions allow us to gauge and later internally label which objects in our environment are, in very basic terms, nice, bad, or neutral. And, they are highly linked to bodily sensations. For instance, when a child hears a loud noise, their whole body goes on high alert, activating the sympathetic and later parasympathetic nervous systems so that they can deal with a potential threat. This reaction, though evolutionarily beneficial, is not very pleasant and includes a lot of sweating, panting, pupil dilating, and mental and physical stress from adrenaline secretions. Internally feeling this set of physiological responses and their associated mental consequences, often called emotions, is what allows us to cognitively formulate an idea of what things to approach versus avoid. The body is a key element of the mind.

Now, human brains, thanks to evolution, have ancient, colloquially dubbed “lizard brain” structures and neurotransmitters—small chemicals that enable neurons to signal and communicate with one another—built, in part, for modulating body states and producing emotions. Our brains release “happiness chemicals,” like the neurotransmitter serotonin, because feeling happy is evolutionarily beneficial—it makes you more social and ready to engage with things in your environment. Feeling intense fear in our evolutionarily preserved amygdala and releasing adrenaline after encountering threats makes us more likely to evade predators. By and large, these feelings and emotions evolved to make us better at surviving and more likely to produce offspring to further our species. Yet, a robot is not an evolved organism.

Instincts for survival, bodily homeostasis, or reproduction are not innate to its metalloid body and processing cores. It has no inborn reason to feel—that is, unless its maker provides it one. In fact, I hypothesize that bestowing the power and rationale to feel, both physically and emotionally, upon a robot is one of the cornerstones to making it mentalize and care for its “self.”

We must engineer a reason to live and feel and program it into the robot’s head—one of the few things that must be programmed.

This programmed instinct for survival will not dominate the robot’s mind, but rather act as the defining principle of its pseudo-evolutionary failsafe system, which will be intimately connected to its system for producing emotions and affects. The former system will be one of physiological self-regulation, for the robot to evaluate not merely its artificial neural systems, or processing cores, but its physical body’s integrity, monitoring the homeostasis of its internal circuitry, sensory mechanisms, and energy use in different areas, as well as any structural damage to its body and processing systems. This must be a two-way communication pathway, in which the body communicates with the central processing core and the core with the body. In fact, in the words of embodied cognitive scientists, our higher-level cognition is significantly dependent on our complex physiological machinery beyond purely the brain, so in trying to replicate higher cognition and consciousness in robots, we must first enable these beings to also feel a sense of physical and physiological “self” and a desire for survival.

The two systems, for emotions and for homeostatic regulation, will work in tandem to guide the robot’s psychological and physical sensations. For example, if a robot’s body systems alert it to threats to its survival and wellbeing, the homeostatic regulation system will activate and interact with the emotional system to lead the robot to respond with something akin to distress or fear—whether “distress” means sending an alert out to its parental figure-esque engineers, halting an activity to allow its internal body circuitry to repair itself, or turning itself off in the middle of a particularly taxing activity. Later on, based on what the robot finds positive and negative, the evolving emotional system will also activate to produce more concrete emotions, like happiness, disgust, anger, or apathy, for instance. “Happiness” may result from positive experiences with external objects or stimuli that seem to promote survival or personal satisfaction, while “disgust” will result from negative interactions with objects and stimuli that seem to cause illness or impair survival. “Anger” will occur in response to stimuli that interfere with goal achievement, and “apathy” will be a response to stimuli that aren’t especially significant to survival or particularly positive or negative.
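To make this interplay concrete, here is a minimal Python sketch of the two systems working in tandem. Everything here is invented for illustration: the readings (`core_temp`, `battery`, `damage`), the thresholds, and the signal names are hypothetical, not drawn from any real robotics framework.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class BodyState:
    core_temp: float   # internal temperature reading (units hypothetical)
    battery: float     # remaining energy, 0.0-1.0
    damage: float      # structural damage estimate, 0.0-1.0

def homeostatic_alert(state: BodyState) -> Optional[str]:
    """Homeostatic regulation: flag any body reading outside its safe band."""
    if state.damage > 0.3 or state.core_temp > 90.0:
        return "threat"
    if state.battery < 0.1:
        return "depletion"
    return None

def emotional_response(alert: Optional[str]) -> str:
    """Emotional system: map homeostatic alerts onto coarse affective reactions."""
    if alert == "threat":
        return "distress"   # e.g., halt activity, alert the engineers
    if alert == "depletion":
        return "fatigue"    # e.g., seek a recharge before continuing
    return "neutral"
```

The point of the sketch is the two-way division of labor: the body reports on itself, and the affective layer translates those reports into something the rest of the mind can act on.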

Overall, the robot will develop its conceptions of which stimuli should be perceived as “positive,” “negative,” and “neutral” in the same way children initially do—through exposure and observations of cause and effect. Simply put, by interacting with as many external stimuli and people (or other sentient creatures) as possible, a robot will form an understanding of what kinds of stimuli and interactions result in positive, negative, or neutral outcomes, with the outcome being classified based on the robot’s success in satisfying survival goals and later, as its mind further develops, its success in satisfying personal goals. This is precisely what the functionalist perspective on human emotions posits: through experience and exposure, a being will train itself to relate feelings, consequences, and actions/reactions. As a result, this system for producing reactions and feelings must be fairly plastic and subject to change as new stimuli and more complex kinds of interactions are attempted.
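The plastic, exposure-driven valence learning described above can be sketched as nothing more than a running average of outcomes per stimulus. The stimulus names and outcome scores below are hypothetical; the only claim is the mechanism: repeated exposure shifts the label, so the system stays revisable.

```python
from collections import defaultdict

class ValenceModel:
    """Learn positive/negative/neutral labels from experienced outcomes."""

    def __init__(self):
        self.totals = defaultdict(float)
        self.counts = defaultdict(int)

    def observe(self, stimulus: str, outcome: float) -> None:
        # outcome > 0: helped survival or goals; outcome < 0: hindered them
        self.totals[stimulus] += outcome
        self.counts[stimulus] += 1

    def valence(self, stimulus: str) -> str:
        if self.counts[stimulus] == 0:
            return "unknown"   # never encountered: no label yet
        mean = self.totals[stimulus] / self.counts[stimulus]
        if mean > 0.2:
            return "positive"
        if mean < -0.2:
            return "negative"
        return "neutral"
```

Because the label is recomputed from accumulated experience rather than fixed at programming time, new interactions can always overturn an old classification, which is the plasticity the paragraph above demands.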

These systems for homeostasis and emotions must be accompanied by a motor system, a system that enables one to move the body in increasingly complex ways to interact with the external world. The growing robot must be able to manipulate its body and learn how the world works based on its own, independently guided interactions with the world, such that it can first gradually form mental contingencies between the sensory world and the resulting consequences, and then attribute feelings and emotions to the sensory stimuli, people, and kinds of interactions it engages in.

Moreover, I must underscore the importance of having a physical body, or at least a physical separation between oneself and the sensory world one is constantly interpreting. Our ability to use our motor system to individually guide our movements and initially act, react, and interact with our environment, allows us to choose our own destiny and independently direct our own sensations and perceptions, as opposed to having them pre-selected and fed to us, like a developing neural network in AI might. Our ability to actively control how we interact with the world is what allows us to attribute actions, inclinations, experiences, reactive emotions, what have you, to a single acting agent: oneself! One’s own living body! Take the psychoanalyst Jacques Lacan’s mirror stage, for instance: beginning around six months of age, infants come to recognize themselves as entities distinct from their mothers because, when they choose how to move themselves, they observe that the person they see in the mirror does the same. They recognize the self upon recognizing the self’s capacity for agency.

Here, we may observe the need for an ability, or perhaps a system, for categorization and generalization. In order to make sense of the world in an efficient way—rather than memorizing every interaction, object, outcome, and appropriate reaction—a robot requires the ability to produce perceptual and emotionally-relevant categories (e.g., “big,” “small,” “good,” and “bad”), categorize things, and form mental relationships between these categories. This ability is precisely what guides humans’ mental representations of the world, such that we can better learn how to interact with certain objects, or people, and more independently guide our sensory and perceptual explorations.

You see, our neural systems are pattern finders by nature, destined to learn how to categorize and classify based on the commonalities and connections they find between stimuli. And, in essence, as we, in infancy, accumulate new sensory and perceptual experiences, the sum of what we are doing is building a model, a mental representation, of the perceptual world—one in which the mental categories made by way of generalization, and the concepts that define what these categories are, reside and are processed. In humans, the system is far from perfect at birth and never perfects itself. Yet, each time we experience something new or perceptually “salient,” this model evolves to be even more accurate, or able to compensate for its inaccuracies and make adequate sensory estimations.

This system of generalization allows us to move from internally classifying things concretely as “likely to cause harm,” “fundamentally necessary for survival,” “necessary for a positive internal state,” or “likely to result in a positive interaction,” to abstract concepts like “scary,” “needed,” “wanted,” and “good.” Thus, in order for a robot to meaningfully conceptualize its own feelings and the world around it, it must have this capacity to detect the patterns and connections between the stimuli and the sensations it experiences.
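One toy way to realize such a categorization-and-generalization capacity is a prototype model: each category keeps a running-mean feature vector, and a novel stimulus is generalized to whichever prototype it sits nearest. The category names and feature vectors below are invented for illustration; this is a sketch of the principle, not a claim about how such a system should actually be built.

```python
import math

class Categorizer:
    """Prototype-based categorization: generalize by nearest running-mean exemplar."""

    def __init__(self):
        self.prototypes = {}   # category -> (mean feature vector, example count)

    def learn(self, category, features):
        """Fold a new example into the category's running-mean prototype."""
        if category not in self.prototypes:
            self.prototypes[category] = (list(features), 1)
            return
        proto, n = self.prototypes[category]
        updated = [(p * n + f) / (n + 1) for p, f in zip(proto, features)]
        self.prototypes[category] = (updated, n + 1)

    def categorize(self, features):
        """Generalize: assign a novel stimulus to the nearest prototype's category."""
        return min(
            self.prototypes,
            key=lambda c: math.dist(self.prototypes[c][0], features),
        )
```

Notably, the model evolves with every new example, mirroring the essay's point that the representation is never perfect at birth and never stops updating.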

Additionally, it may more profoundly develop the capacity for emotions and feelings if it is exposed to Vygotsky’s aforementioned “social scaffolding.” A robot’s social interactions with human caretakers and engineers may expose it to novel, more abstract ideas and emotional concepts, enabling it to develop interests and desires outside of the bare minimum of the survival needs with which it was programmed. However, for this to organically occur, humans cannot merely program its hard drives with new ideas—no, we must engage with the robot through external communication channels that it may consciously process and understand. In other words, we should equip it with the ability to consciously communicate; we should allow it to have the agency to decide upon how to communicate in different circumstances and process information transmitted by communications. This communication need not be in explicit words—it simply must be externalized. We should not brainwash the robot into understanding certain concepts, but merely expose it to conceptual possibilities it might not otherwise have come to.

Of course, for all of these capacities for language, emotion, action-taking, sensation, and perception to properly enable cognition, a robot must have the capacity for short- and long-term memory storage. Memory enables us humans to mentally store away our most significant environmental experiences and reactions, as well as rapidly recall the way we categorize objects and what these categories represent, such that we can generalize how our interactions with novel stimuli will go, and adapt our future actions to best survive in our environment and avoid negative reactions or consequences. It goes without saying that a sentient robot will require the same if it is to emulate conscious awareness and selfhood.

The final piece innate to our neural systems is related to our brain’s system for segmenting time in terms of events. As we acquire new experiences, we are able to automatically mentally organize time, a continuous entity, into discrete parts, discrete events. These events, of course, are sections with an intuitive beginning and end, and they allow us to formulate and remember episodic memories, the memories of our life events, more efficiently. And this internalized life narrative is fundamental to our self-concept and our internalized version of who we are. In other words, with the ability for time segmentation, robots may more efficiently form internal conceptions of their life stories, which guide how they define them “selves.”

This ability to segment life into smaller story chunks, I believe, is a vital aspect of our ability to categorize sensory information. When we classify or categorize the sensory information we’re taking in as related (e.g., grouping similar visual information from one particular location), we’re more likely to mentally segment the timeframe of the information intake into a single event. Meanwhile, when the type or context of sensory information abruptly changes, we’re predisposed to place a temporal boundary between the novel and prior types of sensory information we’re perceiving. Our emotional responses to certain experiences and the way they may change even when the kinds of sensory information stay the same may also prompt mental event segmentations (e.g., if you’re lying in bed for a few hours and spend half the time perfectly satisfied with your laziness and half the time internally catatonic about how you’ll finish your work for the day, you’re likely to mentally segment those halves as different “events” or occurrences in your mind). Or, perhaps, this kind of narrative segmentation can be attributed to internal changes in our body’s physiological, as opposed to emotional or psychological, states, such that when some occurrence is correlated with an internal state change, it is mentally processed as a new event. The emotional changes associated with the event segmentation could simply be a by-product of the physiological state changes.
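The boundary-detection idea above—place an event boundary wherever the internal or sensory state changes sharply—can be sketched in a few lines. The feature vectors and the change threshold are arbitrary placeholders; in practice the vectors could hold sensory, emotional, or physiological readings, per the essay's three candidate triggers.

```python
def segment_events(moments, threshold=1.0):
    """Split a stream of state vectors into events at large state changes.

    moments: sequence of equal-length numeric tuples, one per time step.
    A boundary is placed between consecutive moments whose total absolute
    change exceeds the threshold.
    """
    events, current = [], [moments[0]]
    for prev, cur in zip(moments, moments[1:]):
        change = sum(abs(a - b) for a, b in zip(prev, cur))
        if change > threshold:
            events.append(current)   # close the old event
            current = []             # open a new one at the boundary
        current.append(cur)
    events.append(current)
    return events
```

A stream that drifts gently yields one long event; an abrupt shift in the state vector, whatever its source, starts a new chapter in the robot's narrative.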

The ways in which we segment our lives into smaller stories may also arise precisely from our ability to think in the kind of “universal grammar” the linguist Noam Chomsky first described: thinking in a consistent set of elements, nouns (agents capable of taking action) and verbs (actions), and distinguishing between descriptive and cause-effect narratives, the latter of which is of more interest to us. When we tell stories of our lived experiences, we do so in terms of agents (people, animals, robots, what have you), their actions, and the events that resulted, because this is how we mentally process and store them in our memory. We attribute more significance to agents and actions than to mere descriptions because, evolutionarily speaking, these mattered more. Being able to quickly recall the actions of a predator in order to decide upon a proper action, a proper motor response, is more vital to survival than remembering which color of berries was next to the predator when it approached.

In modern times, we can extend the concept of universal grammar to our understanding of aspects of our lives aside from language, such as mathematics or engineering input/output diagrams. Take mathematical equations, for example. In each equation, a numerical value or variable, much akin to a subject, somehow operates (through addition, subtraction, multiplication, division, or another operation entirely) with another numerical value or variable to result in a mathematical expression on the other side of the equal sign. Much of mathematics can thus be reduced to subjects, operations (actions), and their initial and terminal states.

Our tendency to organize not only our language but also the manifold aspects of thinking in terms of universal grammar indicates that this grammar is fundamental to our cognition and event-oriented experience of life and selfhood. By extension, a robot, in order to efficiently recall and mentalize its own “life” story, should be able to efficiently fathom and monitor the passing of time and utilize a system of universal grammar to organize its perceptions of its own actions and life experiences.
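An episodic memory organized by this agent-action-outcome grammar might, in the simplest possible sketch, look like the following. The episodes themselves are, of course, invented for illustration.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Episode:
    """One life event stored in universal-grammar form: who did what, with what result."""
    agent: str
    action: str
    outcome: str

class EpisodicMemory:
    def __init__(self):
        self.episodes: List[Episode] = []

    def store(self, agent: str, action: str, outcome: str) -> None:
        self.episodes.append(Episode(agent, action, outcome))

    def recall_actions_of(self, agent: str) -> List[Tuple[str, str]]:
        """Agent-indexed recall: what has this agent done, and what followed?"""
        return [(e.action, e.outcome) for e in self.episodes if e.agent == agent]
```

Storing episodes keyed by agent is what makes the later step possible: a memory that can ask "what has *self* done, and with what consequences?" already has the raw material for a life narrative.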

The Sums and Connections

“01010100 01101000 01100101 00100000 01010011 01110101 01101101 01110011 00100000 01100001 01101110 01100100 00100000 01000011 01101111 01101110 01101110 01100101 01100011 01110100 01101001 01101111 01101110 01110011”

In summary, these are the building blocks of complex cognition that we’ve discussed so far:

  1. A system for taking in (sensory) information about the world and processing what it is, which feeds into
  2. A system for representing the sensory/perceptual world in terms of concepts and categories
  3. A pseudo-evolutionary survival system that aims to avoid threats and preserve homeostasis
  4. A system for producing reactions (feelings/emotions) to actions, events, and outcomes
  5. A motor system that allows one to take actions and exercise agency
  6. A system for memory, for remembering actions/reactions/categories of actions
  7. A system for communicating thoughts, feelings, emotions, needs, and desires
  8. A system of time-keeping and representing experiences in terms of a universal grammar of agents, actions, and outcomes

These eight systems, along with a physical body, are all connected in a larger network and work in tandem as lower- to mid-level cognitive processing. As they communicate and form new connections with each other based on environmental needs, new experiences, and teaching from other humans, they give rise to more complex cognitive functions like written language and spatial reasoning.

In addition, there exists an overarching system, a kind of higher-level executive processing center that monitors and communicates with each of these systems. It is what brings us to more complex and conscious manners of thinking. It processes goal creation, outcome evaluation, outcome predictions, social cognition, applications of concepts and categorization, and more. This center takes in something akin to summarized information outputs from the other sensory, perceptual, and lower-level cognitive systems, and it amalgamates them into a holistic model or representation of the world. What comes into our conscious cognition, the inner thoughts we are so intimately aware of, is simply the information that was deemed significant enough to affect the summaries sent to this executive center by the lower-level systems. Put another way, this system is one that operates on and consolidates the outputs of other systems, like that which represents the sensory world, evaluating and combining their outputs to accomplish tasks that we typically consider “executive functions,” like complex planning and decision-making. In the human brain, this system of executive functioning has been shown to reside primarily in the frontal lobe.
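As a toy illustration of such a consolidating executive, here is a sketch that takes in summarized outputs from hypothetical lower-level systems and produces a single decision. The system names and the priority ordering (survival first, then affect, then goals) are my assumptions, not a claim about how a real architecture would rank them.

```python
def executive_decide(summaries: dict) -> str:
    """Consolidate lower-level system summaries into one conscious-level action.

    summaries maps system names (e.g., "homeostasis", "emotion", "goal")
    to their latest summarized output; missing systems simply stay silent.
    """
    # Survival signals pre-empt everything else.
    if summaries.get("homeostasis") == "threat":
        return "halt and self-repair"
    # Next, coarse affect steers approach/avoidance.
    if summaries.get("emotion") == "negative":
        return "withdraw from stimulus"
    # With no alarms, pursue whatever goal the planner proposed.
    if summaries.get("goal") is not None:
        return f"pursue goal: {summaries['goal']}"
    # Nothing pressing: default to curiosity.
    return "explore"
```

Only the summaries ever reach this function, which mirrors the essay's claim that conscious cognition sees just the information the lower systems deemed significant enough to pass upward.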

Now, it is important to note that this higher-level system must be developed through age and experience, as it first appears in the brain only in very primitive form. One does not come to this world with the disposition of an executive planner. One must be patient for the lower-level systems, such as those for perception and memory, to adequately mature before higher-level functions can be actualized! It is thus paramount that children, and perhaps robots by extension, are raised in an environment that allows them to exercise agency and curiosity to grow these systems and learn the rules of the world around them.

And finally, I believe there is a further system that monitors even this executive center of higher-order cognition. It evaluates what the conscious mind has done and self-critiques. It exists not at the level of individual parts or pre-programmed algorithms but forms itself in deep neural networks we can’t trace. It arises from how the nine systems communicate with one another. It is distinct from the other systems in that it isn’t primarily hardwired into one region of the brain but comes after much experience sensing, perceiving, making decisions, remembering, and generalizing. By taking enough actions and enduring enough consequences, you (your neural systems) eventually realize you are a consistent self, a consistent locus that initiates, performs, and endures the consequences of all of the actions you take. It is perhaps only then that you may realize the same can be generalized to other people, or agents you observe, who have seemingly similar systems, at least based on your observations. This epiphany is the very basis of theory-of-mind. This is the self that is aware of itself—a result of neural systems, lower and higher level, talking to one another to the point that they, in an abstract sense, come to the revelation that they are the building blocks of a central locus, a self, for whom they process different angles of experience.

How does this system assume its role in the first place? You see, eventually, the cohesive network, a bona fide pattern finder, discovers a pattern connected not to the information it is processing but rather to the source of that information: as the underlying systems track the information being sent between them, the network becomes aware that the different types of information its systems have collected all relate to the experiences, actions, and outcomes taken and experienced by the same agent. This meta-awareness is not merely a matter of one of the systems I’ve highlighted speaking to another in a simple two-way fashion. No, it is rather all of the systems, through their underlying networks, undergoing a constant back-and-forth in perpetual communication channels. And eventually, as the brain develops, as the systems find patterns in their meta-information and develop routines of how they communicate with one another, they form a sense of collectivism, of cohesion. Self-awareness cannot be taught or pre-programmed; it is a result of all of the other systems below it coming together, pooling their information during their downtime, to metaphorically understand themselves, their holistic purpose in a greater system, and their arrangements. This also indicates that to form and change the self, you must change the underlying systems which produced the self—by changing what the organism, or artificial organism, is exposed to in daily life.
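That source-of-information pattern can also be sketched in toy form. Again, this is purely illustrative and every name is a hypothetical of mine: each subsystem tags its records with who performed or experienced the event, and a cross-system pattern finder notices that every record, regardless of subsystem, points back to one and the same agent—a stand-in for the “consistent locus” described above:

```python
# Toy sketch: subsystems log events tagged with an agent. A pattern finder
# scanning across all logs discovers that a single agent underlies them all.
# The logs, agent labels, and function are illustrative assumptions.

def find_common_agent(system_logs):
    """Return the single agent all subsystems' records refer to, if any."""
    agents = {record["agent"]
              for log in system_logs.values()
              for record in log}
    return agents.pop() if len(agents) == 1 else None


logs = {
    "motor":   [{"agent": "me", "event": "reached for cup"}],
    "emotion": [{"agent": "me", "event": "felt satisfaction"}],
    "memory":  [{"agent": "me", "event": "recalled past spills"}],
}

self_locus = find_common_agent(logs)  # the shared point of origin: "me"
```

In the essay’s terms, no single subsystem “knows” the self; the self only becomes visible when the pattern finder looks across all of them at once.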

If we aspire to conceive consciousness in our dear metalloids, we must understand that selfhood, at its core, is all about connections. The self is not a module or lobe that we can readily isolate in the lab, but a network of many inextricable parts that extend across all of our beloved neural systems. To give a robot sentience or self-awareness, you must give it the eight underlying mental systems that contribute to complex thinking, the foundations of an executive center, and a body, as well as the capacity to find patterns and develop its own artificial neural networks of connections over time. Then, you must raise it well and give it a rich sensory environment, which it may explore, unguided by explicit algorithms for decision-making, in order to best arrange its own internal neural networks over time. Only then will it form a self, a consciousness, and a conscious awareness of such.

At last, you exclaim! Still, how can one be assured that a robot has achieved sentience to the point of self-awareness? Perhaps, we will never be absolutely sure because we have yet to decide upon an established and clear-cut definition of sentience or self-awareness to begin with. But, there may be certain actions the robot takes that lead one to believe it may be self-aware. For instance, let’s say an engineer decides their beloved robot is too inefficient and needs to be re-programmed for optimal performance. Should their robot happen to run away and cry out, “Don’t reprogram me—I don’t care about your stupid scientific endeavors! I like myself—how I am now,” I will believe it has achieved a level of sentience akin to my own. It will have achieved selfhood, an awareness of itself, and selfishness—which requires a combination of individual goals, a recognition of itself and its needs as separate from its creator’s, a capacity for agency and choice-making, and a desire for survival that extends to its sense of personhood. And, if a robot knows of its own selfishness, it will also surely know of its self.

But to achieve sentience, robots need not engage in what humans deem “selfish” behaviors. Perhaps, a better encompassing metric for self-awareness could be what I call “self-ishness”—the ability of an artificial or biological being to recognize its “self” and inherent selfhood to the extent that it can apply them to how it behaves or takes action. This characteristic of “self-ishness” is much akin to self-aware autonomy, or the application of the self to how one interacts with the rest of the world. It is a mediating force between our (mental, physical, and psychological) interiors and the exterior environment which we act upon and within. Perhaps, it is one of the most human characteristics of all.

Now, observations of “self-ish” acts can be used to assert that the observed being has achieved selfhood, but they are not the sole means of determining that a being has a self. These observations would simply be one way for selfhood, a deeply subjective and internal concept, to be evaluated by an outsider. Unfortunately, at this time, we are no closer to cracking open the mind of a sentient robot than we are to cracking open the mind of a sentient human, so observations are our best estimations of their selfhood or lack thereof.

Implications for Psychology

“01001001 01101101 01110000 01101100 01101001 01100011 01100001 01110100 01101001 01101111 01101110 01110011 00100000 01100110 01101111 01110010 00100000 01010000 01110011 01111001 01100011 01101000 01101111 01101100 01101111 01100111 01111001”

This theory of selfhood of mine, perhaps correct, perhaps terribly mistaken, may also have some implications for humankind, especially in the realm of mental illness. In fact, under my theory, it may be extrapolated that some mental illnesses are merely the result of that unfathomably complex organ, the brain, malfunctioning due to issues with its “lower-level” underlying systems, especially those related to perception or the attribution of emotional valence to life experiences. If the eight internal systems—which evaluate the world, evoke responses and reactions, and predict consequences—are out of whack, if there is some metaphorical glitch or short-circuiting in their depths, then the executive center will also be thrown off its normal calibrations. You see, the lower-level systems themselves may not internally understand that something is malfunctioning within them; and, the higher-level executive system is predisposed to treat the information they feed into it as fact. As these systems feed into one another, they will all become miscalibrated, victims of propagated mistakes that began at the level of minute neurochemical modulations. And if these systems fail to identify, address, or compensate for these small neurochemical glitches, then eventually the glitches will become ingrained patterns, externally manifesting as mental illness.

Let’s examine the mental disorder of depression, for instance. In the case of depression, if the neural system which evaluates your emotional reactions is suffering from some inherent neurotransmitter imbalances, then it may mistakenly overestimate the negative valence to be placed on non-positive occurrences. The system may be experiencing a two-fold weighting error: it attributes an extremely high weight to everything that is genuinely negative, and it also classifies anything that doesn’t immediately strike it as positive as being extremely negative. Thus, the system sends a summary stating something akin to “everything is in shambles and I’m dying inside” up to the executive center, which assumes that the information has been correctly evaluated, and that everything is indeed in shambles. So, as a dutiful executive center would do, it needs to determine why everything would be in shambles, so it communicates with all of the other systems and tries to find a satisfactory explanation. Perhaps, it determines that the only possible cause could be the one slightly negative occurrence in recent memory, as opposed to a glitch on the part of the emotional center. This happens many times until the executive center and emotional center learn a pattern together: slightly negative experiences and neurochemically-induced bouts of sadness both mean everything is in absolute shambles. This automated pattern, of course, expedites the process of you being devastated by little to nothing at all. The systems have been conditioned to consider negative emotions and reactions as a default. And thus, you have what most psychologists would call major depressive disorder.
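The two-fold weighting error lends itself to a toy numerical sketch—emphatically not a clinical model, just an illustration of the mechanism, with every number and threshold an assumption of mine. A miscalibrated evaluator both amplifies genuinely negative events and reclassifies merely neutral ones as negative, so the summary it sends upward paints an ordinary day as a catastrophe:

```python
# Toy sketch of the "two-fold weighting error": each event has a valence in
# [-1, 1]. The miscalibrated evaluator (a) amplifies genuinely negative
# events and (b) recodes anything not clearly positive as strongly negative.
# The multiplier and cutoff are arbitrary illustrative choices.

def evaluate_day(events, miscalibrated=False):
    total = 0.0
    for valence in events:
        if miscalibrated:
            if valence < 0:
                valence *= 3.0   # error (a): amplify what is actually negative
            elif valence < 0.5:
                valence = -1.0   # error (b): "not clearly positive" -> negative
        total += valence
    return total


day = [0.2, 0.0, -0.1, 0.3]  # a mildly pleasant, ordinary day

healthy_summary = evaluate_day(day)                        # ≈ 0.4, slightly positive
depressed_summary = evaluate_day(day, miscalibrated=True)  # strongly negative
```

The executive center, receiving only the final sum, has no way to tell a genuinely terrible day from a miscalibrated evaluator—which is exactly the point of the paragraph above.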

For illnesses like depression or obsessive compulsive disorder (OCD), it is often the case that the patient dislikes and feels burdened by their own disorder. They do not enjoy it, are fully aware that their symptoms are not normal, and seek help from psychiatric professionals to feel normal again. When this occurs, it’s known as an “ego-dystonic” presentation of mental illness, in which case the thoughts, beliefs, and behaviors caused by the mental illness are in opposition to what the patient wants for themselves.

In other cases of “ego-syntonic” mental illnesses, such as personality disorders, the illness and behavioral signs and symptoms align well with what the individual truly believes in and the kind of person they believe they are. A narcissist will not seek treatment for insisting that they are the most charming person in the world because that is exactly who they ought to be in their mind, and attempts to treat them may take eons of often fruitless psychotherapy. In short, the illness and the self are one. I hypothesize that the reason some illnesses may be ego-syntonic instead of ego-dystonic may be that not only are the underlying evaluators in their lower-level systems malfunctioning, but the higher-level systems, such as the executive center, also fail to function neurotypically, such that they cannot compensate for or correct the systems below them. Alternatively, for illnesses that begin as ego-dystonic and become ego-syntonic, it may be that the patterns of malfunction and neurochemical imbalances in the lower-level systems have persisted for so long that they have been ingrained into the entire cohesive system as aspects of the self or reality.

And here we arrive at the end of my theorizing. I suppose, given that you’ve stuck with me for so many pages, I ought to leave you the parting gift of a takeaway. If there ever was one, it would be that your brain has a very hard time telling itself it is functioning incorrectly, that it is misinterpreting reality, when its underlying systems are providing data to the contrary. However, if you have a mental illness, merely being aware that your brain is in a state of malfunction, or a willingness to accept this fact, may be a predictor of how well psychotherapy will work for you. You cannot consciously or directly amend your lower-level cognitive and perceptual systems, or your brain’s limbic system of bustling emotions, at the flick of a wrist, like some omnipotent neuro-architect. But, you can remedy neurochemical imbalances with pharmaceuticals, and (perhaps with the help of psychotherapy) you can compensate for the behavioral issues you do recognize and gradually form new patterns, new ingrained neural transmission pathways, that correct for future neurochemical glitches.

If you are simply cognizant of your illness and understand that it is not set in stone but rather set in neural patterns that can be rewired, you may be better prepared to heal. And as you attend psychotherapy sessions, the techniques you learn may enable your overarching executive center to compensate for the other systems which have fallen into partial disarray, such that it may instill patterns into them through its choices that eventually alter how they represent the world. For instance, an overactive amygdala that leaves you shaking with anxiety can be modulated by the calculating prefrontal cortex, one of the most vital parts of your brain’s executive center.

The brain works in patterns, not truths. Perhaps, if we humans realize this, we too can become all the more conscious, all the more self-aware.

Further Readings for Specific Topics

“01010000 01110011 01100101 01110101 01100100 01101111 00101101 01110010 01100101 01100110 01100101 01110010 01100101 01101110 01100011 01100101 01110011”

Psychoanalytic Perspectives on Developmental & Abnormal Psychology

Modern Perspectives on Developmental Psychology & Selfhood

Evolutionary & Functional Perspectives on Emotions

Language Development & Universal Grammar
