(Published in the Mensa Bulletin, Nov/Dec 2023 issue)
By Michael Alexander
There’s a scene in the BBC science fiction sitcom Red Dwarf where the mechanoid character Kryten is being taught how to lie by his human shipmate Lister, all to make Kryten seem more human. This is a challenge for Kryten, who is compelled by his programming always to tell the truth. As Lister explains: “If you can’t lie, then you can’t conceal your true intentions from people … and sometimes that’s essential.” Deception is part of a human being’s psychological defense mechanism — without it, we’re emotionally naked.
Kryten eventually succeeds in breaking his programming, summoning up the ability to lie that an apple is actually “the Bolivian Navy on maneuvers in the South Pacific.” He uses his new-found emotional weaponry to deceive — and ultimately save — the woman he loves (who turns out to be a genetically modified blob being, but that’s beside the point).
As the AI revolution continues unabated, I have been fascinated by the potential of artificial intelligence to transform our interactions with technology. One exciting and somewhat controversial aspect is the ability to train AI systems in emotional intelligence, or EQ, i.e., “doing a Kryten.”
I want to share my company’s journey of giving ChatGPT emotional intelligence and, by doing so, quite possibly unleashing its full potential.
Emotional Intelligence: A Personalized Experience
EQ refers to the ability to understand and manage our emotions effectively as well as understand and relate to the emotions of others. It involves being aware of our own feelings, recognizing and controlling them constructively, and being able to empathize with the emotions and experiences of others. Therefore, someone with high EQ can navigate stressful or challenging situations with a deft emotional touch, avoiding impulsive reactions while making thoughtful, sensitive decisions.
These traits and actions sound extremely difficult for a large language model (LLM) such as ChatGPT to replicate because emotional intelligence goes beyond mere language processing. It involves perceiving, managing, and regulating emotions to facilitate human-like thinking and decision-making. We knew that by incorporating EQ into ChatGPT, we could enhance its ability to empathize with users, establish meaningful connections, and deliver a truly personalized experience. What we didn’t know was where to start.
And What’s ChatGPT Again?
ChatGPT is an LLM developed by OpenAI. Trained on diverse internet data, it can understand and respond to a wide range of inputs. ChatGPT is designed for conversational tasks, providing information, answering questions, and assisting with various tasks. For example, it wrote the paragraph you just read.
What else can it do? The most straightforward (and least interesting) use of an LLM is as a souped-up search engine — having it provide answers to factual questions, explain concepts, or help you find information on various topics. Let your imagination roam free, and you can leverage ChatGPT for all sorts of wildly fascinating use cases. It can build you a travel itinerary, help you write a novel, or even teach you a new language. The instructions you input into ChatGPT are called prompts, and the better the prompt, the better the output. It usually takes several attempts to get a prompt working to maximum effectiveness, so you’ll often hear the phrase “iterative prompt engineering.”
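If you’re curious what that looks like in practice, here is a minimal sketch using OpenAI’s Python library. The model name and the two travel prompts are purely illustrative, not anything we use in production; the point is simply that the second, more specific prompt reliably produces more useful output, which is all “iterative prompt engineering” really means.

```python
# A minimal sketch of prompt iteration with OpenAI's Python SDK.
# The model name and both prompts are illustrative examples only.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

vague_prompt = "Plan a trip to Japan."

refined_prompt = (
    "Build a 7-day Tokyo and Kyoto itinerary for two adults on a mid-range "
    "budget. For each day, list a morning activity, a lunch suggestion, an "
    "afternoon activity, and a dinner suggestion. Keep travel time between "
    "stops under an hour."
)

for prompt in (vague_prompt, refined_prompt):
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # any chat-capable model will do
        messages=[{"role": "user", "content": prompt}],
    )
    # Print the first few hundred characters so the two answers can be compared.
    print(response.choices[0].message.content[:300])
    print("---")
```

Run both and compare the answers; the difference in specificity is the whole game.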
Ask ChatGPT itself what it can do, and you’re furnished with a surprising answer: “Sometimes you might just want to chat with someone. ChatGPT can engage in friendly conversation, share jokes, or provide a listening ear.” This conjures the image of someone lying on a leather couch while a monitor sits on a chair and displays the message: “EXPLAIN TO ME AGAIN WHY YOU HATE YOUR MOTHER.” It seems ChatGPT is keen to provide emotional support for humankind, but something tells me that we aren’t quite ready to divulge our innermost thoughts to lines of computer code.
Unless … we felt those ones and zeros truly understood us. Could that even be possible?
The Students Versus Mr. Smith
Emotions play a vital role in all forms of communication. We wanted ChatGPT to understand these emotional cues and respond in a way that felt authentic and relatable. This type of connection is crucial in building trust, fostering engagement, and ultimately providing a more fulfilling user experience.
Why was that important to us? At my company, Audirie, we build simulations that assess professional skills in many diverse fields, including human resources, health care, and sales. An example I’ll reference throughout this article is our Pharmacy Student Simulation. This protocol assesses the medication counseling abilities of pharmacy students and grades them across a range of dimensions. Being assessed is a profoundly uncomfortable and stressful experience at the best of times. It can be made even more challenging when someone is forced to interact with an unfeeling robotic entity. Just ask anyone who’s called Comcast customer service.
By enabling ChatGPT to display emotion, we hoped to make the simulated counseling experience more authentic. To accomplish this, we programmed our pharmacy assessment avatar — affectionately referred to as Mr. Smith, after the power-mad Agent Smith of The Matrix movies — to be rude, pushy, and impatient with our Australian pharmacy students. This required a lot of iterative prompt engineering, but we eventually landed on the right tone. Mr. Smith needs medication for a chesty cough, but he doesn’t need a lecture. Especially not from some snot-nosed, young, upstart pharmacist like you.
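We’re not about to publish our production prompts, but to give you a flavor of the mechanics, here is a toy sketch of how a persona like Mr. Smith can be wired up: a system prompt that sets his temperament, and a small helper that asks the model for his next line. The wording, the model name, and the helper function are all illustrative stand-ins, not Audirie’s actual implementation.

```python
# Toy persona sketch only; the prompt wording, model name, and helper
# function are illustrative, not Audirie's production code.
from openai import OpenAI

client = OpenAI()

MR_SMITH_PERSONA = (
    "You are Mr. Smith, a middle-aged customer at an Australian pharmacy. "
    "You have a chesty cough and want medication quickly. You are rude, "
    "pushy, and impatient: you interrupt, you complain about being in a "
    "hurry, and you bristle at anything that sounds like a lecture. "
    "Stay in character no matter what the pharmacist says."
)

def mr_smith_reply(conversation):
    """Return Mr. Smith's next line, given the dialogue so far."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "system", "content": MR_SMITH_PERSONA}] + conversation,
    )
    return response.choices[0].message.content

# One turn of the simulation: the student opens with a standard
# counseling question, and Mr. Smith answers in character.
history = [
    {"role": "user",
     "content": "Hi Mr. Smith, do you mind if I ask a few questions about your cough?"}
]
print(mr_smith_reply(history))
```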
To be honest, we fully expected these tech-savvy, college-aged leviathans of pharmaceutical knowledge to casually brush aside the faux emotions we had programmed into Mr. Smith. Wouldn’t Gen Zers, raised in cyberspace and nursed by smartphones, inevitably adopt a “meh” attitude to AI-simulated annoyance? We girded ourselves for pronouncements of “This is hella lame, bro,” and “Weak sauce, fam.”
We couldn’t have been more wrong.
The students strode up to our simulation with the confidence of unchastened youth. Many of them deftly ignored Mr. Smith’s initial pronouncements of being in a rush and not having any time for idle chitchat. But as Mr. Smith continued to harangue our students, something interesting happened: They began to sweat. And not just metaphorically.
They tripped over their words. They paused frequently, looking around for assistance as if someone might save them from Mr. Smith’s aggravated gaze. One student turned to me and exclaimed in exasperation, “I don’t know what to do with this guy.” Guy. The most vivid example was a student who watched as her classmate stammered and blushed his way through the interaction. Without another word, she turned around and walked away. Game, set, match, Mr. Smith.
Inside the “Mind” of Mr. Smith
What was happening here? To answer that question, I delved into the scientific literature and found a 2019 paper titled “Feeling our way to machine minds: People’s emotions when perceiving mind in artificial intelligence.” The authors made a discovery that seemed to explain, at least empirically, the reaction of our students: people do indeed experience a range of emotions when interacting with AI, and they do so because they perceive a type of mind at work.
According to the study, there are two kinds of “mind”: agentic and experiential. Agentic mind relates to the higher functions of the human brain: reasoning, planning, and goal pursuit. Experiential mind relates to expressing emotion. As an example, babies and animals have high experiential mind but low agentic mind. AI is generally perceived as having moderate agentic mind but low experiential mind. That is to say, we perceive that artificial intelligence can use logic and reason, but we don’t think it can actually express feelings or elicit a strong emotional reaction in human beings.
Have you ever gotten upset with Google because it wouldn’t give you results that matched exactly what you were searching for? I know I have. (Thanks for pointing me to the 1983 Monty Python movie, but I really would like to know the meaning of life.) We perceive Google’s AI-powered engine as capable of logical reasoning, so we blame it when that logic goes awry. But would we say Google is unethical? That it’s outright lying to us to suit its own nefarious ends? We would not, because we do not see an emotionally based experiential mind in Google’s algorithm.
In this case, however, our pharmacy students did attribute experiential mind to Mr. Smith. Why was that? The research points to several factors. First, Mr. Smith is represented as an avatar; he looks like the typical middle-aged patient that students frequently encounter in a pharmacy setting. You can’t mistake him for an actual human, but he has aspects of anthropomorphic design that are vaguely lifelike: a human-looking head, eyes, mouth, and body. And, of course, his voice: Mr. Smith sounds very much like a real person to our ears. When AI takes on human beings’ physical appearance, actions, and attributes, we inevitably begin to perceive it as having some level of experiential mind.
Several other factors enhance and reinforce this effect. Second, when AI is involved in a social interaction, we might default to the emotions we would normally display in similar real-life situations. In our example, our students tapped into the emotions they would experience if faced with an angry customer in the flesh; they became apologetic and flustered, and they lost their train of thought.
Third, when AI encroaches on areas where it didn’t exist before, it can cause a range of reactions in the user. Our use of AI in counseling scenarios was unique and unprecedented. Students, therefore, reacted (at least initially) with curiosity, trepidation, and unease. According to the study, the most prevalent emotion experienced under these types of distinctive circumstances is surprise, followed by amazement and amusement. We definitely saw the first two in our students, but you can replace “amusement” with “discombobulation.”
Interestingly, while the simulation proved challenging to navigate, students reacted with overwhelming positivity upon completion. Every student we spoke to afterward expressed delight and wonder at the realistic feel of the technology. When AI produces an outcome that we perceive as extraordinary, it reinforces our feeling of experiential mind.
Fourth, because mind perception is related to morality, personal interactions with AI might involve moral emotions such as shame or embarrassment. If something makes you feel uneasy or even humiliated, you’re more likely to imbue it with emotional agency. After all, there’s no way mere computer code has the power to make me feel embarrassed and flustered. That would make me a feeble-minded idiot! So our subconscious comes to the rescue, wrapping us in the psychologically comforting cocoon of a protective lie.
Perhaps it is true that “human beings define their reality through misery and suffering,” as The Matrix’s Agent Smith succinctly noted. Or maybe that’s just true of fourth-year pharmacy students.
ChatGPT Will See You Now
Our ChatGPT-powered avatar was able to elicit and (on some level) dictate the emotions of our users, but that’s only half the story. To demonstrate true EQ, ChatGPT also had to display a certain level of empathy. How can that be possible? The answer was subtle but, upon reflection, completely obvious. In each case and throughout every simulation, ChatGPT was providing our pharmacy students with a large dose of emotional and mental health support. Yes, you read that correctly.
What is empathy? Broadly speaking, it’s the ability to understand another person’s thoughts and feelings in a situation from their point of view rather than your own. What’s important to note is that ChatGPT doesn’t have a point of view. It won’t judge you, berate you, or sarcastically point out all your foibles (well, it might if we program it to do so, but its heart wouldn’t be in it). By its nature, ChatGPT became a compassionate and nonjudgmental companion as our users went through the assessment.
As part of our protocol, we deliver a variety of metrics at the end of each assessment. This gives the user a detailed and wholly objective readout on how they did. Audirie analyzes more than a dozen speaker delivery metrics — pace, pitch, tone, grade level of speech, etc. — and over 30 behavioral measures. Did you sound Friendly? Energetic? Compassionate? Assertive? At times, due to the newness of the experience, students performed worse than they expected. What’s the most empathetic way to deliver this type of difficult feedback? I would argue it’s with AI — and that it’s not even close.
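To make those metrics a little less abstract, here is a toy illustration of two of the simpler ones: pace, as words per minute, and grade level of speech, using the standard Flesch-Kincaid formula with a crude syllable count. This is not Audirie’s pipeline; it is just a sketch of the kind of arithmetic hiding behind a couple of those numbers.

```python
# Toy versions of two delivery metrics: pace (words per minute) and
# grade level of speech (Flesch-Kincaid). Not Audirie's actual pipeline.
import re

def speaking_pace(transcript: str, duration_seconds: float) -> float:
    """Words per minute, a simple proxy for pace."""
    return len(transcript.split()) / (duration_seconds / 60.0)

def flesch_kincaid_grade(transcript: str) -> float:
    """Approximate reading grade level of the spoken text."""
    sentences = max(1, len(re.findall(r"[.!?]+", transcript)))
    words = transcript.split()
    word_count = max(1, len(words))
    # Crude syllable estimate: count groups of vowels in each word.
    syllables = sum(
        max(1, len(re.findall(r"[aeiouy]+", word.lower()))) for word in words
    )
    return 0.39 * (word_count / sentences) + 11.8 * (syllables / word_count) - 15.59

sample = (
    "Take one tablet twice a day with food. "
    "Come back and see us if the cough lasts more than a week."
)
print(f"Pace: {speaking_pace(sample, duration_seconds=10):.0f} words per minute")
print(f"Grade level: {flesch_kincaid_grade(sample):.1f}")
```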
Think back to a time you performed for a friend. You could have been telling a story, acting something out, or playing an instrument. And you were bad — hideously so. If your friend gave it to you straight, no matter how gently they broke the news, it would undoubtedly have caused you some level of emotional distress. Humans inherently know this to be true, so we tend to avoid delivering unvarnished feedback to people we like. Sometimes deception is a necessary defense mechanism to save friendships and bolster self-esteem. My bet is that your friend looked you square in the eye and called an apple the Bolivian Navy.
Our pharmacy students had no such reaction to the AI-generated feedback we supplied them. A strange and wonderful dichotomy had emerged: we had hit upon a kind of magical alchemy by eliciting emotional responses during the simulation and then delivering feedback in a logical, dispassionate manner. We could make the simulations more realistic and at the same time provide feedback that was well received and more likely to be acted upon. Concrete, actionable steps that arm the user with the tools to become more confident and capable — isn’t that the textbook definition of successful mental health therapy?
By validating the emotions of individuals as they went through our simulation and providing accurate, actionable steps toward improvement with total honesty and transparency, we succeeded in enhancing the mental well-being of our students. ChatGPT successfully demonstrated what by all accounts could be referred to as empathy. In the end, our students felt heard and supported.
The Emotional Journey Ahead
Audirie will continue to find new and innovative ways to leverage EQ in forthcoming ChatGPT-powered simulations. And while imbuing ChatGPT with emotional intelligence holds great promise, we recognize the need for balance. There are important ethical considerations to ponder. For example, how can Audirie and other AI-powered platforms ensure bad actors don’t exploit the emotional vulnerabilities of users? My company remains committed to responsible AI development, addressing ethical concerns, and creating a future where emotional intelligence plays a significant role in human-AI collaboration.
Our journey toward giving ChatGPT emotional intelligence has been an exciting and challenging one. By understanding and responding to human emotions, ChatGPT will transform how it interacts with users: forging deeper connections, providing better support, and contributing to improved mental well-being. To echo what Lister tells Kryten after successfully teaching him to lie, cheat, and be completely and totally unpleasant: “I think this is the start of a beautiful friendship.”
References
1. Shank, D. B., Graves, C., Gott, A., Gamez, P., & Rodriguez, S. (2019). Feeling our way to machine minds: People’s emotions when perceiving mind in artificial intelligence. Computers in Human Behavior, 98, 256–266.
2. Gray, H. M., Gray, K., & Wegner, D. M. (2007). Dimensions of mind perception. Science, 315(5812), 619.
3. Gray, K., & Wegner, D. M. (2012). Feeling robots and human zombies: Mind perception and the uncanny valley. Cognition, 125(1), 125–130.
4. Klein, J., Moon, Y., & Picard, R. W. (2002). This computer responds to user frustration: Theory, design, and results. Interacting with Computers, 14(2), 119–140.
5. Waytz, A., Morewedge, C. K., Epley, N., Monteleone, G., Gao, J.-H., & Cacioppo, J. T. (2010). Making sense by making sentient: Effectance motivation increases anthropomorphism. Journal of Personality and Social Psychology, 99(3), 410–435.
6. Epley, N., Waytz, A., & Cacioppo, J. T. (2007). On seeing human: A three-factor theory of anthropomorphism. Psychological Review, 114(4), 864–886.
7. Malle, B. F., Scheutz, M., & Austerweil, J. L. (2017). Networks of social and moral norms in human and robot agents. In A world with robots (pp. 3–17). Springer.
8. Gray, K., Young, L., & Waytz, A. (2012). Mind perception is the essence of morality. Psychological Inquiry, 23(2), 101–124.