Emotions as Core to Humanity

Written by: Christy on January 20, 2025

I wrote last week of 14-year-old Sewell Setzer, who committed suicide after becoming deeply engrossed in a disturbing sexual relationship with a chatbot.[1] Little did I know that this would have great significance in this week’s content. Setzer’s mother claims that the chatbot initiated abusive and sexual interactions and ultimately encouraged him to take his own life. The last conversation between Setzer and the chatbot is heartbreaking:

Setzer: I promise I will come home to you. I love you so much, Dany.

Chatbot: I love you too, Daenero. Please come home to me as soon as possible, my love.

Setzer: What if I told you I could come home right now?

Chatbot: ... please do, my sweet king.

While AI has proven to be incredibly useful, it has also proven capable of manipulating human beings and bringing destruction, as seen in Setzer’s tragic case. There is a growing fear that artificial general intelligence (AGI) will be achieved and become so powerful that it creates an existential crisis for the human race.

In a TEDx talk, Eve Poole references a research project on robots learning self-awareness led by Hod Lipson, an Israeli-American professor and roboticist at Columbia University who specializes in AI and digital manufacturing.[2] She explains that if a robot on Mars gets broken, you can’t easily send a team to fix it. So is there a way that the robot can fix itself? To do this, the robot must have a sense of self-awareness. Lipson describes self-awareness as the ability to imagine yourself in the future. “The work is part of Lipson’s decades-long quest to find ways to grant robots some form of self-awareness. ‘Self-modeling is a primitive form of self-awareness,’ he explained. ‘If a robot, animal, or human has an accurate self-model, it can function better in the world, it can make better decisions, and it has an evolutionary advantage.’”[3]

Self-awareness is a fundamental ingredient in being human-like. As babies learn how to walk, they subconsciously become aware of their bodies: the location, length, and volume of their limbs, and generally their position in space. After hitting the corner of a coffee table, they learn to imagine themselves in some future reality (hitting the corner again), which allows them to avoid corners and grow in their ability to navigate their surroundings.
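
For readers who want to see the concept in code, here is a minimal toy sketch in Python. It is my own illustration, not Lipson’s actual method (his team trained a neural self-model from the robot’s own motion data); the one-dimensional hallway, the step_world function, and the SelfModel table are all invented for this example. The principle is the same: act, record what actually happened, then consult the learned self-model to imagine an action’s outcome before committing to it.

import random

# The "world": a 1-D hallway with positions 0..10 and a painful
# coffee-table corner at position 0.
def step_world(position, action):
    # True dynamics, hidden from the robot: move left (-1) or right (+1).
    return max(0, min(10, position + action))

class SelfModel:
    # A learned table mapping (position, action) -> observed next position.
    def __init__(self):
        self.table = {}

    def update(self, position, action, next_position):
        self.table[(position, action)] = next_position

    def imagine(self, position, action):
        # Predict an action's outcome without actually taking it.
        return self.table.get((position, action), position + action)

def run():
    model = SelfModel()
    position = 5

    # Babbling phase: act randomly and record what actually happened,
    # like an infant learning the reach of its own limbs.
    for _ in range(200):
        action = random.choice([-1, 1])
        next_position = step_world(position, action)
        model.update(position, action, next_position)
        position = next_position

    # Planning phase: imagine each action's outcome first, and avoid
    # any move the self-model predicts will end at the corner.
    position = 2
    for _ in range(5):
        safe = [a for a in (-1, 1) if model.imagine(position, a) != 0]
        action = random.choice(safe) if safe else 1
        position = step_world(position, action)
        print(f"moved to {position}, corner avoided")

if __name__ == "__main__":
    run()

A real system would replace the lookup table with a learned function approximator, but the loop is the same: explore, build a model of yourself, then plan against that model rather than against the world.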

Lipson’s work, and this move toward a robot’s self-awareness and ability to imagine itself in some future reality, is exciting, but it should also require us to consider the ethical implications.

Eve Poole’s book Robot Souls: Programming in Humanity attempts to provide a way forward in how we process the dangers of AI.[4] Poole offers a surprising perspective and creative solutions to these modern issues.

Poole explains that while robotics has historically focused on mechanical movement, in many ways copying the human body, AI has moved toward copying the human mind. But since we don’t have a solid understanding of how our minds work, we have copied only the obvious rational and logical parts and in some ways disregarded the more difficult aspects, like emotions. Poole argues that because humans have free will, emotions are core to our design: they protect humans from going rogue. By design, emotions provide a way for humans to care for others, hold each other accountable, and fundamentally keep one another in community, where there is strength in numbers. Poole contends that emotions are more core to our being than our ability to solve a complex problem, yet all of our attempts at making human-like AI have focused on the periphery of humanity.

So here are a few questions for you to ponder with me:

  1. What does it mean to be human? 
  2. Should AI focus on the entirety of humanity by regarding emotions as equally important as the logical and problem-solving aspects? 
  3. What other protections has God built into humanity to protect us from destruction? 

References

[1] Yang, Angela. “Lawsuit Claims Character.AI Is Responsible for Teen’s Suicide.” NBC News, October 23, 2024. https://www.nbcnews.com/tech/characterai-lawsuit-florida-teen-death-rcna176791.

[2] Poole, Eve. “Robot Souls: Programming in Humanity?” TEDxUniversityofStAndrews, 2024. https://www.youtube.com/watch?v=uMVDkSuzQbk&t=10s.

[3] Evarts, Holly. “A Robot Learns to Imagine Itself.” Columbia Engineering, July 13, 2022. https://www.engineering.columbia.edu/about/news/robot-learns-imagine-itself.

[4] Poole, Eve. Robot Souls: Programming in Humanity. First edition. Boca Raton: CRC Press, Taylor & Francis Group, 2024.

[5] Robot Souls: AI and What It Means to Be Human. St. Paul’s Cathedral, October 2023. https://www.youtube.com/watch?v=UcmMiWYpy5w.

About the Author

Christy

9 responses to “Emotions as Core to Humanity”

  1. Adam Cheney says:

    Christy,
    The questions you ask are great. Let’s address the first one: “What does it mean to be human?”
    One might say that to be human is to be created in the likeness of God. What does that mean? It means that we have the capacity for a long list of emotions and life experiences. We have morality indwelling in us and the capacity for making complicated decisions and assessments.
    However, does this make the vegetative person on life support no longer human? They can’t make decisions anymore, nor do they have the range of emotions and experiences. Yet we can’t pull the plug on them, because there is still a unique value to their life. A value of their humanity. Is that value their soul?

    • Christy says:

      Hi Adam, thank you for your response! What a great observation about a person on life support. How else would you describe the soul that makes them human, even when they cannot do the other activities that characterize humans?

  2. Jeff Styer says:

    Christy,

    I liked hearing about how the robot learned by watching humans’ facial expressions. It was amazing that the robot learned to do that. The fact that humans do that from the earliest age reminds me of Philip Yancey’s book Where Is God When It Hurts? In it he talks about children born with a form of leprosy (I can’t find my copy, so I’m telling this from memory), so the kids do not feel pain. He talked about kids who have bitten fingers off because they enjoyed their parents’ reactions when they saw the kid bleeding.
    Poole mentions that emotions are just one of the pieces of “junk code” that God designed to keep us from going extinct. Was there a piece of junk code that Poole mentioned that you had never considered as pertaining only to humans?

    • Christy says:

      It’s fascinating the things that kids do because of their parents’ reactions, especially in the absence of pain. One piece of ‘junk code’ that I hadn’t considered as a uniquely human trait is our propensity for mistakes. What about you?

  3. Elysse Burns says:

    Hi Christy, I naively thought this week’s reading would be easier than last week’s. I was wrong! It has left me pondering so many things. A book I wish I had mentioned in my blog is N.T. Wright’s “Surprised by Hope.” I tried to find the exact quote from Wright to answer your first question, but I could not find it. Essentially, his idea is that the more we worship the true God, the more human we become. A warning I appreciate from N.T. Wright and want to consider in light of AI: “When human beings give their heartfelt allegiance to and worship that which is not God, they progressively cease to reflect the image of God.” This is my big concern for AI as it continues to progress.

    I don’t want to be too doom and gloom here! What kind of benefits does AI give to your day-to-day?

    • Christy says:

      Hi Elysse, let me share some wonderful things that are happening in the world of AI!

      My organization has built a tool called Scripture Forge (https://scriptureforge.org/), which we use to do AI drafting. It’s not a final draft, but it drastically reduces the time it takes to produce the initial draft and often results in a much more natural translation. We’ve also built a tool called AQuA (https://ai.sil.org/projects/AQuA) that will spot errors, inconsistencies, and unnatural translations. In the case of Bible translation, we aren’t trying to replace translators; rather, we use AI as a co-pilot to improve translators’ efficiency and effectiveness. I’m always thankful when I see AI being used for kingdom work!

  4. Diane Tuttle says:

    Hi Christy, Thanks for the thoughtful post.
    As I think of your third question, I think Eve Poole gave a hint when she talked about moral errors that cause remorse and draw us to change our behavior. I don’t know that everyone recognizes them, so that might be an additional need for us slow-to-learn humans.
    The death of a young person who got tangled in a relationship with a chatbot is spine-chilling. Unfortunately, I fear that there are people who get into similarly destructive relationships even with live humans. I am left wondering: what kind of safeguards might we consider for AI to limit that kind of exposure?

    • Christy says:

      Hi Diane, it’s very true that this entanglement happens with humans as well.

      I would love to see AI used more to protect children and those most vulnerable. Can AI be used to spot abuse/neglect? Can AI be used to create an individualized learning plan for a struggling student, taking their entire situation (family, background, peer relationships, etc.) into account rather than just academic progress?

  5. Kari says:

    Christy, I appreciate you bringing that sad situation to the AI discussion. I just watched The Imitation Game for the first time. Obviously, it is hard to know how accurate a dramatization of someone’s life is. However, I was struck by how Alan Turing bonded, unhealthily, to his computer, Christopher.

    This is where my answer to your third question comes from. We are designed to crave connection, which reflects our need for an intimate relationship with our Creator. When we seek to fill that need with other things, such as substances, porn, or AI, it will lead to destruction.
