The Deeper Mysteries of AI and Humanity
In Ex Machina, a programmer named Caleb is chosen to perform a Turing test on a female robot to determine her capabilities and consciousness. It soon becomes apparent that the robot is more self-aware and deceptive than anyone could have imagined.
Nathan: Over the next few days you’re going to be the human component in a Turing test.
Caleb: (responds with shock)
Nathan: Yeah, that’s right, Caleb. You got it. Because if the test is passed, you are dead center of the greatest scientific event in the history of man.
Caleb: If you’ve created a conscious machine, it’s not the history of man. That’s the history of gods.[1]
Sci-fi movies such as Ex Machina portray the philosophical issues related to artificial intelligence (AI) that Eve Poole addresses eloquently in Robot Souls: Programming in Humanity. Her main argument in the book is that we should make AI more human by programming into it an element of our humanity that she calls “junk code”. She writes, “Perhaps Junk Code is actually soul; and that it’s not our consciousness that makes us special, but our souls.”[2] In Chapter 7, she delves into the details of human Junk Code, which she categorizes as emotion, mistakes, storytelling, sixth sense, uncertainty, free will, and meaning. By articulating, nurturing, and protecting these, she believes we can mine them for insight into how they might benefit AI, and into how, by excluding them, we might be limiting AI.[3] She argues:
All of these lines of ‘junk’ code promote co-operation, and the kind of reciprocal altruism and sense of mattering that creates sustainable communities over time. For humans, existence is not a solitary journey. Even before we knew who we were, we had already benefitted from belonging. We are the inheritors of all the practices and institutions that our forebears created for us: tribe, family, health, education, law. We are designed to be humans in relationship. And given that our design has carefully included and retained all this misunderstood code and perhaps we should take it more seriously?[4]
I must admit that most of what she wrote seemed far-fetched to me. My current thinking about AI involves how to best utilize ChatGPT ethically. Let’s face it: most church denominations haven’t done a lot of thinking about AI beyond whether pastors should be using ChatGPT to write their sermons or if theology students should be using it to write their papers. It seems we are more worried about AI and plagiarism than about human existence.
However, I did find her description of what makes us uniquely human intriguing, and her encouragement to articulate, nurture, and protect these traits in order to mine them for insight also resonated with me. This is as much a book about what makes us human and how we might embrace our humanity as we seek to build strong community as it is a book about AI. If these are indeed the elements that create sustainable communities over time, it would benefit church leaders to think about how these seven elements might be recognized and enhanced to build stronger human connections. For the remainder of this blog, I’d like to reflect on just two of the seven elements of Junk Code and how church leaders might enhance them.
Emotions
Emotions play a key role in the human experience. Poole writes, “Even more so than the qualitative contribution emotions make to our lives, it is this system of the tagging of useful memories that plays such a crucial role in our future safety and happiness, by the maintenance of an emotional play-list designed to resource us when we are most in need.”[5]
In his book Emotional Intelligence: Why It Can Matter More Than IQ, Daniel Goleman examines the indispensable leadership ability to manage one’s own emotions and to recognize emotions in others so that we can handle relationships effectively.[6]
As we seek to build strong communities, the task of creating a positive environment and building positive memories is an important one. One of the ways that we can go about this is to take the time to celebrate how God has met us in the past. The psalmist often rehearses God’s goodness, remembering how God has worked faithfully. I encourage leaders to practice this regularly with boards, with staff teams, during services, and regularly at congregational meetings.
As much as we want to create a positive environment, we must also remember that mistakes play an important role in the human experience.
Mistakes
Leaders often tend to hide failure, choosing instead to focus on success. It’s easy to create a carefully curated life and leadership persona that looks flawless. In fact, congregations often encourage it. After I shared a story of a struggle with anger in church, a congregational member told me that he really didn’t want to know that I struggled with the same issue that he did. It seems that leaders are not allowed to fail. Yet Poole argues, “It is only this capacity to err and the conscience that tries to stop us that drives personal improvement. Arguably it also drives societal improvement, as we try to mend the wrongs we see around us.”[7]
In The Five Dysfunctions of a Team, Patrick Lencioni identifies trust as the building block of a healthy team. A team learns to trust only through vulnerability: sharing weaknesses, skill deficiencies, interpersonal shortcomings, mistakes, and requests for help.[8]
One of the ways that we can encourage vulnerability is to be vulnerable ourselves. Leaders often present a façade of near perfection, creating a false reality for the people we are trying to lead. It’s healthy for leaders to create environments where failures are shared and opportunities for confession and repentance are offered to all.
These are just two of the seven elements of the human “Junk Code” that are indispensable for building and strengthening our communities. If I had more time, I would explore the other five in more detail. However, articulating, nurturing, and protecting emotions and mistakes in the communities we lead would go a long way toward strengthening the human element of our churches and teams.
Perhaps by modelling what it means to be human, and by educating others on what it means to be human, we would be better informed about how to interact with AI.
[1] Ex Machina, directed by Alex Garland (Universal Pictures), 108 min.
[2] Eve Poole, Robot Souls: Programming in Humanity (Boca Raton, FL: CRC Press, 2024), Kindle, 120.
[3] Poole, Robot Souls, 119.
[4] Poole, Robot Souls, 150.
[5] Poole, Robot Souls, 125.
[6] Daniel Goleman, Emotional Intelligence: Why It Can Matter More Than IQ, 10th anniversary ed. (New York: Bantam Books, 2005), 43.
[7] Poole, Robot Souls, 127.
[8] Patrick Lencioni, The Five Dysfunctions of a Team: A Leadership Fable (San Francisco: Jossey-Bass, 2002), 196.