Robots, Communion, and a Thought Experiment
At an Evangelical church I used to attend, there was an eccentric older single woman. She always sat in the front row of the service and was easy to spot: she would come into the worship center in a motorized wheelchair with her little lap dog riding in the front of the cart. The dog never made much of a fuss, and the woman did not seem to be looking for attention either. A fuss was certainly made, however, the day people watched her give her dog the elements of the Lord's Supper.
This week's reading of Eve Poole's Robot Souls makes me want to try a little thought experiment with all of you. How will we respond when the day comes that robots begin to come into our churches? How will we respond when a robot desires to take communion alongside our congregations? I am sure that the church of my youth, which went into a bit of an uproar over a dog lapping up the symbolic grape juice, will not stand for a robot taking communion. But are there denominations that might take a more liberal stance?
Poole highlights how AI has grown rapidly by copying the left side of our brain, but it has been much slower to conquer the right side.[1] The right side of our brain is more abstract, theoretical, and unencumbered by the rules of the left. Our right brains are, indeed, what make us unique and uniquely human. Poole describes some of these unique features as "junk code," and she argues that it is this junk code that programmers need to incorporate into the development of AI.
Is this junk code simply human? Would we trust AI if it made mistakes as it learned? Poole states, "This analysis of humanity's Junk Code collectively points towards our overwhelming need to matter, both to ourselves and to others. Our coding drives us into community."[2] If the junk code is what drives us into community, then would junk code incorporated into AI do the same? Would AI ever need community? If AI developed a need for community through its junk code, is it plausible that it could develop some sort of morality through that community? If AI had human-like morality within a community setting, might it begin to seek out spiritual guidance? If robots cannot find the answers to life's questions of morality and ethics, could they begin to seek them out in a church setting?
In a book on AI ethics, the author writes, "AI ethics is not necessarily about banning things; We also need a positive ethics: to develop a vision of the good life and the good society."[3] Could this good life incorporate community? One would assume so. Is the good life found in communistic or atheistic settings? Many who have endured those cultures seem to argue that they are not the ideal of the good life. Throughout the world, immigrants seek out the good life found in liberal democracies, where there is a plurality of viewpoints and personal freedoms. In last week's reading, Patrick Deneen argues that liberalism has failed precisely because it succeeded, having gone too far in its own logic.[4] Do morality-based robots represent the extreme end of liberal democracy and signal its imminent death? Or do they mark the beginning of a new era in which our paradigms will be shifted?
One fascinating point Poole brings up is that corporations already serve as examples of non-human entities that can be held accountable within the legal system, paving a way for AI to also be held to legal standards. This raises many questions about how we might engage AI philosophically. "Some say that machines can never be moral agents at all. Machines, they argue, do not have the required capacities for moral agency, such as mental states, emotions, or free will… On the other side of the spectrum are those who think that machines can be full moral agents in the same way that humans are."[5]
Circling back to the Lord's table: I realize that this discussion has not had room to dive into the theological understandings and perspectives of communion, but has instead focused on the morality and assumptions of AI. Still, if and when robots develop an ethical framework, or some morality, how will we respond? Poole argues, "The AI would need to be able to adjust its own ethical framework to fit the worldview it chose…If we really have a soul, affording dignity to our partners in creation is the human thing to do, because it is also about who we are too."[6] Likewise, in AI Ethics, the author notes, "Some argue that mistreating an AI is wrong not because any harm is done to the AI, but because our moral character is damaged if we do so."[7]
Would we be damaging our own moral character, or affronting our own dignity if we were to fail to allow AI to take communion right alongside us?
As I have walked through this thought experiment, I realize the variety of philosophical and theological rabbit holes we might go down. For instance, can a robot sin? What is the essence of sin? Does a robot have a place in the Lord's kingdom? God is the one who has equipped our minds and abilities. Are there robots in heaven? How then might we explain the cherubim-like creatures who can hover and fly in all directions?
_____________________________________________________________________
[1] Eve Poole, Robot Souls: Programming in Humanity, 1st ed. (London: CRC Press, 2023), 42.
[2] Poole, Robot Souls, 99.
[3] Mark Coeckelbergh, AI Ethics, The MIT Press Essential Knowledge Series (Cambridge, MA: The MIT Press, 2020), 175.
[4] Patrick J. Deneen, Why Liberalism Failed, Politics and Culture (New Haven, CT: Yale University Press, 2018).
[5] Coeckelbergh, AI Ethics, 51.
[6] Poole, Robot Souls, 124.
[7] Coeckelbergh, AI Ethics, 57.