Imago Dei vs. Imago Homo Sapien

Written by: Tim Clark on January 15, 2024

“The Ultimate Computer” was an episode of Star Trek[1] featuring the character Dr. Richard Daystrom, a scientist tasked with installing his powerful “M5 computer” on the Starship Enterprise so it could control the ship during upcoming wargames. The efficient supercomputer quickly turned deadly, first killing a crew member who was in the way of a power source, and then turning on other Federation ships, killing hundreds because it could not discern the difference between a true threat and a simulated wargame scenario.

The show’s twist was that the M5 had been imprinted with Dr. Daystrom’s “engrams.” The computer was programmed to be conscious and aware and to have the ability to make judgments… and it had something of a human soul. The problem was that its soul had been copied from a very left-brained, possibly psychopathic, scientist.

The M5 was designed with the best intentions. It was intended to take over the dangerous work that people had to do in space, so that men and women would no longer need to be killed in the pursuit of exploration. However, it had also been programmed to “survive, so that man could be protected.” That benevolent algorithm had apparently not been written with Asimov’s Laws of Robotics[2] in mind, which led the computer to kill hundreds of humans in order to survive, so that it could continue existing to serve the greater good of protecting humankind.

To overcome and switch off the computer, Captain Kirk had to rely on his own “junk code” over his logic. While the M5 clearly had superior efficiency, only a human could discern the moral nuance of the situation and even make, and learn from, happy accidents (thanks, Bob Ross[3]) that would end up saving lives.

This Star Trek episode is a great example of mid-century science fiction cautioning against rogue Artificial Intelligence[4], and it makes a perfect companion to the book Robot Souls by Eve Poole.

In her short, dense, and fascinating book, Poole dives quickly into the deep end. Robot Souls acts as a 30,000-foot flyover and basic primer on diverse A.I. topics, including the history of computing, a philosophy of knowing, ethical questions regarding A.I. sentience (“an ability to sense, an ability to perceive, and feel things”[5]) and sapience (“processing or having the ability to possess wisdom”[6]), the legal implications of A.I. personhood, and more.

But the book’s primary argument seems to be that if A.I. is going to be conscious and make judgment decisions, it must also be programmed to have a soul, because without a soul A.I. could easily act in a psychopathic manner, making decisions based on efficiency alone instead of the kinds of things humans value. And when a computer can make terrifyingly quick decisions with potentially global implications, that is a sobering argument.

Poole makes the case that the soul is more than feeling or thinking or judging; it includes human “junk code,” those parts of us that could easily be streamlined and programmed out for efficiency but that, she contends, are precisely what make us human. This junk code “shows up in several categories. The most famous is our very messy emotions, closely followed by our unshakeable ability to keep on making mistakes. We are also inclined to tell stories, are attuned to an uncanny Sixth Sense…, and we have an amazing capacity to cope with uncertainty. We also have a persistent sense of our own agency in the shape of free will, and a propensity to see meaning in the world around us.”[7] It’s only when A.I. includes this “inefficient” junk code that we can hope for qualities such as empathy and compassion; otherwise we may end up with only logical computers making only efficient decisions that don’t consider the thriving of humanity (which some argue is simply a logical and acceptable next step in the evolutionary process).

Up until now, I’ve mostly relegated truly disastrous possibilities regarding A.I. to the realm of science fiction. But after reading this book I find myself asking all kinds of questions I hadn’t asked before. Questions like: “Do we really want computers with souls?” Or “What happens if we program the wrong soul into a computer?” And “Because each human being is infinitely unique and we provide checks and balances to one another, what happens if all A.I. shares the same pathology and takes over the world?”

However, more important to me than the philosophical, ontological, or even apocalyptic questions is a theological one. Christians believe humans were created perfect, in the image of a perfect God, but were broken because of sin against God. While our souls are marred, our telos (completion) points us back toward God and wholeness. But what happens when broken humanity creates A.I. in its own image? Is it possible to have a good soul that’s been created in the image of a broken soul? Does something important deteriorate when there is a damaged copy of a damaged copy?

With a nod to my friend Dinka, who asked a question in his post last week about “Imago Dei vs. Imago Identity,” I’m wondering this week about the Imago Dei vs. the Imago Homo Sapien.

At the end of “The Ultimate Computer,” Spock declares that “computers are more efficient than human beings, not better.” Eve Poole suggests we might make them as good, if not better, by giving them souls. I’m not yet convinced that’s a great idea.

[1] Star Trek, season 2, episode 24, “The Ultimate Computer,” aired March 8, 1968.

[2] Eve Poole, Robot Souls: Programming in Humanity (Boca Raton, FL: CRC Press, 2023), 17.

[3] https://www.youtube.com/watch?v=_Tq5vXk0wTk

[4] See also 2001: A Space Odyssey, a film released the same year as this Star Trek episode.

[5] Robot Souls, 146.

[6] Ibid.

[7] Robot Souls, 74.

About the Author


Tim Clark

I'm on a lifelong journey of discovering the person God has created me to be and aligning that with the purpose God has created me for. I've been pressing hard after Jesus for 40 years, and I currently serve Him as the lead pastor of vision and voice at The Church On The Way in Los Angeles. I live with my wife and three kids in Burbank, California.

12 responses to “Imago Dei vs. Imago Homo Sapien”

  1. Travis Vaughn says:

    Tim, you asked, “But what happens when broken humanity creates A.I. in its own image?” I started to think about this as I responded to Kally’s post. Genesis 5 came to mind… when “Adam…fathered a son in his own likeness.” It’s subtle, but not so subtle…the shift from God making humans in his own image to his creation parenting other image bearers — yes, still bearing the image of God — but also bearing the image of broken humanity….eventually worshipping and serving “the creature rather than the Creator.” (Rom 1:25). If you haven’t yet seen Netflix’s Black Mirror, at least the first four seasons, check out an episode or two…what happens when technology — a human creation and meant to be a good thing — becomes an ultimate thing. Indeed, the question around A.I. has to include a theological reflection.

    • Tim Clark says:

      Travis, that’s a great point, that “Adam fathered a son in his own likeness”. I hadn’t made that connection and I agree that creation parents other broken image bearers.

      Doesn’t it seem, though, that there is a difference between the act of parenting—the co-creating of a being who, while having a sin nature like us, is still “fearfully and wonderfully” “knit together” in the womb by God—and intentionally forming a sentient being that “we” knit together in the lab?

      However, I do believe, and have preached, that we are creative beings, created to create, so I’m not sure if I’m being contradictory or maybe just overly cautious. But it seems that trying to instill a soul that we don’t (and probably can’t) even fully understand could be crossing the line into trying to become our own gods.

      • Travis Vaughn says:

        Tim, you are 100% correct. Yes, there is a chasm between the act of parenting/co-creating another human and the act of creating a sentient “being” in a lab. I should have clarified…the works of our hands, though we are fearfully and wonderfully made, are marred by brokenness. This has to affect what we create (including AI).

  2. Esther Edwards says:

    Tim,
    Love your questions. They make me think of all that could go desperately wrong with humans creating humans. We are created as creative beings, but since soul also denotes life and breath (Genesis 2:7), I venture to think God has the edge on us. We can create desired outcomes, but the initial breath of life?
    My friends just had a baby through embryo adoption; they were able to choose the embryo based on baby pictures of the parents, and they now have a healthy baby girl who looks like them. Truly, scientific advancements have come so far (and with a lot of ethical questions to think through), but some things just seem out of reach for our humanness to create.

    • Tim Clark says:

      I agree. I think that if we were to cross that threshold, it wouldn’t be just a technical advance but a massive tectonic shift in humanity and history.

      But there’s something in me that wonders if God will let us go so far and no further… a line that He won’t allow humanity to cross? I’d guess developing and shaping a soul would be that line, but I’m interested to see where the debate, as well as the progress, goes.

  3. Russell Chun says:

    I am a Trekkie, so thanks for reminding me of that story.

    The highlight for me was page 93: the algorithms that identified openness, conscientiousness, extraversion, agreeableness, and neuroticism… leading to the Myers-Briggs test and, one wonders, the Enneagram? So many people sign up for the A.I.-generated algorithms.

    Aristotle’s intellectual virtues of episteme (scientific knowledge), techne (craft), and phronesis (value judgments/wisdom) were new to me, and it was interesting to see the A.I. debate reach back into HISTORY to evaluate today’s A.I. question.

    I guess my prayer is that the Holy Spirit is the “soul” that is breathed into the shell of AI.

    Shalom…

    • Tim Clark says:

      I’m not even a Trekkie. 🙂 I was telling a friend about the book and he sent me a link to that episode as a connection. It’s fun to take our learnings in this program and test them out “in the wild.”

      I hadn’t thought of the “personality tests” that would be done on AI if they had souls. Wow. So much to consider.

  4. Kim Sanford says:

    Thanks for your post, Tim. You helped me understand Poole’s idea of junk code more accurately. Somehow when I was reading the book I got stuck on “junk” meaning bad, as in junk food, but she means it more as inefficient and in a way extraneous. That helps me see her main point more clearly.

  5. John Fehlen says:

    You said, “The problem was that its soul had been copied from a very left brained, possibly psychopathic, scientist.”

    This is so similar to something that JENNY DOOLEY said, and I told her it was her “mic drop” moment: “What are the implications for programming AI regarding values, morals, and ethics when we aren’t doing such a great job of defining them for ourselves?”

  6. Dinka Utomo says:

    Hi Tim! Thank you for your insightful post. I’m intrigued by it.

    You wrote, “But what happens when broken humanity creates A.I. in its own image? Is it possible to have a good soul that’s been created in the image of a broken soul? Does something important deteriorate when there is a damaged copy of a damaged copy?”

    Your question seems to represent the thoughts and feelings of millions of other people out there. I admit, I thought that way too. However, I realized that A.I. emerged from human wisdom and intelligence, which also came from God, and I began to think that perhaps God really wanted this to happen. In my opinion, we need to practice discernment to find the answer. If it comes from God’s will, I believe everything that comes from HIM is good and has a noble purpose.
