Pondering the Inevitable
In class, Dr. Clark mentioned that Robot Souls was an easy book that wouldn’t be difficult to read. Those may not have been the exact words, but that is how I heard the preamble. My experience of this book was anything but easy. Part computer science and part philosophy, Robot Souls may read easily, but the concepts it raises and the possible ramifications of AI could be lasting. In this post I will discuss some of the author’s views on how humans differ from robots, although the space and time constraints of this post will not allow me to delineate all of this author’s valuable nuggets. I will also consider why Poole thought it important for robots to be programmed thoughtfully and well, and finally I will share my perspective on the topic, including questions that arose from my reading.
I approached this book with a positive bias toward Dr. Poole, having read her previous book, Leadersmithing. I also started with the appreciation that innovation has been an important part of the history of the world. Specifically, the advent of the computer has made research, reading, writing, and even leisure and work activities easier and more accessible. I am far from a reactionary, yet I do feel a bit of caution when I think of AI: the lack of privacy it could create and how it might be regulated.
What makes humans different?
Qualia: the properties of a person’s mental states. They are:
- Ineffable – known only through direct experience
- Intrinsic – something felt independently
- Private – something that belongs only to you; even if others share a similar activity, how you experience it is singular to you.
- Immediately apprehensible – you experience something in real time[1]
Junk Code: in computer programming, junk code typically refers to the redundant code that gets left out[2]. In AI it takes on a somewhat different meaning.
- Messy Emotions
- Human Mistakes
- Free Will
- Giving meaning to the world around us[3]
These are the things that Poole says make us human. For instance, mistakes can lead to innovation, invention, and discovery[4]. But moral errors are of greater value, according to Poole: when recognized, they give a person a sense of regret for the transgression, which ultimately pushes them to improve their behavior[5]. Poole seemed to say that this learning is what matters.
One area Poole mentioned was the possibility of robots gaining rights, such as those of humans. I balked at this concept[6]. Yet Dr. Poole approached the concept of AI in a matter-of-fact way and methodically discussed the value of AI, the pieces that make humans different, and the cautions needed. After pondering it for a while, I would agree with Robin Gill, who reviewed the book, that “it is a plea for better knowledge and creative governance”[7].
When I let my curious mind run amok, I think about how the cigarette companies responded in the 1950s when the negative effects of smoking became known: they flooded the market with manufactured doubt[8]. I don’t know that AI creators are doing that, but I wonder if we are only hearing the benefits of Alexa, or whatever device is currently used most in homes, so that people come to experience AI in their homes as natural.
Regardless of whether Eve Poole is right or wrong in her assessment of where AI could be headed and the possible ramifications of having limited involvement and oversight in its creation, I think she had the courage to address the subject matter with thoughtful, educated concepts.
Ultimately, whether AI will progress is no longer a question for Dr. Poole. She believes that it will happen and that it is incumbent on humans to do it well before it is too late[9]. One of her solutions is for humans to approach AI from a parental point of view: just as parents must teach their children and care for them, humans are the parents of AI. One of the ways to parent well, in her mind, is to figure out how to program into AI the junk code that humans have innately[10]. It would be a valuable step toward making AI appear more human and giving it the coding to make decisions that allow for uncertainty and mistakes. Listening to Dr. Poole speak, I sensed someone who really is trying to work out the conundrum of what AI could mean for the future.
Finally, after reading the book, a review of the book, and then listening to Dr. Poole in a YouTube interview, I am left with more questions than strong opinions.
Some of my questions include:
Is the development of AI ultimately a human quest to become God?
Who decides what characteristics are valuable enough to be programmed into AI? What happens to everything else?
What could be some of the unintended consequences of AI?
Will the value in diversity be lost?
Isn’t the purpose of technology to make life easier for humans?
How do we trust that AI will not be programmed for evil?
Can creativity be programmed or will innovation cease?
Will human brains become mushy messes because we won’t need to think, since AI could do it for us?
On final reflection, my questions seem cautionary. Maybe that’s where I am.
[1] Eve Poole, Robot Souls (New York: CRC Press, 2024), 44.
[2] Eve Poole, 74.
[3] Eve Poole, 74.
[4] Eve Poole, 77.
[5] Eve Poole, 78.
[6] Eve Poole, Robot Souls (New York: CRC Press, 2024), 27n38.
[7] Robin Gill, “Book Review: Robot Souls: Programming in Humanity by Eve Poole,” Church Times, September 1, 2023, 2. https://www.churchtimes.co.uk/articles/2023/1-september/books-arts/book-review/book-review-robot-souls-programming-in-humanity-by-eve-poole.
[8] Tim Harford, How to Make the World Add Up (London: The Bridge Street Press, 2020), 14.
[9] Douglas Giles, interviewer, and Eve Poole, “AI and Us: Interview with Dr Eve Poole about Her New Book, Robot Souls,” July 27, 2023, accessed January 22, 2025, 52:10. https://www.youtube.com/watch?v=c61iCcLcRol.
[10] Eve Poole, 113.
8 responses to “Pondering the Inevitable”
Hi Diane, you asked good questions at the conclusion of your post! I think this is what makes AI so daunting: we don’t have all the answers. Also, thank you for outlining “qualia”; I made sure to put that in my notes. I agree with you; I did not find this book an easy read either. However, reading about AI advancements made me think about how it benefits those with intellectual and developmental differences. Has this impacted your work in any way?
Thanks for reading and for the question, Elysse. We use a lot of technology in our programs, mostly in the realm of communication devices or iPads with educational games. Not so much with AI as referenced in the book.
Thanks Diane, I too appreciated your questions at the end of the blog. What, in particular, gave you pause about granting rights to AI?
Hi Graham, thanks for the read and the question. Where I ultimately land at this point is that AI is a machine programmed by humans to respond to stimuli that were programmed into it. Genesis 2:7 says that Adam became a living being when God breathed the breath of life into him. I don’t see that in AI, at least not as we know it. So, in my limited way, I think we would really be giving the creators of the AI units more rights. I don’t think AI is necessarily bad; I just don’t think it is human or alive.
Thank you for this analysis Diane. I’m wondering how you currently use various forms of AI? Do you envision yourself using AI in the future? How? What could go wrong?
Hi Debbie, other than ChatGPT and maybe a few other small things, I don’t use it a lot. I intentionally have not put an Alexa in my home; it felt a little like “Big Brother” watching us in our personal space. This concern came from having the Alexa at my daughter’s house start talking about something going on at the dinner table, even though Alexa wasn’t invited into the conversation. I still see AI as a machine programmed by humans to sound like humans.
Hi Diane,
Given Dr. Poole’s perspective on approaching AI development from a parental point of view, how might this analogy shape the way we govern and oversee the creation of AI to ensure it aligns with human values and ethics?
Hi Shela, thanks for the question. I am not sure I have a good answer, because I really think the government should regulate it, just as it does a telephone or internet line. I do not think an AI robot is needed at all times, unless it is programmed to work in a hospital and could give good palliative advice on the floor.