{"id":34714,"date":"2024-01-15T09:38:16","date_gmt":"2024-01-15T17:38:16","guid":{"rendered":"https:\/\/blogs.georgefox.edu\/dlgp\/?p=34714"},"modified":"2024-01-15T23:16:49","modified_gmt":"2024-01-16T07:16:49","slug":"all-good-robots-go-to-heaven","status":"publish","type":"post","link":"https:\/\/blogs.georgefox.edu\/dlgp\/all-good-robots-go-to-heaven\/","title":{"rendered":"The Wisdom of Artificial Intelligence"},"content":{"rendered":"\r\n<p>The title says it all. <em>Robot Souls<\/em>. In her latest book, Eve Poole explores questions like:<br \/><br \/>What would it take for robots to have souls? In order to answer that question, we have to define what a soul really is, which she discusses at length. [1]\u00a0<br \/><br \/>Then the next question is, would it even be possible to program souls into our artificial intelligence? <br \/><br \/>Finally, would robots endowed with souls actually be our desired outcome or not? <br \/><br \/>I have to give Poole immense credit; she thinks through these questions and their implications so well while I feel like I can barely wrap my head around them. She seems to approach the whole subject from a wonderfully refreshing perspective. The \u201cbetter question,\u201d she says, \u201cis to ask what kind of humans we want to be, in relation to AI.\u201d [2] In other words, how do our choices regarding the development and use of artificial intelligence affect our humanness? <br \/><br \/>Much of Poole\u2019s discussion boils down to a question of knowledge vs. wisdom. Obviously, AI can be programmed with all the knowledge available to humankind. Even more impressively, AI can sift through and recall all that knowledge in mere seconds. What AI lacks is an effective sense of how to make a wise or moral decision. Here a definition of wisdom might be helpful. Tim Keller writes, \u201cWisdom can be defined as: competence with regard to the complex realities of life. 
It has to do with understanding a particular situation and then knowing the right thing to do.\u201d [3] This aligns with an Aristotelian perspective on \u201cphronesis,\u201d or \u201cpractical wisdom,\u201d which enables its possessor to deliberate well \u201cabout what is good and beneficial,\u201d and thereby enables one to see \u201cwhat conduces to living well as a whole.\u201d [4]<br \/><br \/>All this raises the question: can AI be programmed to make wise moral decisions? As Poole says, moral thinking is \u201cbest done by humans because of the complexity of their thinking\u201d; however, in the same thought she cautions that humans are naturally limited by their \u201ctendency towards bias and error.\u201d [5] In response to this question, I did a quick Google search to see what progress is currently being made in this area, and this led me down a fascinating rabbit trail. I got as far as MIT\u2019s Moral Machine [6], which stopped me in my tracks. As best I understand, this project is designed to collect data on the moral judgement calls that humans would make in order to program AI in moral reasoning. It presents participants with a moral problem (all variations on the classic Trolley Problem [7], at least as far into it as I got) and it tracks real humans\u2019 moral decisions. 
Mostly it\u2019s asking the question, \u201cWhich is the lesser of two evils?\u201d <br \/><br \/><a href=\"https:\/\/blogs.georgefox.edu\/dlgp\/wp-content\/uploads\/2024\/01\/Dilemma_Reluctant_Car_2.jpg\"><img loading=\"lazy\" decoding=\"async\" class=\"size-medium wp-image-34754 aligncenter\" src=\"https:\/\/blogs.georgefox.edu\/dlgp\/wp-content\/uploads\/2024\/01\/Dilemma_Reluctant_Car_2-300x218.jpg\" alt=\"\" width=\"300\" height=\"218\" srcset=\"https:\/\/blogs.georgefox.edu\/dlgp\/wp-content\/uploads\/2024\/01\/Dilemma_Reluctant_Car_2-300x218.jpg 300w, https:\/\/blogs.georgefox.edu\/dlgp\/wp-content\/uploads\/2024\/01\/Dilemma_Reluctant_Car_2-1024x745.jpg 1024w, https:\/\/blogs.georgefox.edu\/dlgp\/wp-content\/uploads\/2024\/01\/Dilemma_Reluctant_Car_2-768x559.jpg 768w, https:\/\/blogs.georgefox.edu\/dlgp\/wp-content\/uploads\/2024\/01\/Dilemma_Reluctant_Car_2-150x109.jpg 150w, https:\/\/blogs.georgefox.edu\/dlgp\/wp-content\/uploads\/2024\/01\/Dilemma_Reluctant_Car_2.jpg 1028w\" sizes=\"auto, (max-width: 300px) 100vw, 300px\" \/><\/a><br \/>This certainly seems like a woefully incomplete moral metric, but perhaps it\u2019s a decent baby-step of a start? (Question mark intentional. I have my doubts.) We seem to be missing a piece or two of the puzzle. I am reminded of the verses in Isaiah 55:8-9 (ESV):<br \/><br \/>For my thoughts are not your thoughts,<br \/>neither are your ways my ways, declares the LORD.<br \/>For as the heavens are higher than the earth,<br \/>so are my ways higher than your ways<br \/>and my thoughts than your thoughts. <br \/><br \/>True wisdom requires much more than a series of ethical dilemmas where we determine the lesser of two evils, let alone one in which the moral decision is determined by majority vote! We need true wisdom from God, the source of all knowledge and wisdom. 
James 3:17-18 (ESV) describes this for us: <br \/><br \/>But the wisdom from above is first pure, then peaceable, gentle, open to reason, full of mercy and good fruits, impartial and sincere. And a harvest of righteousness is sown in peace by those who make peace.<br \/><br \/>Wisdom from above is certainly our best starting point. Beyond that, I admit all I\u2019ve done in this post is raise more questions than I\u2019ve answered. But isn\u2019t that reflective of where we currently are with AI? The reality is that we know monumental ethical dilemmas are on the horizon, even more than we\u2019ve already encountered. May we tread carefully and seek wisdom from above. <br \/><br \/>__________________________________________<br \/>1 Eve Poole, <em>Robot Souls: Programming in Humanity<\/em> (Abingdon: CRC Press, 2024), chapter 6.<\/p>\r\n<p>2 Ibid., Kindle location 553.\u00a0<\/p>\r\n<p>3 Tim Keller, <em>Gospel in Life<\/em>. <a href=\"https:\/\/podcast.gospelinlife.com\/e\/training-in-wisdom-1635521853\/\">https:\/\/podcast.gospelinlife.com\/e\/training-in-wisdom-1635521853\/<\/a>. Accessed November 22, 2023.\u00a0<\/p>\r\n<p>4 University of Chicago Center for Practical Wisdom. <a href=\"https:\/\/wisdomcenter.uchicago.edu\/news\/discussions\/wisdom-and-tradition-aristotle\">https:\/\/wisdomcenter.uchicago.edu\/news\/discussions\/wisdom-and-tradition-aristotle<\/a>. Accessed November 22, 2023.<\/p>\r\n<p>5 Eve Poole, <em>Robot Souls: Programming in Humanity<\/em> (Abingdon: CRC Press, 2024), Kindle location 1635.<\/p>\r\n<p>6 Moral Machine. <a href=\"https:\/\/www.moralmachine.net\/\">https:\/\/www.moralmachine.net\/<\/a>. Accessed November 22, 2023.<\/p>\r\n<p>7 Wikipedia. <a href=\"https:\/\/en.wikipedia.org\/wiki\/Trolley_problem\">https:\/\/en.wikipedia.org\/wiki\/Trolley_problem<\/a>. Accessed November 22, 2023.\u00a0<\/p>\r\n","protected":false},"excerpt":{"rendered":"<p>The title says it all. Robot Souls. 
In her latest book, Eve Poole explores questions like: What would it take for robots to have souls? In order to answer that question, we have to define what a soul really is, which she discusses at length. [1]\u00a0 Then the next question is, would it even be [&hellip;]<\/p>\n","protected":false},"author":186,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_acf_changed":false,"_monsterinsights_skip_tracking":false,"_monsterinsights_sitenote_active":false,"_monsterinsights_sitenote_note":"","_monsterinsights_sitenote_category":0,"footnotes":""},"categories":[2310],"tags":[2489,2090],"class_list":["post-34714","post","type-post","status-publish","format-standard","hentry","category-doctor-of-leadership-3","tag-dlgp02","tag-poole","cohort-dlgp02"],"acf":[],"_links":{"self":[{"href":"https:\/\/blogs.georgefox.edu\/dlgp\/wp-json\/wp\/v2\/posts\/34714","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/blogs.georgefox.edu\/dlgp\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/blogs.georgefox.edu\/dlgp\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/blogs.georgefox.edu\/dlgp\/wp-json\/wp\/v2\/users\/186"}],"replies":[{"embeddable":true,"href":"https:\/\/blogs.georgefox.edu\/dlgp\/wp-json\/wp\/v2\/comments?post=34714"}],"version-history":[{"count":3,"href":"https:\/\/blogs.georgefox.edu\/dlgp\/wp-json\/wp\/v2\/posts\/34714\/revisions"}],"predecessor-version":[{"id":34780,"href":"https:\/\/blogs.georgefox.edu\/dlgp\/wp-json\/wp\/v2\/posts\/34714\/revisions\/34780"}],"wp:attachment":[{"href":"https:\/\/blogs.georgefox.edu\/dlgp\/wp-json\/wp\/v2\/media?parent=34714"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/blogs.georgefox.edu\/dlgp\/wp-json\/wp\/v2\/categories?post=34714"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/blogs.georgefox.edu\/dlgp\/wp-json\/wp\/v2\/tags?post=34714"}],"curies":[{"name":"wp","href
":"https:\/\/api.w.org\/{rel}","templated":true}]}}