Artificial Intelligence equals a 21 gram weight loss?

Written by: Jana Dluehosh on September 9, 2023

When considering this topic, I immediately found myself drifting to the apprehensive side of AI.  I think I’ve seen too many movies where this type of intelligence leads us to a place where the machines have taken over humanity.  This conversation has been going on for a very long time, as machines have been replacing humans in manufacturing for decades. The smart ones educated themselves in mechanical engineering and repair, since parts wear out, phew.

While listening to ChatGPT and Its Impact on Education by Michael Webb, it was apparent how far down the road we already are in this discussion, and, as seems to be usual, we humans are trying to catch up!  Michael talked about how one educational solution to keep students honest was using technology to detect cheating, which leads back to the technology being able to evade detection[1].  Around and around we go. It resonates with how I feel being a parent to teens in this day and age.  They are brilliant, able to intuit how technology works, and speak a language that is completely foreign to me.

Lucinda McKnight offered “Eight ways to engage with AI writers in higher education,” and two of her ideas led me down a path of discovering how AI could influence my area of study; what I found confirmed, for the most part, my fears of where AI is going. She suggested we “explore and evaluate the different kinds of AI-based content creators that are appropriate for your discipline”[2] and “Research and establish the specific affordances of AI-based content generators for your discipline. For example, how might it be useful to be able to produce text in multiple languages, in seconds? Or create text optimized for search engines?”[3]

I was honestly a little afraid of what I would find in my research, as I work in healthcare.  I realize that technology, including AI, has helped advance our success in treating disease and injury.  As hospice clinicians we have faced deep scrutiny (well deserved, thanks to a number of bad characters who were unethical and defrauded the government; if you are interested, read more under the False Claims Act[4]).  As part of this accountability, the field has become very rigid, and the expectation that doctors and hospice teams predict death has become the focus.  I found a number of web articles where AI has become a go-to for hospices to do better “predicting” of hospice eligibility.  My company moved to a different sort of electronic medical record software that lives on our phones. It has become a large amount of checking off boxes and answering the same questions over and over again.  On a positive note, our company has never been more compliant with hospice laws and regulations than we are now.  The software gives us accountability and serves as a reminder of the information we need to gather as we work with the dying.  However, and this is a large however, it has taken away the gut instinct of a lot of my co-workers, and that is soul-crushing.  As I train and encourage my co-workers as one of the administrators, I often tell them (note: swear word coming) “make this documentation your bitch”; this swear word gives me street cred as a “ministry” person in a secular work environment.  I tell them this because all of us have gut instinct, our own soul and intelligence, and they start to lose confidence in themselves.

AI, it seems, can both help and hinder the process for hospice.  What if AI in healthcare documentation could detect a misstep or mistake by our healthcare workers?  Mistakes in healthcare can cause death; this is serious stuff, and it creates a culture of perfection.  I could see, just as Sal Khan states in his TED talk, “what if we could create personalized tutors for students, and personalized teaching assistants for teachers?”[5]  With live or bedside documenting in healthcare, a healthcare tutor could detect a mistake like a wrong medication dose (which happens often) or a missed symptom that could be a pathway to a more accurate diagnosis.  I think it could do this the way Khanmigo does, by asking a question back to spark critical thinking in the healthcare worker within their lane, prompting the clinician to ask more questions rather than just giving the answer.  With this in mind, I can see how AI could benefit the clinician.  In my NPO, I am hoping to give courage and soulfulness back to healthcare workers as they help patients come to terms with death and dying.

I believe that slowly (and quickly) we are losing the soul of healthcare, and our healthcare workers are leaving because they no longer feel connected to the work or because they experience vicarious trauma.

In the movie 21 Grams it is argued that the soul “weighs 21 grams as life leaves the body.”[6]  How appropriate this concept feels as I consider what Artificial Intelligence could do to healthcare: take the “life” out of it.  Having listened to Sal Khan, I now see potential for how it could help, but I cannot help but sound the alarm that we cannot take the “soul” out of everything we do!  Confidence in ourselves and acknowledgment of our own shortcomings is what weighs 21 grams.  This is a weight loss program I don’t want to fully embrace.

[1] Michael Webb, “ChatGPT and Its Impact on Education,” https://drive.google.com/drive/folders/1eMgz1LWSXLOeFrPcAMDf0z5KEwFVhAs7, accessed September 6, 2023.

[2] Lucinda McKnight, “Eight Ways to Engage with AI Writers in Higher Education,” THE Campus Learn, Share, Connect, October 14, 2022, https://www.timeshighereducation.com/campus/eight-ways-engage-ai-writers-higher-education.

[3] McKnight, “Eight Ways to Engage with AI Writers in Higher Education.”

[4] Jim Parker, “Hospices Deploy AI for Swifter Clinical Decisions, Improved Compliance,” Hospice News, March 4, 2022, https://hospicenews.com/2022/03/04/hospices-deploy-ai-for-swifter-clinical-decisions-improved-compliance/.

[5] Sal Khan, “How AI Could Save (Not Destroy) Education,” TED talk, May 1, 2023, https://youtu.be/hJP5GqnTrNo?si=UfBAYH343hxgCYaK.

[6] 21 Grams, directed by Alejandro G. Iñárritu (New York City: This Is That Productions, 2003).

About the Author

Jana Dluehosh

Jana serves as a Spiritual Care Supervisor for Signature Hospice in Portland, OR. She chairs the corporate Diversity, Equity, Inclusion and Belonging committee, and presents to and consults with chronically ill patients on addressing quality of life versus, and alongside, medical treatment. She trained as a World Religions and Enneagram spiritual director through an Anam Cara apprenticeship at the Sacred Art of Living Center in Bend, OR. Jana utilizes a Celtic Spirituality approach toward life as a way to find common ground with diverse populations and faith traditions. She has mentored nursing students for several years at the University of Portland in a class called Theological Perspectives on Suffering and Death, and has taught on grief in the Trauma Certificate program within the Graduate Counseling program at Portland Seminary.

2 responses to “Artificial Intelligence equals a 21 gram weight loss?”

  1. Adam Harris says:

Very clever on the 21 grams aspect! I responded to your question earlier on my post, and after reading your blog, I see it may not have been what you meant concerning NDEs.

“I found a number of web articles where AI has become a go-to for hospices to do better ‘predicting’ of hospice eligibility.”

    If this is what you meant, I haven’t looked into that yet. As far as “taking the soul out of what we do”, I think that is a good way to put most of our fears. We want to know there is a human life behind poetry, songs, scripts, narratives and even medical predictions to an extent. I do see how AI could be an excellent tool as long as it stays in its lane, which I think we are figuring out at the moment!

2. Russell Chun says:

    Hi,

Just as a rabbit trail… “Falsely Accused or Caught Using ChatGPT? Here’s What To Do.”

Academic integrity has been an important topic in the realm of AI tools like ChatGPT. Students have been getting accused of cheating based on metrics that aren’t definitive. If you’re accused, how should you go about proving your innocence?
    https://goldpenguin.org/blog/falsely-accused-of-using-chatgpt/#:~:text=To%20effectively%20defend%20yourself%20against%20false%20accusations%2C%20it,ChatGPT.%20Here%27s%20how%20you%20can%20gather%20relevant%20information%3A

On a more thoughtful side… Proverbs 22:6: “Start children off on the way they should go, and even when they are old they will not turn from it.”

If we are teaching AI, is it okay to start teaching it morality and the ethics of behavior, such as: “Young AI, if you cannot reference it properly, don’t suggest it to gullible users”?

Michael Webb reminds us that we are training AI. If AI becomes self-aware (we see it in all the movies), then AI will need that moral/ethical database to draw upon. The understanding of what is “right versus wrong” is absolutely key. Then we might have a child that is not only smart, but also wise.

    Shalom…
