DLGP

Doctor of Leadership in Global Perspectives: Crafting Ministry in an Interconnected World

AI: Enhance and Not Harm

Written by Jenny Steinbrenner Hale on September 7, 2023

At my daughter’s graduation from Seattle Pacific University four years ago, the commencement speaker, Skip Li, spent the first half of his speech presenting “The Top Ten Lessons in Life,” and then he took an abrupt turn and used the second half of his speech to warn the audience of the dangers of artificial intelligence (AI).[1] We all thought that to be an odd graduation speech. What we did not know at the time is that the AI genie was already out of the bottle, and as David Boud remarks, there is no putting that genie back.[2]

As the use of AI becomes more prevalent and we learn more about its potential, some, like Skip Li, fear this technology is leading us into a “moral wilderness.”[3] Others see AI as a tool that can “enhance human intelligence, potential, and purpose.”[4] Sal Khan, founder of Khan Academy, notes, “I think we’re at the cusp of using AI for probably the biggest positive transformation that education has ever seen.”[5] As with almost all new technologies, the possibilities for both abuse and enhancement are endless.

Contemplating the Weaknesses and Strengths of AI Usage in our Studies

The topic of AI is enormous, with many avenues to pursue. For this blog, I will focus specifically on the dangers, limits, and possibilities of AI for my studies at Portland Seminary.

Limitations and Dangers

When I think about using generative AI, it is helpful for me to understand how it works, so that I can better understand its limitations, dangers, possibilities, and potential. “Generative AI is a type of artificial intelligence that uses machine learning algorithms to create new and original content like images, videos, text, and audio.”[6] OpenAI, creator of ChatGPT, says, “We build our generative models using a technology called deep learning, which leverages large amounts of data to train an AI system to perform a task.”[7] Trained on large collections of data, text-producing AI programs learn to predict the probability of each possible next word and then choose among the most likely candidates. “During the training phase, the model learns the underlying patterns and structures present in the training data.”[8]
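To build intuition for this next-word prediction, here is a toy example in Python. This is only a minimal sketch of the idea: the tiny corpus and function names are my own invention, and real generative models use deep neural networks over enormous datasets, not simple word counts.

```python
from collections import Counter, defaultdict

# A tiny "training corpus" (invented for illustration).
corpus = ("the model learns patterns the model predicts words "
          "the model learns structure").split()

# Count which word follows which word in the corpus (a bigram model).
follow_counts = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    follow_counts[current_word][next_word] += 1

def predict_next(word):
    """Return the most probable next word and its estimated probability."""
    counts = follow_counts[word]
    total = sum(counts.values())
    best, n = counts.most_common(1)[0]
    return best, n / total

# After "model", the corpus shows "learns" twice and "predicts" once,
# so the model predicts "learns" with probability 2/3.
print(predict_next("model"))
```

The pattern is the same one the post describes: the system does not “know” anything; it has simply seen which words tend to follow which, and it chooses a likely continuation.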

Because the AI application is only as good as the data set used in its training, limitations can include social biases and outdated material. Lucinda McKnight of Deakin University points out that social biases occur in AI output when biases and prejudices are embedded in the data used to train the AI model, and when certain perspectives, people, and situational content are left out of the training data.[9] Limitations can also include “AI hallucinations,” which occur when the system produces “nonfactual responses” as a result of poor training or the system’s inability to interpret data correctly.[10] Another limitation is that the quality of the response is only as good as the query used to generate it; in addition, AI is not known to be good at calculating facts, specifically in the discipline of mathematics.[11]

Some of the dangers to my studies, following from the above limitations of AI, include spreading implicit and explicit bias, using information that is not up to date, and relying on data that is false. Another danger I see in using AI follows from its inability to operate as a thinking, feeling entity. AI does not think or feel. It operates through predictive algorithms, and yet it can appear to be responsive to, interested in, and even curious about us. Khan Academy’s new Khanmigo function offers students the benefits of an interactive tutor. I think there could be a tendency to relate to AI creations as caring beings, and yet we cannot have a meaningful relationship with an artificially generated “person.” There are benefits to learning from an AI tutor or counselor, but I think we need to always be aware of the dangers of human beings relating to AI creations as humans.

Possibilities

There are many possibilities for AI use in my studies. I am interested in experimenting with this tool and am curious about its effectiveness in the disciplines related to my NPO project. I will need to do some research in these areas. I will also need to learn to craft quality AI prompts and to judge the quality of AI outputs.[12]

Areas in which I want to try engaging AI, capitalizing on its strengths, include:

  • “Off-loading” routine work, such as summarizing a book.[13]
  • Seeking new and creative ideas for project development that could serve as a launch pad to new concepts I would not have otherwise discovered.
  • Experimenting with AI as a researcher: asking AI applications to produce information on various topics, and asking several AI applications to create text on the same topic so that I can compare the answers.[14]
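The last experiment above, posing one question to several AI applications and comparing the answers, could be sketched roughly as follows. Everything here is hypothetical: the `ask_app_a` and `ask_app_b` functions and their canned replies stand in for real API calls to real services, and the comparison (shared vocabulary) is just one crude way to line answers up side by side.

```python
# Hypothetical stand-ins for calls to two different AI applications.
# In practice each would call that service's real API.
def ask_app_a(question):
    return "Time in nature lowers stress and improves mood."

def ask_app_b(question):
    return "Spending time outdoors reduces stress and supports mental health."

def compare_answers(question, responders):
    """Collect each application's answer and report the words they share."""
    answers = {name: fn(question) for name, fn in responders.items()}
    word_sets = [set(a.lower().strip(".").split()) for a in answers.values()]
    shared = set.intersection(*word_sets)
    return answers, shared

answers, shared = compare_answers(
    "How does nature benefit human health?",
    {"App A": ask_app_a, "App B": ask_app_b},
)
print(shared)  # vocabulary common to both answers
```

Even a simple side-by-side comparison like this makes divergences between applications visible, which is exactly where human judgment about quality and accuracy has to come in.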

Conclusion

I am usually skeptical of new technology and slow to adopt the latest watches, phones, and computers. However, after learning some basic facts about AI, I am looking forward to exploring how to weave this technology into my learning.

I am now convinced that, as a society, we must become familiar with AI and together establish positive uses for this tool, strongly asserting boundaries that support positive implementation and discourage harmful implementation. As Sal Khan stresses in his TED presentation, those invested in the positive uses of AI must fight to ensure we put in place the guardrails and “reasonable regulations” needed to enhance, and not harm, human intelligence, potential, and purpose.[15]

[1] Chi-Dooh “Skip” Li, Seattle Pacific University Commencement Speech, University of Washington Campus, June 8, 2019.

[2] David Boud (Foundation Director, Centre for Research in Assessment and Digital Learning, Deakin University), “AssessmentAI.”

[3] Li, Seattle Pacific University Commencement Speech, 2019.

[4] Sal Khan, “How AI Could Save (Not Destroy) Education,” TED, Vancouver BC, April 2023, https://youtu.be/hJP5GqnTrNo?si=UfBAYH343hxgCYaK, 15:20.

[5] Khan, 00:42.

[6] Boud, 56:27.

[7] OpenAI Website, https://openai.com/research/overview.

[8] OpenAI, ChatGPT, https://chat.openai.com/?model=text-davinci-002-render-sha, generated in response to the query “How does generative AI work?”

[9] Lucinda McKnight, “Eight ways to engage with AI writers in higher education,” Times Higher Education, October 14, 2022, https://www.timeshighereducation.com/campus/eight-ways-engage-ai-writers-higher-education.

[10] Boud, 1:06:24.

[11] Boud, 1:50; Michael Webb, “Chat GPT-3 and Its Impact on Education,” 25:40.

[12] Boud, 56:27.

[13] Boud, 49:02.

[14] McKnight, https://www.timeshighereducation.com/campus/eight-ways-engage-ai-writers-higher-education.

[15] Khan, 15:20.

About the Author

Jenny Steinbrenner Hale

11 responses to “AI: Enhance and Not Harm”

  1. David Beavis says:

    In regards to AI’s usage of faulty, biased information – thus regurgitating biased content – AI is like a mirror of us. Therefore, we need to be careful with the content we receive from generative AI, just as we ought to be critical and check for biases with human-produced content. What are some steps we can take to be critical of the content we receive from generative AI?

    • Jenny Steinbrenner Hale says:

      Hi David, That is such a good point. Thanks for your comment regarding AI mirroring human input. Do you think it’s possible that AI can generate ideas beyond human input?

      Also, good question regarding how we can be critical of the content we receive from AI. This is a skill I want to develop and improve over time. I think in the same way we critique information from humans, we can critique information from AI. Does it line up with values I think are important? Does the content reflect the research of a particular discipline? How does the content compare across AI applications and non AI applications? There are some quick thoughts, but I definitely want to learn more deeply on this topic. Thanks, David. Appreciate your thoughts!

      • Kristy Newport says:

        Jenny,
        Great questions in response to David’s comments.
        I believe framing good questions will be key in determining the best use of AI and limiting its dangers.
        You propose:
        Does it line up with values I think are important? Does the content reflect the research of a particular discipline? How does the content compare across AI applications and non AI applications?

        I like these.

    • Kristy Newport says:

      “AI is a mirror of us.”

      I like your thoughts David.
      I believe I need to revisit the book we read last semester on Bias.

  2. Becca Hald says:

    Jenny, I love the way you applied the use of AI to your doctoral studies. I think we will be seeing more and more use of AI in the academic realm. My husband works in tech, so we have always been at the forefront of new technology (I got an iPhone the very first day they came out at the same store as Steve Wozniak), but it is not always easy to learn how to use. Are there specific aspects of your project which would benefit from the use of AI?

  3. Jenny Steinbrenner Hale says:

    Hi Becca, Thanks for your comments and thoughts. Regarding my project, I am interested to see what AI can generate about the benefits of nature on human health. I’m also interested to see if AI might be able to generate some creative ways, that I’ve not yet thought of, to deepen our relationship with God and improve overall health. Appreciate your question!

    Hope your week is going well. Can’t wait to join you soon!

  4. Hi Jenny, thanks for a great post here, and can’t agree more with your comment here. ” There are benefits to learning from an AI tutor or counselor, but I think we need to always be aware of the dangers of human beings relating to AI creations as humans.”
    It won’t take long before one realizes the difference between humans and AI systems. We need one another as humans and nothing in the world can replace that fact.

  5. Jenny Steinbrenner Hale says:

    Thank you, Jean, for your comment. I like how you point out that we need one another as humans and nothing can replace that. It will be so interesting to see how AI develops and how we work to keep it helpful and not harmful. I wonder if the human spirit will be resilient enough to remain true to valuing human connection as we navigate numerous technological changes.

  6. Shonell Dillon says:

    I think that AI will prove to be beneficial in the future. I enjoyed reading your opinion about this up-and-coming whirlwind in technology. Do you think that AI will be a help in constructing the type of program that you want to develop presently?

  7. Jenny Steinbrenner Hale says:

    Hi Shonell, That is such a good question. Someone challenged me to create an app for my program. I’ll have to think about that.

  8. Alana Hayes says:

    I can imagine four years ago… listening to that speech thinking… WHAT IN THE WORLD JUST HAPPENED!?

    “AI hallucinations,” which occur when the system produces “nonfactual responses” as a result of poor training or the system’s inability to interpret data correctly.

    To me… this is the worst case scenario! You almost have to make sure that you are well versed in your topic to confirm that it’s accurate. How can you fact-check such a robust system…?
