
Artificial Intelligence as Partner, not Foe

Written by: Laura Fleetwood on October 8, 2023

Much has been made of the ubiquitous rise of artificial intelligence (AI) over the past year. In fact, it’s difficult to believe that it’s been less than a year since “OpenAI released an early demo of ChatGPT on November 30, 2022, and the chatbot quickly went viral on social media as users shared examples of what it could do.”[1] From that time forward, there has been no shortage of speculation about how this tool will impact daily life and the future of work and education as we know it. While there are many facets of AI to explore, this blog focuses on the dangers, limits, and possibilities of AI for my doctoral studies.

Dangers of AI

Type “dangers of AI” into Google and you’ll receive no fewer than 981,000,000 results. ChatGPT itself lists the dangers of AI as: 1) bias and discrimination, 2) lack of transparency and explainability, 3) security risks, 4) job displacement, 5) privacy concerns, 6) autonomous weapons, 7) moral and ethical decision-making, 8) dependence on AI and loss of human skill, 9) superintelligent AI, and 10) ethical responsibility and accountability.[2] Of these dangers, I believe the two most impactful to my doctoral studies are the potential for bias and discrimination and the question of ethical responsibility and accountability.

Artificial intelligence is only as accurate as the information fed to it. And as a student of the human mind, I know that every human being is predisposed to bias.[3] In our doctoral studies, we are taught to evaluate our sources to ensure we are using diverse and vetted information. Artificial intelligence does no such vetting. For many topics, it takes work and time to find opposing views and additional angles, and relying solely on ChatGPT for sources or information risks leaving whole sides of a topic unexamined.

In terms of ethical responsibility and accountability, artificial intelligence has no moral compass. And those who rely on it exclusively to do their academic work have none, either. Education is not simply about getting assignments done; it’s about learning to think for oneself, to digest information and return it to the world through our own unique perspective and lens. Students who choose to use AI to do their work for them are forfeiting the very skills they are spending money, time, and effort to learn.

Limits

Looking at the dangers of AI, it’s tempting to write it off as a tool that should be shunned in the world of education; however, that would, ironically, also be a danger. Rather than treating this as a black-and-white, all-or-nothing situation, I would argue that using AI for doctoral studies is a gray area in which the student understands the limits the technology poses and gets creative about how to use it within those limits. For example, with regard to the danger of bias and discrimination discussed above, AI can be a great springboard for research, so long as seeking out additional sources and points of view with one’s own brain is the critical next step. Likewise, asking AI to write a complete paper is unethical, but asking it to provide an outline or several ideas for how to begin may be just the start a tired and tapped-out student needs to gain momentum and get an assignment done.

Possibilities

While it’s true that there is no shortage of potential pitfalls with AI in education, the discussion above and voices such as Sal Khan[4] and David Boud[5] are proving that there are many possibilities for how it can not only enhance an educational program like a doctoral degree but perhaps take it to a new level entirely. In my own work, I see the opportunity to use it as a springboard for ideas, a tool to discover sources and topics related to my research that I may have missed on my own, and a productivity aid that saves hours on routine tasks it can accomplish in seconds. I choose, carefully, to view AI as a partner in my doctoral studies rather than a foe.


[1] Bernard Marr, “A Short History Of ChatGPT: How We Got To Where We Are Today,” Forbes, accessed September 20, 2023, https://www.forbes.com/sites/bernardmarr/2023/05/19/a-short-history-of-chatgpt-how-we-got-to-where-we-are-today/.

[2] ChatGPT, response to “dangers of AI,” September 20, 2023, OpenAI, https://chat.openai.com/chat.

[3] “Unconscious Bias in the Believer,” DLGP Blog, George Fox University, accessed October 8, 2023, https://blogs.georgefox.edu/dlgp/unconscious-bias-in-the-believer/.

[4] Sal Khan, “How AI Could Save (Not Destroy) Education,” TED, Vancouver, BC, April 2023, https://youtu.be/hJP5GqnTrNo?si=UfBAYH343hxgCYaK.

[5] David Boud, “AssessmentAI,” Foundation Director of the Centre for Research in Assessment and Digital Learning, Deakin University.

About the Author


Laura Fleetwood

Laura Fleetwood is a Christian creative, certified Enneagram Coach, doctoral student at Portland Seminary and Creative Director at her home church, Messiah St. Charles. As a published author, national faith speaker, podcaster and self-described anxiety warrior, Laura uses storytelling to teach you how to seek the S T I L L in the midst of your chaotic life. Find Laura at www.seekingthestill.com
