DLGP

Doctor of Leadership in Global Perspectives: Crafting Ministry in an Interconnected World

A CEO, an Intern, and Navigating A.I.

Written by: Travis Vaughn on September 4, 2023

Not long ago, the CEO[1] of a certain organization had to give a speech. The speech would be recorded and played for incoming trainees connected to a particular field of study and a particular university. I asked the CEO about that recording after he told me he had used AI to help write the speech. Three things I know about this CEO: he is well educated, super sharp, and extremely busy. Here’s what he said: “I needed the script in a couple of hours, and I asked ChatGPT to write a 3-minute video script talking about the benefits for (trainees) being engaged with (our organization). ChatGPT knocked it out of the park. I sent it over to the studio, they loaded it in the teleprompter, and I didn’t even have to practice. No errors. I only had to add a few sentences about some newer things that the AI generator wouldn’t have been able to find on the internet. This saved me 4-5 hours of prep time.”

Lucinda McKnight writes, “In higher education, worries about academic integrity have clouded exploration of the potentials of AI writing. Yet for people in many careers already, working effectively with AI writers is an essential part of their everyday responsibilities.”[2] David Boud seemed to agree with this, stating, “There is often more suspicion of (digital aids) in education than in work…”[3] In the case of my CEO friend, the second part of McKnight’s statement rings true.

Over the summer, I interviewed a 22-year-old intern[4] at a fast-growing tech company who uses ChatGPT for daily communication tasks. I asked how he used it. He said, “I used ChatGPT to create a tailored email to reach out to executives at commercial-sized companies.” I asked him how many emails he sent each day. “I would say about 30,” he replied. Not only did the company encourage the interns to utilize AI, the sales development representatives conducted seminars to promote its use. They referred to these workshops as “sales enablement.” The intern received excellent performance reviews, and at the end of the internship the company offered him a full-time position.

ChatGPT is not going anywhere. It is a present and future workplace reality. So what are the dangers, limitations, and possibilities? What questions do educators and students need to grapple with as they use AI? According to the experts, the question is not “if” but “when” and “how.”

One question could be: “As students use AI writers like ChatGPT, how will they recognize the validity of the content?” Is ChatGPT going to produce accurate results the way it did for the CEO, every time? Of course not. According to an educators’ guide highlighted in the Michael Webb video, ChatGPT “may produce content that perpetuates harmful biases” and “often generates inaccurate information.”[5] Moreover, “verifying AI recommendations often requires a high degree of expertise.”[6] David Boud also raised concerns about students’ ability to recognize the quality of AI outputs.[7] For students accustomed to using AI, this could be considered a danger.

Likewise, over-dependence on AI may inhibit praxis. This is a knowing-versus-doing problem. A student may be able to regurgitate information obtained through an AI writer, but is reciting the material the same as mastering it? The answer may be obvious, but the risk is real.

Another question could be, “With the widespread use of AI writers in every sector, how can I, the student, prepare to use it or respond to it, as others will undoubtedly use it in their communication to me?” A captain-obvious answer is, “Try it for yourself!” I say that, having yet to experiment with ChatGPT.

I’m also thinking about this question from a teacher’s perspective, as I will be teaching two courses this fall/spring at a local seminary. Maybe I’ll encounter its use among students, or maybe not. The seminary currently prohibits the use of many AI tools and services (yes, even seminarians cheat), but not in every situation; in some cases, the instructor or dean may grant a student approval. So for me, more (any!) exposure to ChatGPT may promote a better understanding of how it works and how a student might use it when writing an essay.

I do like one of the suggestions McKnight gives for educators’ use of AI in research: “They can research a topic exhaustively in seconds and compile text for review, along with references for students to follow up. This material can then inform original and carefully referenced student writing.”[8] As a doctoral student, one possibility is to incorporate this approach into the research for my NPO project beginning this fall.

At this point, I have more questions than answers, so one of the “limits” among the dangers, limits, and possibilities for me as a student is that I need more exposure to AI tools and more margin to critically reflect on their ethical implications. Along that line of thinking, here are a few resources (or requests for resources) I would like to explore on the subject:

  1. Robot Souls, by Eve Poole.[9] Yes, the same Eve Poole who wrote Leadersmithing. The book was recently released (August 2023), and here is an interview with Dr. Poole about why she wrote the book: https://www.youtube.com/watch?v=c61iCcLcRoI&t=13s
  2. In the David Boud video, under the “Where does this leave us?” slide, Boud suggests educators should check “that learning outcomes really reflect what all graduates must be able to do.”[10] What authors have tackled this subject? How will educators judge “whether students can demonstrate attainment of learning outcomes to a given standard?”[11] What new or additional research exists around assessment and the measurement of student competencies?
  3. Who are the subject matter experts currently writing about the ethical dimensions of artificial intelligence in a helpful and accessible way?


[1] I told the CEO that I would not disclose his name, organization, or industry. He gave me permission to use the story/interview.

[2] Lucinda McKnight, “Eight ways to engage with AI writers in higher education,” Times Higher Education, https://www.timeshighereducation.com/campus/eight-ways-engage-ai-writers-higher-education, accessed August 25, 2023.

[3] See David Boud AssessmentAI video, one of the videos Dr. Jason Clark shared with our cohort for this blog post.

[4] I received permission from the intern to use this story/interview.

[5] See “Chat GPT-3 and it’s impact on Education: Michael Webb,” a video shared by Dr. Jason Clark with our doctoral cohort for this blog post.

[6] See “Chat GPT-3 and it’s impact on Education: Michael Webb,” a video shared by Dr. Jason Clark with our doctoral cohort for this blog post.

[7] See David Boud AssessmentAI video, one of the videos Dr. Jason Clark shared with our cohort for this blog post.

[8] Lucinda McKnight, “Eight ways to engage with AI writers in higher education,” Times Higher Education, https://www.timeshighereducation.com/campus/eight-ways-engage-ai-writers-higher-education, accessed August 25, 2023.

[9] See https://www.amazon.com/Robot-Souls-Eve-Poole/dp/1032426624

[10] See David Boud AssessmentAI video, one of the videos Dr. Jason Clark shared with our cohort for this blog post.

[11] See David Boud AssessmentAI video, one of the videos Dr. Jason Clark shared with our cohort for this blog post.

About the Author

Travis Vaughn

10 responses to “A CEO, an Intern, and Navigating A.I.”

  1. Russell Chun says:

    Hi Travis,

    Thanks for your educator perspective.

    Here is another (from my blog): “Perhaps fueling our AI fears (think Terminator) is the use of AI in our current U.S. weapon systems. For instance, ‘Essentially a next-generation drone, the Valkyrie is a prototype for what the Air Force hopes can become a potent supplement to its fleet of traditional fighter jets, giving human pilots a swarm of highly capable robot wingmen to deploy in battle. Its mission is to marry artificial intelligence and its sensors to identify and evaluate enemy threats and then, after getting human sign-off, to move in for the kill.’”

    Hmmmm…human sign-off. During the first Gulf War, I helped design the Patriot missile movements from Saudi Arabia into Iraq.

    As the VII Corps (7th US Corps) accomplished its tasks, I watched in apprehension as the Patriot missile was deployed. We had to adjust it to hit tactical ballistic missiles fired at us by Saddam. The Patriot was originally designed to engage aircraft.

    We watched as the system engaged incoming missiles. The PROBLEM was that it launched several million-dollar missiles at ONE target. It became clear that a HUMAN had to watch, engage, and ride herd on the computer. That was in 1991.

    In 2023, we have new AI issues. AI is at the Mexican/US border tracking illegal crossings, and it is being used in conjunction with Canada to track maritime intrusions.

    Sigh…

    A new tool. One that we will have to explore, use and modify as the strengths and weaknesses become apparent.

    Shalom.

    • Travis Vaughn says:

      Russell, thanks for sharing and providing a window into more of your background. With current capabilities, in addition to the human decision-making that came into play in your story from 1991 and currently with the Valkyrie drone, I wonder what AI dilemmas are on the horizon within the military regarding the way intel is collected and decisions are made based on AI-enhanced/aided information. I wonder what DOD safeguards or protocols are in place (or not yet in place?) when it comes to AI services provided by military contractors.

      A bigger question…how are church leaders (e.g., pastors, teachers, small group leaders, etc.) doing when it comes to equipping Christians to live out the implications of their faith, in a variety of callings, including those who serve in the armed forces, and who is talking about this? Much to think about. I look forward to reading your full blog post.

  2. Esther Edwards says:

    Travis,
    I so appreciate all you had to share from an educator’s perspective. I was curious about what Dr. Eve Poole would have to say and was not disappointed. I love her wit and humor along with her ability to articulate the tough subjects.
    Her thoughts on AI and its possible future in the “thinking world” were very well-researched but also gave an excellent understanding of humanity and how it differs.
    She states that the “meta hallmark of soul is to be in community,” which is something AI falls very short on, though advances are being made. She states sarcastically at the end, “while we are on this ridiculous project to replicate and replace ourselves…then let’s do a better job.”

    • Travis Vaughn says:

      I do love the sarcasm and wit in that quote! I have not yet read Poole’s new book (It came out only last month, and alas, it was not a book assigned in our reading list.), but I did decide to include her other book in the syllabus I developed for a course I’m teaching this fall at a nearby seminary.

      I do hope that schools continue to figure out how to promote all of the positive ways that AI can be used to help in education (including seminaries). I hope we can increasingly frame the possibilities with “here is what we (as a school) are FOR” when it comes to AI, while at the same time continuing to be clear regarding the parameters. And, I do think we need to figure out, as image-bearers, all the potential ways the technology can encourage more human interaction, because I agree with what you pointed out in Poole’s comments — humans need to be in community.

  3. Kim Sanford says:

    I’ve been struggling to come up with examples of AI doing truly helpful work. In other words, how could it really save me significant amounts of time or increase my efficiency? Your example of the CEO writing a short speech is a perfect example. Not that I write a lot of speeches, but the example helps me open up to some new possibilities.

    You say you haven’t had any personal experience with ChatGPT. I’m curious, is that an intentional decision? Do you feel uncomfortable in an ethical sense? Or have you just not had the occasion to play around with it?

    • Jennifer Vernam says:

      I can second the experience of that CEO: on multiple occasions I have tasked ChatGPT with things like “write a vision statement about such and such” or “re-write this paragraph in less than 50 words,” etc. Doing this gives me new ways of saying the same thing, and with a slight bit of editing, it is ready to share with a group. It’s like I have a communications person at my beck and call. Granted, it’s not a very skilled communications person, and I would never not review its product, but it’s cheap.

    • Travis Vaughn says:

      Kim, that’s a great question. No, it’s not an ethical thing at all. I’m really not sure why I haven’t yet tinkered with it. I want to. I’m sure I will at some point soon.

  4. Jennifer Vernam says:

    Travis, one big takeaway for me in your post is your observation that “over-dependence on AI may inhibit praxis.” It motivates me to not completely abdicate routine tasks… I need to remain sharp on even the basics.

  5. Tim Clark says:

    Travis,

    Your post reminded me that yes, people in the “real world” are already using AI extensively.

    I remember having this conversation when I was the Dean of Students at a Bible College… but it wasn’t about AI, it was about Wikipedia and other suspect under-reviewed internet sources.

    Some students were cribbing their papers from these sources, but we decided against disallowing their use. The “real world” was way ahead of us, and we needed to teach young, ministry-bound students how to critically use, and not abuse, this tool.

    Now I will regularly google a question, but I have learned to use the answer only in relation to what it’s being used for. In other words, if I need to know “who that actor is and what else he’s been in,” a single search in just about any source is sufficient. If I need to know “what does the Bible say about transhumanism,” I’m going to dive a lot deeper and ensure that my sources are robust and peer-reviewed.

  6. John Fehlen says:

    Like Tim, in his reply to your post, I am wrestling with the utilization of AI in the “real world.” Unbeknownst to me, our Communications Director at church has been using AI for some time to write blurbs, summaries, video text, etc.

    You mentioned David Boud, who stated: “There is often more suspicion of (digital aids) in education than in work…” I agree with his conclusion, but I’ll ask a follow-up: how should we feel about it in churches, which are a blend of work and education (faith, spiritual development)? Is it OK for the communications department to use it for blurbs, summaries, and video text, but not OK for me to use it in sermon prep? Asking for a friend.

    For me, this feels similar to when Sermons.com first came on the scene. I revolted at the thought of downloading someone else’s sermon (and I still feel that way). Now I can have a BOT write one for me in a few seconds. Is there a difference? I think it’s the same: neither came from MY heart and hand. I want it to come from me, from what God is doing IN me. However, both the downloaded sermon and the AI-generated sermon could serve as inspiration, research, motivation, etc. From that perspective, they are incredible resources.

    What do you think?
