DLGP

Doctor of Leadership in Global Perspectives: Crafting Ministry in an Interconnected World

Generative AI and Higher Education

Written by Kim Sanford on September 3, 2023

“Humans have been collaborating with technology for writing since sticks were used for drawing in sand or on cave walls.” [1] 

The expanding use of technology in higher education is inevitable. The question is how to use it well. The most obvious pitfalls are the potential for plagiarism and, closely related, the risk that students will undermine their own learning. Plagiarism is unacceptable on any level, but students who are undermining their own learning may simply need help reframing. That is to say, they need to understand how tools like ChatGPT can be used as an aid to learning rather than as a way to avoid doing their work.


Another danger, which David Boud mentions in the assigned video, is a possible knee-jerk reaction on the part of academics that would bring about a regression to exam-based evaluation. In other words, if professors combat the possibility of plagiarism by reverting to in-person, handwritten exams, Boud laments this as a step in the wrong direction. Interestingly, he points out that the goal of education is to equip graduates for real-world tasks, which now certainly include working appropriately with AI. We would do well to integrate these technological advances into higher education so that students learn to use them well.


Indeed, as nearly all experts seem to agree, artificial intelligence, and ChatGPT specifically, is an inevitability. It is already a reality in higher education and many other fields. If there is no avoiding it, the best approach is probably to work with it, adjust our processes and procedures to account for it, and embrace its possibilities.


The most obvious limitation is ChatGPT’s tendency to hallucinate. Being probabilistic, “generative AI doesn’t live in a context of ‘right and wrong’ but rather ‘more and less likely.’” [2] Factual inaccuracy is a serious limitation for students using AI as a research tool or a writing aid. Perhaps less obviously, hallucinations affect users in many other fields. For example, I was talking with a friend who is a computer programmer, and he shared his experience with ChatGPT. He has seen it respond to a prompt and produce code that appears perfectly reasonable. It can even explain the code in ways that sound cogent. But when my friend tried to run the code, it was complete nonsense. In computer programming, as in disciplines like the humanities or social sciences, it seems unwise to rely on generative AI unless one is highly competent to spot the errors.


All that said, there are numerous possibilities for appropriate and effective use of generative AI in a higher education context. As David Boud highlights, this technology could significantly help those with learning difficulties. I’m not sure what he had in mind when he made this suggestion, but I can imagine using ChatGPT to generate ideas for an assignment or even to envision how an essay might be organized. Personally, I anticipate using ChatGPT to help me create content in my second language. Precisely because of its predictive nature, it could help me craft my ideas into French phrases and formulations that sound perfectly native-like.


Ultimately the greatest possibility of generative AI may be to increase a student’s (or professor’s or professional’s) overall efficiency. If we can begin to outsource some of the simpler, repetitive tasks or those tasks that aren’t necessarily accuracy-based, we can free up time for higher-level thinking. The example that comes to mind is a task like writing a cover letter. ChatGPT could likely write a clear, simple cover letter in a fraction of the time it would take me to labor over choosing every perfect word. You can likely think of several other examples in academics and in everyday life, so I’m curious to hear from you. What tasks could you imagine offloading to ChatGPT, or maybe you have already tried?


I’ll conclude by sharing my very first experience with ChatGPT, because it illustrates several principles we’ve discussed. When GPT-4 was released, my husband was telling me about this brand-new thing, and I was trying to wrap my head around it. At the time I was knee-deep in the historical context section of my topic expertise essay. I learn best by concrete examples, so my husband typed in the topic I was working on, namely the history of evangelicalism in France. Two interesting things happened. First, the topic sentence that I had already written in my essay was nearly identical to ChatGPT’s topic sentence. Second, the text that ChatGPT spit out actually highlighted a whole aspect of French evangelicalism that I hadn’t thought to cover. I was a bit embarrassed that I had skipped over this essential aspect of the history, but thanks to ChatGPT I did more research and added to my essay.


In the end, there are some valuable possibilities that come with generative AI. This potential does not negate the dangers, but if we are aware of its limits, this technology can become a powerful tool in our educational tool belt.

 

___________________

  1. Lucinda McKnight, “Eight Ways to Engage with AI Writers in Higher Education,” Times Higher Education, October 14, 2022, https://www.timeshighereducation.com/campus/eight-ways-engage-ai-writers-higher-education.

  2. Haomiao Huang, “How ChatGPT Turned Generative AI into an ‘Anything Tool,’” Ars Technica, August 23, 2023, https://arstechnica.com/ai/2023/08/how-chatgpt-turned-generative-ai-into-an-anything-tool/.

     

About the Author


Kim Sanford

8 responses to “Generative AI and Higher Education”

  1. Russell Chun says:

    Hi Kim,
    I guess we are all going to be experimenting with ChatGPT as we go along. I had some recent college graduates over to my home for dinner. I asked them about AI, and they were quite frank that they had used AI to write essays for them, especially for those classes that were outside their primary area of interest. In short, if the class didn’t matter to them, then they were quite willing to use AI and save their brain cells for the things that mattered.

    Hmmm… as an instructor, I am going to have to look at this more closely. I do like Boud’s approach to assessment:
    Assure: assure that learning outcomes have been met (summative assessment).
    Enable: enable students to use information to aid their learning now.
    Build: build students’ capacity to judge their own learning.

    As an instructor this forces me to reevaluate my own lesson objectives and assessment (not a bad thing).

    Shalom….

  2. Esther Edwards says:

    Hi, Kim,
    I remember you mentioning AI when our cohort was together, but I had no idea how to access it. However, these videos have sparked my curiosity to draw on AI in my research. I’m not sure I will rely on it for factual accuracy, but I can see how it could open greater avenues of what is available in my research. Your closing statement about being aware of AI’s limits gives wise advice as we all step out into the ocean of AI a bit more in our doctoral work.
    Thank you for your well-written post. I look forward to spending time together in Oxford!

  3. Tim Clark says:

    Kim, I really resonate with this thought:

    “Ultimately the greatest possibility of generative AI may be to increase a student’s (or professor’s or professional’s) overall efficiency. If we can begin to outsource some of the simpler, repetitive tasks or those tasks that aren’t necessarily accuracy-based, we can free up time for higher-level thinking.”

    This is where I’m landing on AI. I think for repetitive, low level tasks it makes sense to use a tool. I think for critical and creative work, there is something of soul that is so necessary. Not sure I won’t be in the minority on that opinion, but I’m content with letting the world pass me by but keeping my soul.

    • Kim Sanford says:

      Well “being in the minority on that opinion, but I’m content with letting the world pass me by but keeping my soul” seems like a pretty good description of our counter-cultural Kingdom values so I think that’s not a bad approach.
      But I am curious, as a pastor, what tasks could you envision outsourcing to AI?

      • Tim Clark says:

        Good question, Kim. Contrary to what my post may communicate, I’m not a Luddite, but because I’m often so eager to be an early adopter of technology, I’ve started to question the unintended consequences of doing so.

        I think right now I would encourage pastors to use AI as a supercharged search engine and discourage them from using it for sermon writing, not only asking AI to write a sermon outright but also using it for any type of background study.

        Again, I’m not saying that background study wouldn’t be a valid use in the future, but at the moment we can’t assume that what we are getting from AI, or the process of using it, will cultivate the spiritual fruit that we hope for or support our souls’ thriving long term.

  4. Jenny Dooley says:

    Hi Kim, I resonate with this statement: “…it seems unwise to rely on generative AI unless one is highly competent to spot the errors.” I do not feel competent to find the errors. I need instruction. This concerns me, especially for younger students. I wonder when my grandchildren will be exposed to generative misinformation and how they will learn to spot it. Have your children encountered AI writers such as ChatGPT? I wonder at what age we need to teach our children and grandchildren about yet another harmful misuse of advancing technology.

    • Kim Sanford says:

      Good question, Jenny. I’m a little embarrassed to admit that, despite all this discussion about AI, I hadn’t even thought to ask my children about their experiences with it. So I’ve just chatted with my 8th grader, and he says that one teacher specifically forbids it. Other than that, a couple of friends have talked about it, but he doesn’t think anybody is really using it. It does seem like teaching children and teens about AI is similar to how we approached internet use: not appropriate before a certain age, then okay with a lot of supervision until we determine they are mature enough to make good decisions.

  5. Hey Kim. Brilliant as usual! Your section on students undermining their learning was insightful. We tend not to think about that part of using AI. When we undermine our own learning, we actually begin to erode part of our giftedness. Since God has blessed us to be a blessing, AI can erode what we do so well, which is learning. We have less of an influence on others personally when we constantly rely on AI.
    This is why your thoughts about AI as an aid to learning hit it out of the park. Sorry, that’s an American term for “absolutely brilliant.” But you already knew that. 😊
    So how would you encourage others to use AI as an AID to learning rather than as a way to avoid their work?
