Red pill or blue pill?
The phrase “Artificial Intelligence” makes me think of the Terminator films, in which John and Sarah Connor try to stop Judgment Day, the day the computers become self-aware…
…or The Matrix, where the singularity quickly leads to a machine uprising, with AI dominating humanity and tricking humans into believing the simulated world they experience is real, when in fact they are being cultivated as energy sources for the machines.
…or the Spielberg movie called AI that’s a sort of fairy-tale-on-crack that I can’t write about without giving the plot away except to say that it all ends very badly for humanity.
Maybe I watch too many movies (ok, certainly I do), because when AI seemed to exponentially gain traction during the pandemic, I was completely against it. Not only were the post-apocalyptic film scenes stuck in my head, but I also imagined the current iteration of AI would send us quickly into the type of ridiculously dumbed-down culture found in the movie Idiocracy.
Like the character Guy-Am-I in Dr. Seuss’s Green Eggs and Ham, I did not like AI, and I would be the last person to try it. Then late last semester something unexpected happened. I tried it. And as it turns out, I liked it.
What convinced me to try it was a conversation with a friend who is a university professor. I was complaining about a response to one of my blog posts that I strongly suspected was AI generated. My position then was that AI shouldn’t be allowed in a doctoral program.
His response surprised me. He suggested the software itself wasn’t the problem, but that students needed to be assessed for what they were learning while using the tools. That just like calculators, or spell check, or Grammarly, technology could be helpful to effective student outcomes.
That conversation, and my subsequent use of ChatGPT, was a threshold moment.
I now strongly suspect that my friend had already seen the Boud and Webb videos. Instead of trying to summarize those videos (and article), I’ll share a few of my primary takeaways.
First, AI as a learning tool still has significant problems. David Boud warns that AI often includes “false answers and imaginary references”[1] and that when there are problems with biases in the data set the AI is trained on, it will repeat and amplify those biases. Michael Webb claims AI is “good at a lot of things, just not facts and maths.”[2]
Second, AI is quickly getting better. Humanity may be entering a liminal space when it comes to AI, but it seems that very soon the gains in technology will make it a powerful and much more accurate tool. Sal Khan’s fascinating TED Talk[3] about how his company has been training generative AI to serve as a tutor is one example.
Third, the use of AI in education will become as ubiquitous as it already is in other fields. The question isn’t whether students will use AI, it’s how they can use it as a positive force for learning. Boud calls for effective student assessment to ensure learning outcomes are met regardless of the use of AI, and both videos suggested that disallowing AI or trying to beat it through AI detectors is a losing strategy. As Webb said, “why get into a war that we can’t possibly win”.[4]
Fourth, there is disagreement about how AI should be referenced. Linda McKnight claims “the problems start when AI writers are not attributed for their input.”[5] However, Boud suggests that like spell-check or Grammarly, soon AI will be so prevalent that requiring references “will not be realistic or meaningful”, and that it will be impossible to “put the Genie back in the bottle.”[6] Still, until there is broad agreement on how (or whether) to attribute AI, I believe that referencing it in our work, as we would a Wikipedia page, is a safe way forward to avoid plagiarism.
Finally, there is a big difference between knowledge and wisdom. Using AI for cognitive offloading in knowledge acquisition might be useful, but a person still must understand how to assess, use, and apply that knowledge with creativity, analysis, and wisdom.
So, AI isn’t the scary problem I assumed it was, and in the right context it can be very helpful.
I still won’t use AI very much as part of my studies. I have this sick desire to earn my doctorate as analogue as possible: mostly reading (paper books), note-taking (admittedly in my Apple Notes app), and writing without AI prompts. There is something magical to me about scientists who still do complex math in their heads or on a chalkboard even when calculators and computers are readily available. I want to keep my non-assisted skills sharp.
But I admit I found AI to be a really useful supercharged search engine, and I’m not beyond using it for that purpose in this program.
By the way, just for fun I did have ChatGPT “write a blog post referencing David Boud and Michael Webb answering the question: what are the dangers, limits, and possibilities of AI for my doctoral studies?” It wasn’t half bad. But it didn’t include any movie references, so I stuck with my own version.
[1] I don’t know how to reference the David Boud Assessment AI Video that was shared on Google Drive.
[2] https://www.youtube.com/watch?v=YUNcrSrm47E
[3] https://www.youtube.com/watch?v=hJP5GqnTrNo
[4] https://www.youtube.com/watch?v=YUNcrSrm47E
[5] Linda McKnight, “Eight Ways to Engage with AI Writers in Higher Education,” Times Higher Education: Campus Resources for Academics and University Staff, October 14, 2022.
[6] David Boud Assessment AI video
14 responses to “Red pill or blue pill?”
Hi Tim,
I only started looking at ChatGPT when this question was posed by the syllabus, so I am just now venturing into the area.
I tried it some, and found out that I (as the user) have to learn better ways of prompting (asking the question). Sigh, Zotero, Obsidian and now AI – it is making me a little crazy (old dogs and new tricks). But I imagine that I will be using it more and more as I need “inspiration, short cuts, and some mental relief.”
Shalom….
Russ,
That was my learning curve too: figuring out how to write better prompts. It’s amazingly simple once you get that skill down.
My wife still thinks it’s “sorcery”. 🙂 But she also said she didn’t need a smartphone when I got my first iPhone… I’m sure she’ll come around on this one, too.
Tim,
Like you, I am reading paper books, for the most part, and taking notes through Evernote or a good old Google app (Sheets or Docs). AI sounds appealing, but it appeals to a side of me that wants to take shortcuts or find “an easy way” instead of learning from the use of it. I still need to develop an appreciation for what I can learn through using AI, I guess. I do use Alexa and Siri for various things around the house. 🙂 I read an article on the Forbes website, “Top Nine Ethical Issues in Artificial Intelligence” (https://www.forbes.com/sites/forbestechcouncil/2022/10/11/top-nine-ethical-issues-in-artificial-intelligence/?sh=59c34c645bc8), which stated:
“Let’s not forget that AI systems are created by humans, who can sometimes be very judgmental and biased. Yes, AI, if used correctly, can become a catalyst for positive change, but it can also fuel discrimination. AI has the capability of speed and capacity processing that far exceeds the capabilities of humans; however, due to human influence, it cannot always be trusted to be neutral and fair.” Hmmm… Is it worth the investigation? Still considering.
Yeah, Cathy, I think this is a HUGE question: whether what we are getting is accurate. I’m looking at it like Wikipedia… since anyone can edit that, you may be left with information that we shouldn’t use in our research, so it has to be vetted carefully.
Even more concerning to me (and I didn’t think of it until I started reading others’ blogs) is the human/soul element. What do we give up when our primary thinking partner is a computer instead of an image bearer of God? The jury is still out on that one, but I’m going to proceed carefully.
I like your post, Tim, for a lot of reasons, but I feel I need to point out your failure to mention any Marvel movie references.
Seriously, though, I especially liked this: “AI isn’t the scary problem I assumed it was, and in the right context it can be very helpful.” In the era that we are in I find myself (and others) quickly moving into apocalyptic assumptions when considering a new conundrum… and, it seems like new conundrums keep popping up. Am I too naïve in thinking that this principle needs to be applied to multiple scenarios?
Jennifer,
I’m kind of superhero-movied out at this point. Probably explains the lack of reference. 🙂
Agreed that we shouldn’t live in fear of progress.
But I’ve thought of something new since I wrote my post… It seems that on one hand we have non-adoption because of our fear, but when we decide our fears (of AI singularity, for instance) are unfounded then we run the opposite direction and uncritically embrace what we used to be afraid of.
But what if, even if our original fear was unfounded, there was another danger lurking where we didn’t see it? Just because AI probably won’t take over humanity doesn’t mean that AI might not be a real problem for how we see ourselves as humans.
Hi Tim, I am with you: “I want to keep my non-assisted skills sharp.” Part of my reason for doing a doctorate in my 60s was to keep my brain functioning. Learning new things is part of that, but I doubt I will use ChatGPT much at all, and definitely not to write anything. I need a user’s manual. Like Russell, I don’t know what questions to ask or tasks it can do to get the benefit of cognitive offloading. I saw Esther made a list… I’m heading to her post next! Any tips for using it as a search engine?
Jenny,
You could go to ChatGPT and ask it a question (written out as a sentence) like “Can you give me 10 ideas for great restaurants in Oxford that aren’t too expensive?” In about 5 seconds it will have a list with descriptions.
It’s worth playing around with even if just to get a basic grasp of what it can do.
There are two lines in your post that I will be chewing on for some time:
1. “AI would send us quickly into the type of ridiculously dumbed-down culture…”
2. “There is a big difference between knowledge and wisdom…”
These comments are connected. Even though we are in the “Information Age” we are increasingly dumb in terms of wisdom. Knowledge is increasing at an uncontrollably rapid rate, and yet collectively we are diminishing in wisdom.
I see this in our churches. Far too many people are biblically illiterate. Many more (including myself, admittedly) have a mass disconnect from what we KNOW and what we DO.
John Maxwell is often credited with saying: “The modern, American Christian is educated far above his/her level of obedience.”
I think it’s going to take some intentional, concerted effort to keep pressing towards wisdom and not just the acquisition of knowledge… the upside is that if we live that way, we will have something unique to offer a world that is inundated with knowledge alone.
But it’ll take discipline to not just go along with the crowd on this one!
I swear, I haven’t used AI to respond to your posts! That said, I have been tempted at times – not your posts specifically, but all of the posts in general. However, the few times I tried, I found the response to be lackluster and totally not my voice. A tech guy who works at my church suggested I use ChatGPT to write a sermon (not sure what this says about my sermons). I gave that a try too. Again, it was terrible. In my experience (and maybe I just don’t know how to give good prompts), ChatGPT has been less than helpful in writing anything for school or for church. I think I still have a ways to go before I know how to use these AI tools in a way that is helpful to me! I do want to be clear that these attempts were experiments, and I have a moral code that (probably) would not let me actually have ChatGPT write anything important for me.
I appreciate what your professor friend suggested, that students need to be assessed for what they are learning while using these tools. This is where I think educators are going to need to be creative. I would like to stay on top of this conversation among educators as it is important for the future of our students, and really for all of us.
I’ve experimented a bit with AI and found that it’s a pretty good search engine (when I need to find out or focus on what’s out there or what is being said about a topic in general) and can be a research tool, but it needs to be used carefully (I tried it for background research for my NPO, but when I then tried to find concrete references, they mostly came up very short or nonexistent).
So I wouldn’t write a sermon or dissertation with it, but I think it can be useful within a framework.
“So, AI isn’t the scary problem I assumed it was, and in the right context it can be very helpful.” I loved how you worded this. I went into this topic the same way, mostly assuming I’d be 100% against it, and in essence I am. I would surely bring AI into the picture to save me time and correct grammar, if it could be done ethically, of course. However, I still think it takes the “soul” out of all of it! It cannot anticipate our thoughts and feelings because it doesn’t know our heart, or our history, or how our day has gone. I think Humans Rule!!