*heart emoji*, *kiss emoji*. I have sent both of these texts unintentionally. The more harmless one, *heart emoji*, went to my mom, and while it was a mistake, it was probably for the best. The *kiss emoji* went to an elderly Chinese man who is the coordinating pastor at a church I guest speak at. I was horrified. Both were sent as Google-generated auto-responses to texts I received. The two responses highlight some of the interesting uses and consequences of AI in its most popular form: large language models.
First, it’s useful! Honestly, in the right context, both emojis made sense. The heart emoji was in response to my mom saying thank you, and the kiss emoji was in response to a “have a good night” text. With better inputs than a single text, AI does what it’s supposed to do (predict which words string together best) pretty well. And given enough time and training, large language models do even better.
So what’s with the trepidation? Maybe we’ve watched one too many movies like *I, Robot*, where artificial intelligence takes over the world and stamps out human beings. Or maybe we’re simply scared of the unknown. We have to remember that fears similar to the ones we’re seeing about AI were stoked about the use of Wikipedia in education. “How can we be sure that the content is true?” “You can’t use that for your homework!” “How can we be sure the student is doing their own work?” “You can’t learn like that!” “How will you survive in the workplace?” What’s ironic, especially about the last question, is that these fears hold us back from evolving and from teaching students to use the resources and tools available to them. The responsible thing to do is to lean into it. As Lydia Liu said, “I strongly believe in the need for stakeholders to understand the cyclical effects of AI and education. By understanding how different activities accrue, we have the ability to support virtuous cycles. Otherwise, we will likely allow vicious cycles to perpetuate.”
I want to use a personal hot take to show how we should and should not approach embracing AI in education. Learning Greek and Hebrew isn’t essential for a pastor. To clarify, I mean learning Greek or Hebrew in the traditional sense of learning a language: building a large bank of memorized vocabulary and word conjugations and forms. We have incredible resources available to us in the form of paid software like Logos and free websites like Biblehub that will parse sentences and words for us. Shouldn’t we be teaching students to use those tools effectively by allowing their use on homework and exams? How much time and effort could be saved, with similar if not better outcomes, for students who are trying to integrate the original languages into their sermon preparation or Bible studies? Similarly, rather than dying on a hill of tradition that we call “learning,” shouldn’t we help students and educators understand the beneficial uses and pitfalls of AI so we can both learn and teach better? I’m not sure exactly what that will look like, but it seems important to start getting familiar with AI and educating myself on what it could mean for learning.
U.S. Department of Education, Office of Educational Technology, *Artificial Intelligence and the Future of Teaching and Learning: Insights and Recommendations*, Washington, DC, 2023, https://www2.ed.gov/documents/ai-report/ai-report.pdf.