Doctor of Leadership in Global Perspectives: Crafting Ministry in an Interconnected World

Sounds like 2004

Written by Daron George on September 10, 2023

Whenever I think about artificial intelligence (AI), I immediately go to the 2004 movie “I, Robot,” starring Will Smith, which explores a future where robots are ubiquitous and highly integrated into human society. As the movie progresses, however, it reveals the dangers of placing too much trust in AI systems. One of my favorite scenes is when Detective Spooner (Will Smith) speaks to Dr. Lanning; as the conversation goes on, the camera pans to reveal that Detective Spooner is actually speaking to a hologram of Dr. Lanning, who has just been murdered. While Detective Spooner questions the hologram, trying to get answers about what happened, he hits a road bump in his line of questioning, and the hologram says, “I’m sorry. My responses are limited. You must ask the right questions.”[1] That phrase, “You must ask the right questions,” is why I think of this movie whenever I hear conversations or read anything on artificial intelligence.

The Dilemma

“Artificial intelligence has arrived. In the online world it is already a part of everyday life, sitting invisibly behind a wide range of search engines and online commerce sites. It offers huge potential to enable more efficient and effective business and government but the use of artificial intelligence brings with it important questions about governance, accountability and ethics.”[2]

One of the most pressing ethical concerns surrounding AI is the issue of bias and fairness. Machine learning algorithms are only as unbiased as the data they are trained on, which often includes systemic prejudices that can perpetuate inequality. These biases can manifest in deeply consequential ways when AI is used in sectors like criminal justice, where an algorithm’s recommendation can significantly affect a person’s life. The issue is compounded by the lack of transparency in many AI systems, which makes identifying and correcting biases challenging. The urgency for equitable representation in the data and the design of AI technologies cannot be overstated, especially given their broad societal reach. The documentary “Coded Bias”[3] highlights this through the experiences of Joy Buolamwini, a young researcher who discovered algorithmic bias in a facial recognition project.

How does this play into AI and education? As a doctoral student, the limitations and biases inherent in AI can have severe ramifications for the quality and integrity of my research. As David Boud said in his video, “Incorrect information and biases exist in the data sources it draws on.”[4] This suggests that the AI tools used for research could carry bias in their algorithms, leading to skewed results and compromising the validity and integrity of my research. This is especially concerning for research that addresses social or human issues, where impartiality is critical. Any conclusions drawn from such compromised data might be challenged or discredited, affecting both my research and my academic reputation.

The Benefits

While there are numerous advantages and opportunities to explore, two specific possibilities particularly resonate with me as I approach my final year. I was especially inspired by Sal Khan’s TED Talk, “How AI Could Save (Not Destroy) Education.”[5] In the talk, Khan presents using AI for debate training; for instance, he describes a scenario where a student debates canceling student debt. Though that is undoubtedly an engaging subject, the talk sparked an idea for my research: I plan to debate with theologians who agree and disagree with my central thesis. This AI-facilitated dialogue will refine my arguments and enhance the quality of my overall project. The second possibility is to use AI for research, as Lucinda McKnight suggests in her article “Eight Ways to Engage with AI Writers in Higher Education.” McKnight encourages us to use AI for research: “They can research a topic exhaustively in seconds and compile text for review, along with references for students to follow up. This material can then inform original and carefully referenced student writing.”[6]

Like the AI-facilitated debates inspired by Sal Khan’s talk, this second application holds great promise for enhancing the depth and quality of my work as I navigate my final year.

Conclusion

As I excitedly venture into the final stretch of my doctoral journey, AI emerges as a double-edged sword capable of revolutionizing research and education but riddled with ethical complexities (just like the tools that came before it). The challenge lies in responsibly wielding this powerful tool and acknowledging its capabilities and limitations. Like Detective Spooner in “I, Robot,” I am reminded that when engaging with AI, “you must ask the right questions.” As promising as AI’s applications are for enriching my studies and refining my research, the technology is not without its caveats. Whether it is the risk of perpetuating social biases or the broader ethical concerns surrounding data and privacy, caution is warranted. However, the promise and potential of AI cannot be ignored.

[1] I, Robot, directed by Alex Proyas (2004), https://www.youtube.com/watch?v=ZKxr0wyIic4.

[2] D. Elliott and E. Soifer, “AI Technologies, Privacy, and Security,” Frontiers in Artificial Intelligence 5 (2022): 826737, https://doi.org/10.3389/frai.2022.826737.

[3] Coded Bias, directed by Shalini Kantayya (2020), https://www.netflix.com/title/81328723.

[4] David Boud, Assessment AI 27032023.mp4

[5] Sal Khan, “How AI Could Save (Not Destroy) Education | Sal Khan | TED,” accessed September 9, 2023, https://www.youtube.com/watch?v=hJP5GqnTrNo.

[6] Lucinda McKnight, “Eight Ways to Engage with AI Writers in Higher Education,” Times Higher Education, October 14, 2022, https://www.timeshighereducation.com/campus/chatgpt-and-rise-ai-writers-how-should-higher-education-respond.

About the Author

Daron George

6 responses to “Sounds like 2004”

  1. Jenny Steinbrenner Hale says:

    Daron, Thank you for your thoughtful post. I so appreciate how you lay out the challenges, weaknesses, and benefits of using AI in our research. I am especially interested in the dilemma you pointed out in this quote: “One of the most pressing ethical concerns surrounding AI is the issue of bias and fairness. Machine learning algorithms are only as unbiased as the data they are trained on, which often includes systemic prejudices that can perpetuate inequality.” I used a similar point in my blog post as well, and a question from David got me wondering if AI has the potential to not only repeat the human ideas it’s been trained on, but to create new ideas not yet conceived by humans. What do you think?

    Thanks again for your post. Looking forward to being with everyone in Oxford soon!

  2. Daron George says:

    Thank you for the kind words on my post! I’m delighted that it resonated with you, especially the part about the ethical challenges surrounding AI bias and fairness. It’s a critical area that we need to navigate carefully as we continue to adopt AI in various sectors.

    David’s question about AI’s ability to generate new ideas that humans haven’t yet conceived is interesting. As of my last look into the technology, AI has shown prowess in synthesizing existing knowledge, but it’s generally not capable of ‘creative thinking’ in the way we humans are. AI can remix, reframe, and reapply data it has been trained on, but originating fundamentally new ideas is still, as far as I understand, beyond its grasp.

    However, technology is rapidly evolving, and who knows what future iterations of AI might be capable of? If it comes to the point of truly conceiving ideas I think I may be terrified.

  3. Michael O'Neill says:

    Awesome post, Daron! I remember thinking “I, Robot” may have been a small glimpse into the future when it came out. Now Tesla’s robot looks pretty similar. I like a lot of AI and technology in general, but it makes me a tad uncomfortable when they start being humanoid in style.

    I like the training aspect of it that you brought up. I would like to use it to speed up some things in my life. Thank you!

    See you in Oxford!

  4. Hi Daron,
    The response you quoted from the movie I, Robot, “I’m sorry. My responses are limited. You must ask the right questions,” reminded me of a good number of similar responses that I have received from ChatGPT. Here is a common one that I often get: “Hello, I’m an AI language model, so I don’t have feelings, but I’m here to help you with any questions or topics you’d like to discuss.”
    There is a need for caution as we interact with AI indeed.

  5. Chad McSwain says:

    Hi Daron
    Great reference to “I, Robot”! I love that all our time researching AI through sci-fi movies is finally paying off!
    Your observation that “you have to ask the right questions” rings true to me as “home base” on this – really any learning endeavor. Given the limitations of models that AI is trained on, and the benefits in education – what do you think is the right question at this point with AI?
    Oh, and you make me want to watch “I, Robot” again, lol

  6. Shonell Dillon says:

    Great post. I am a little afraid of the possibilities of AI. Though helpful to the student, as you stated, the immature first-time criminal may not see favor in such a tool. I would be interested to know what you will be addressing in your project.
