{"id":33252,"date":"2023-10-08T15:32:29","date_gmt":"2023-10-08T22:32:29","guid":{"rendered":"https:\/\/blogs.georgefox.edu\/dlgp\/?p=33252"},"modified":"2023-10-08T15:32:29","modified_gmt":"2023-10-08T22:32:29","slug":"artificial-intelligence-as-partner-not-foe","status":"publish","type":"post","link":"https:\/\/blogs.georgefox.edu\/dlgp\/artificial-intelligence-as-partner-not-foe\/","title":{"rendered":"Artificial Intelligence as Partner, not Foe"},"content":{"rendered":"<p>Much has been made of the ubiquitous rise of artificial intelligence (AI) over the past year. In fact, it\u2019s difficult to believe that it\u2019s been less than a year since \u201cOpenAI released an early demo of ChatGPT on November 30, 2022, and the chatbot quickly went viral on social media as users shared examples of what it could do.\u201c<a href=\"#_ftn1\" name=\"_ftnref1\">[1]<\/a> From that time forward, there has been no shortage of speculation about how this tool will impact daily life and the future of work and education, as we know it. While there are many facets of AI to explore, this blog focuses on the dangers, limits, and possibilities for my doctoral studies.<\/p>\n<h1>Dangers of AI<\/h1>\n<p>Type \u201cdangers of AI\u201d into Google and you\u2019ll receive no less than 981,000,000 results.\u00a0 ChatGPT itself lists the dangers of AI as: 1) bias and discrimination 2) Lack of transparency and explainability 3) security risks 4) job displacement 5) privacy concerns 6) autonomous weapons 7) moral and ethical decision-making 8) dependence on AI and loss of human skill 9) super intelligent AI and 10) ethical responsibility and accountability. <a href=\"#_ftn2\" name=\"_ftnref2\">[2]<\/a> Of these dangers, I believe most impactful to my doctoral studies are the potential for bias and discrimination and ethical responsibility and accountability.<\/p>\n<p>Artificial intelligence is only as accurate as the information fed to it. 
And as a student of the human mind, I know that every human being is predisposed to bias.<a href=\"#_ftn3\" name=\"_ftnref3\">[3]<\/a> In our doctoral studies, we are taught to evaluate our sources to ensure we are utilizing diverse and vetted information. Artificial intelligence does no such thing. For many topics, it takes work and time to find opposing views and additional angles. Relying solely on ChatGPT for sources or information poses a danger to ensuring all sides of a topic are considered.<\/p>\n<p>In terms of ethical responsibility and accountability, artificial intelligence has no moral compass.\u00a0 And those who rely on it exclusively to do their academic work have none, either.\u00a0 Education is not simply about getting assignments done; it\u2019s about learning to think for oneself, to digest information and return it to the world with our own unique perspective and lens.\u00a0 Students who choose to use AI to do their work for them are forfeiting the very skills they are spending money, time, and effort to learn.<\/p>\n<h1>Limits<\/h1>\n<p>Looking at the dangers of AI, it\u2019s tempting to write it off as a tool that should be shunned in the world of education; however, that would, ironically, also be a danger. Rather than a black-and-white, all-or-nothing situation, I would argue that using AI for doctoral studies is a gray area in which the student understands the limits the technology poses and gets creative about how to utilize it within those limits.\u00a0 For example, in regard to the danger of bias and discrimination discussed above, using AI as a springboard for research may be a great place to begin, while utilizing one\u2019s own brain to seek out additional sources and points of view is a critical next step. 
Additionally, asking AI to write a complete paper is unethical, but asking it to provide an outline or offer several ideas for how to begin may be just the start a tired and tapped-out student needs to gain momentum and get an assignment done.<\/p>\n<h1>Possibilities<\/h1>\n<p>While it\u2019s true that there is no shortage of <em>potential<\/em> pitfalls with AI in education, the discussion above and voices such as Sal Khan<a href=\"#_ftn4\" name=\"_ftnref4\">[4]<\/a> and David Boud<a href=\"#_ftn5\" name=\"_ftnref5\">[5]<\/a> are proving that there are many possibilities for how it can not only enhance an educational program like a doctoral degree but perhaps take it to a new level entirely. In my own work, I see the opportunity to use it as a springboard for ideas, a tool to discover sources and topics related to my research that I may have missed on my own, and a productivity tool to save hours on menial tasks that it can accomplish in seconds. I choose to carefully view AI as a partner in my doctoral studies, rather than a foe.<\/p>\n<p>&nbsp;<\/p>\n<p><a href=\"#_ftnref1\" name=\"_ftn1\">[1]<\/a> Bernard Marr, \u201cA Short History of ChatGPT: How We Got to Where We Are Today,\u201d Forbes, accessed September 20, 2023, https:\/\/www.forbes.com\/sites\/bernardmarr\/2023\/05\/19\/a-short-history-of-chatgpt-how-we-got-to-where-we-are-today\/.<\/p>\n<p><a href=\"#_ftnref2\" name=\"_ftn2\">[2]<\/a> ChatGPT, response to \u201cdangers of AI,\u201d September 20, 2023, OpenAI, https:\/\/chat.openai.com\/chat.<\/p>\n<p><a href=\"#_ftnref3\" name=\"_ftn3\">[3]<\/a> \u201cUnconscious Bias in the Believer,\u201d accessed October 8, 2023, https:\/\/blogs.georgefox.edu\/dlgp\/unconscious-bias-in-the-believer\/.<\/p>\n<p><a href=\"#_ftnref4\" name=\"_ftn4\">[4]<\/a> Sal Khan, \u201cHow AI Could Save (Not Destroy) Education,\u201d TED, Vancouver BC, April 2023,\u00a0<a 
href=\"https:\/\/youtu.be\/hJP5GqnTrNo?si=UfBAYH343hxgCYaK\">https:\/\/youtu.be\/hJP5GqnTrNo?si=UfBAYH343hxgCYaK<\/a><\/p>\n<p><a href=\"#_ftnref5\" name=\"_ftn5\">[5]<\/a> David Boud, \u201cAssessmentAI,\u201d Foundation Director of the Centre for Research in Assessment and Digital Learning, Deakin University.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Much has been made of the ubiquitous rise of artificial intelligence (AI) over the past year. In fact, it\u2019s difficult to believe that it\u2019s been less than a year since \u201cOpenAI released an early demo of ChatGPT on November 30, 2022, and the chatbot quickly went viral on social media as users shared examples of [&hellip;]<\/p>\n","protected":false},"author":154,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_acf_changed":false,"_monsterinsights_skip_tracking":false,"_monsterinsights_sitenote_active":false,"_monsterinsights_sitenote_note":"","_monsterinsights_sitenote_category":0,"footnotes":""},"categories":[1],"tags":[2347,2548],"class_list":["post-33252","post","type-post","status-publish","format-standard","hentry","category-uncategorized","tag-dlgp01","tag-ai","cohort-lgp1"],"acf":[],"_links":{"self":[{"href":"https:\/\/blogs.georgefox.edu\/dlgp\/wp-json\/wp\/v2\/posts\/33252","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/blogs.georgefox.edu\/dlgp\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/blogs.georgefox.edu\/dlgp\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/blogs.georgefox.edu\/dlgp\/wp-json\/wp\/v2\/users\/154"}],"replies":[{"embeddable":true,"href":"https:\/\/blogs.georgefox.edu\/dlgp\/wp-json\/wp\/v2\/comments?post=33252"}],"version-history":[{"count":1,"href":"https:\/\/blogs.georgefox.edu\/dlgp\/wp-json\/wp\/v2\/posts\/33252\/revisions"}],"predecessor-version":[{"id":33253,"href":"https:\/\/blogs.georgefox.edu\/dlgp\/wp-json\/wp\/v2\/posts\/332
52\/revisions\/33253"}],"wp:attachment":[{"href":"https:\/\/blogs.georgefox.edu\/dlgp\/wp-json\/wp\/v2\/media?parent=33252"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/blogs.georgefox.edu\/dlgp\/wp-json\/wp\/v2\/categories?post=33252"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/blogs.georgefox.edu\/dlgp\/wp-json\/wp\/v2\/tags?post=33252"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}