Is the rise of AI technology a threat to the integrity of post-secondary education?

By Dr. Hannah Kirby Wood, Lecturer in Medieval History

With the recent emergence of ChatGPT in the AI landscape, there has been collective handwringing within higher education about how the chatbot carves new paths for academic dishonesty. These concerns are not unjustified: where copy-paste plagiarism usually involves some manipulation of source material to fit a prompt, ChatGPT can produce an articulate response tailored to the exact parameters of a topic in only a few keystrokes. It would take virtually no effort for a plagiarist to plug an essay topic into the chatbot and pass off the material it generates as their own, producing a “paper” that appears to answer its question adequately. The fact that cheating convincingly is often more effort than it’s worth has always been an innate deterrent against academic dishonesty; with that roadblock removed, there is a real fear among some educators that more students will be enticed to take the easy route.

The technology of tools like ChatGPT may be novel; the impulse to cheat, however, is not. There will always be those who find ways to game the system, and while their numbers may swell slightly with access to ChatGPT, it is unlikely that most people inclined to do the work themselves will decide to cheat simply because there is a new way to do so. Besides, it’s not a given that this smarter technology will outsmart professors: AI-generated writing has certain patterns and tells that aren’t difficult to spot with a bit of exposure. The most troubling thing about AI is therefore not its facilitation of academic dishonesty; it’s the devaluation of critical inquiry that AI helps to cultivate.


University educators often struggle to reframe higher education not as a results-based enterprise but as an exercise in developing fundamental skills: critical thinking, independent inquiry, and effective communication. As a history lecturer, I try to instill in students the understanding that there is no single correct version of history we can arrive at, no one interpretation that guarantees an A. Just as the study of mathematics is defined not by the result of a problem but by the way you solve it, the study of history – and indeed of most other disciplines in the humanities and social sciences – is defined by the process of identifying biases, corroborating evidence, and piecing together incomplete information, not by the resulting narrative. When ChatGPT spits out a response about a historical topic, it erases the essential groundwork behind its conclusions and reinforces the impression that there is a canned, static answer that can easily be unlocked.

In my own experiments with ChatGPT, I’ve noticed that its answers are unnuanced at best and revisionist at worst: a prompt asking ChatGPT to explain, from the king’s own perspective, Edward I’s 1290 expulsion of the Jews from England has Edward acknowledging the harm that was done and committing to “understanding and combating all forms of prejudice and discrimination in the present day” – a gross sanitization of the English king’s antisemitism. Determining the reliability of ChatGPT’s responses demands careful and critical engagement, but its selling point of convenient omniscience discourages students from developing exactly those capabilities. AI has the potential to chip away at the integrity of higher education by reducing disciplines to their most basic parts, diminishing the perceived importance of the skills a university education is supposed to impart.
