If nothing else, a criminology degree teaches you that moral transgression, fault and blame are complicated. The notion that individuals are solely responsible for their actions belies the relationships and politics through which those actions might have been learned, proscribed as transgressive, or even felt as desirable in the face of oppression.
Notwithstanding these considerations, many now understand the use of generative AI by students to “cheat” through assignments or “swindle” entire degrees as morally transgressive. And just to be clear: I agree. It is intellectually dishonest, bad for the environment, and robs educators of the most meaningful part of their job. As a criminology student and tutor, I know the displeasure of teaching Foucault’s concept of carcerality, only for some students to submit work using a much more generic reading of biopolitics instead — one I suspect could only have come from ChatGPT. But because I cannot be certain, I can only wonder whether my feedback will mean anything, or whether their focus will fall solely on the credit grade they’re about to receive.
So no, using GenAI is not a morally neutral act. And yet, from a criminological perspective, I feel uneasy about how we understand its transgressiveness. After all, moral transgression, fault and blame are complicated, and as good as it may feel to treat cheating or swindling as an act of individual moral failure, that framing also misses perspective.
*
We can begin by asking students why they use GenAI. Some say it helps them complete meaningless or tedious assignments. Others suggest it can supplement inadequate explanations from educators. We must avoid taking these comments personally. More than shifting blame, they speak to the rampant anti-intellectual, and anti-youth, culture of our time.
Consider the accounts of Queensland teachers faced not just with cheating students, but with routine abuse in their classrooms — conditions that have culminated in historic and necessary strikes. Yet the state government, far from rushing to meet teachers’ demands, is instead implementing an increasingly punitive approach to youth crime.
This pattern — of downstream investment in punishment rather than upstream investment in prevention, for example in the form of education — is not new, or unique to the state. Over time, and through many iterations, this pattern has produced an education system rendered compulsory through coercion, rather than desirable in itself. School is no longer an environment where the active, positive presence of students is encouraged, or where the knowledge gained and labour performed within are valued.
What does this allocation of resources say about what we value? What does it teach students? I do not dispute their accounts of teachers failing to provide meaningful, interesting homework, or sufficient time and attention for each of them to understand everything, nor do I condone their use of AI or the abuse of teachers. However, these behaviours must be understood as products of a culture in which academic labour — both students’ learning and teachers’ teaching — and by extension the very presence of students and teachers in school, is neither valued nor valuable: necessary, but reduced to nothing more, taken for granted.
Billingham and Irwin-Rogers argue that, “for a small minority of young people, violence can become a part of their struggle to matter”. Maybe, for a larger minority — perhaps even a majority — of students, the use of GenAI comes from a similar place: as a response to a policy environment where their education, and education itself, does not matter, necessitating action that might otherwise appear drastic.
Thus, we need to be wary of a punitive, and indeed carceral, approach to students’ AI use. A carceral approach is one characterised by mistrust, policing, surveillance and discipline — one that naturalises punitive responses to harm. It pervades our culture and can be embodied by anyone, for example in the desire to “tattle” on cheating classmates. But this is an approach we come to accept, and learn, from authorities. Consider that even before ChatGPT launched in November 2022, universities were using AI to police students in the name of enforcing academic integrity.
Moral transgression, fault and blame are complicated. Criminologists have long held that transgressive acts are often to some extent learned: when abusers “punish” their partners, it is because they — we — live in a carceral culture that embraces punishment, and teaches us it is the natural response to wrongdoing. Universities’ use of AI to police and punish is similarly carceral in this way.
Now that students have followed suit, we are seeing an “arms race” between writing and detection software, in which universities are increasingly investing. At the same time, the university sector is being hollowed out, its workforce decimated, entire departments razed, and again we see the pattern: downstream investment in punishment at the expense of upstream investment in education itself. If education does not matter — if education is not funded so that it matters — how else are students to respond? In this crisis, our cultures of anti-intellectualism, anti-youth sentiment and pro-carcerality converge.
It is difficult to ignore how much this desire to matter is bound up in the anti-intellectual tendency to view academic achievement as just (“just”) answering questions correctly. For some, to matter is to achieve — to get good grades — and under pressure, students may well see GenAI as a solution. This is among the many factors that make its use so morally fraught: it perpetuates the hollowing out of higher education and its intellectual significance.
However, and again, to focus only on AI use and its implications belies the other circumstances that have already hollowed out our universities. Another example is arguably the very existence of “good grades”. What, meaningfully, makes a distinction “high”? How do we know the magnitude of achievement is greater for the student awarded a high distinction than for the one awarded a regular distinction — particularly when we know little about either student’s life circumstances?
Some students come into university with the momentum afforded by good life circumstances. Others do not. Eyler writes that “grades are wielded as weapons and rewards.” They reward momentum, work against mobility, and therefore incentivise cheating. In this context, teaching and assessment have been, and should be, subject to scrutiny. Perhaps students should assess each other. Perhaps students and educators should assess together. Perhaps assessment should be based not on pre-set questions easily pasted into ChatGPT, but on live and unique discussions as they play out in class. Raewyn Connell writes: “[universities’] research, their teaching and their operations are a weave of collective labour”. Solutions like these make that collective labour visible.
Fascinatingly, one “traditionalist” critic of Eyler’s book suggested it had “almost no evidence” that these solutions would address their concerns — only to reveal that their concerns came from a combination of Google AI and “a friend”. In reality, the case for transforming or even abolishing grades is well established.
Inevitably, some will criticise transformative changes like these as unrealistic. While I don’t want to accept the terms of a criticism that privileges pragmatism over merit, I feel compelled to ask: what would really be more realistic, educators implementing different grading systems in courses they design, or asking, even intimidating, students to just stop cheating — and their obeying? Indeed, the carceral abolition movement is framed by similar questions.
Image: Nathaniel Watson