
Dr. Olivia Guest collaborated with Iris van Rooij on a research project with a paper titled “Against the Uncritical Adoption of ‘AI’ Technologies in Academia” (https://zenodo.org/records/17065099). Their work got a ton of attention once it was published on September 5th, because their research thoroughly explains the massive harm most colleges and universities are doing to their students, their faculty, and the legitimacy of their academic work by pushing Gen AI onto everyone.
Students stop learning, stop thinking, and risk atrophying their brains. Plus, how credible are the degrees that they get after semesters of using ChatGPT? Sorry, now your MSc means absolutely nothing! Professors lose control of the teaching process and lose their ability to assess student work with their brains. This is a sore point for me: I quit the faculty of the Open Institute of Technology (OPIT) because they were trying to force me to use TurnItIn on students. I’m not delegating student bot detection to another fucking bot! Academic work with any connection to Gen AI usage whatsoever is completely illegitimate. Writing an exam with dice rolls and Magic 8 Ball responses would be just as inaccurate, but without all the environmental destruction of LLMs.
Basically, greedy, capitalism-driven colleges and universities around the world are encouraging people to stop thinking, let their brains atrophy, deskill themselves, and destroy the credibility of associated research and credentials.
I’m still trying to get an interview with Iris van Rooij! If you know her, or if you are her, please email me: kim.crawley (at) stopgenai (dot) com.
Anyway, some links…
Against the Uncritical Adoption of ‘AI’ Technologies in Academia: https://zenodo.org/records/17065099
Olivia Guest on Bluesky: https://bsky.app/profile/olivia.science
Guest’s own website: https://olivia.science/
Iris van Rooij on Bluesky: https://bsky.app/profile/irisvanrooij.bsky.social
I’m honoured that Guest took the time and effort to answer my questions.
Kim Crawley: How did you get into your academic specialties?
Olivia Guest: There are many answers I can give, but my favourite is one from when I was 18. Back then, I was looking around online — I do not remember what the context was anymore — and I discovered cognitive science’s article on Wikipedia with the hexagon of fields that compose it. I was both incredibly interested in studying it from first sight, and was relieved that computer science (which I was already about to set off to study) was housed under the cognitive science umbrella. I then moved to an MSc in cognitive and decision sciences and a PhD in psychological sciences. Fast forward to now, I am an Assistant Professor of Computational Cognitive Science; and I work in the Department of Cognitive Science and Artificial Intelligence in the Donders Centre for Cognition and the School of Artificial Intelligence at Radboud University in the Netherlands.
Crawley: Do people overlook the importance of humanities to computing?
Guest: I’m not entirely sure anybody specifically overlooks anything more than anything else: all interdisciplinary interactions are fraught with difficulties, both avoidable and not. And I would not be qualified to speak about humanities and computing per se, but certainly I can comment on the interactions between the social or special sciences and the more natural or physical sciences. I think the main issue I can speak to as a cognitive scientist is that the fields over which cognitive science operates (the hexagon, well known to some, consisting of linguistics, psychology, AI/computer science, neuroscience, philosophy, and anthropology) often struggle to have fruitful dialogues. However, impressively, cognitive scientists do manage to communicate well amongst themselves, often enough for our interdiscipline to be preserved from possible fractal misunderstandings tearing us apart (maybe only just, in some cases), but cognitive science very much lives on. This is not to say, of course, that we do not need to move to protect some of the edges in our hexagon (Guest, O. & van Rooij, I. (2025). Critical Artificial Intelligence Literacy for Psychologists. PsyArXiv. https://doi.org/10.31234/osf.io/dkrgj_v1).
To flip the question on its head, I would say sometimes the critique of technology, of the mind-machine parallel, of computationalism, lacks bite or even connection to realities on the ground, when it comes from non-experts in computational cognitive science. We also outline this phenomenon here: “Our colleagues have embraced these systems, uncritically incorporating them into their workflows and their classrooms, without input from experts on automation, cognitive science, computer science, gender and diversity studies, human-computer interaction, pedagogy, psychology, and law to name but a few fields with direct relevant expertise (Sloane et al. 2024).” (Guest, O., et al. (2025). Against the Uncritical Adoption of ‘AI’ Technologies in Academia. Zenodo. https://doi.org/10.5281/zenodo.17065099). Collectively, effectively, and expertly addressing this would fall under the banner of Critical AI Literacy (CAIL).
Crawley: How would you explain to confused people the difference between Gen AI and all the things that were called AI before 2022? (I know there is no formal definition of AI in comp sci.)
Guest: I would start by explaining that distinctions between so-called “generative AI” and other types of statistical models are not actually useful, and in fact play into the technology industry’s false frames and harmful word games. The phrase does not pick out overhyped or harmful models, nor does it pick out models as a function of any formal property they may have without also picking out innocuous statistical models. If they want to learn more, I would direct them to the work I have done in:
Guest, O., Suarez, M., Müller, B., et al. (2025). Against the Uncritical Adoption of 'AI' Technologies in Academia. Zenodo. https://doi.org/10.5281/zenodo.17065099
Guest, O. (2025). What Does 'Human-Centred AI' Mean?. arXiv. https://doi.org/10.48550/arXiv.2507.19960
If people want to understand the history of current artificial neural networks versus those from before the 2010s onwards, and from an insider perspective, I’d suggest looking into my work here as a starting point (Guest, O. & Martin, A. E. (in press). A Metatheory of Classical and Modern Connectionism. Psychological Review. https://doi.org/10.31234/osf.io/eaf2z). We explain how connectionism, the theoretical framework that artificial neural networks are part of, has not only a long history, but has also changed in form scientifically in the last 15 or so years.
Crawley: Talk about your recent paper on ed tech that has gotten media buzz.
Guest: It’s wonderful to see laypeople and experts from various academic disciplines engaging with that work (Guest, O., Suarez, M., Müller, B., et al. (2025). Against the Uncritical Adoption of ‘AI’ Technologies in Academia. Zenodo. https://doi.org/10.5281/zenodo.17065099). I am still a bit shocked, pleasantly surprised, and overwhelmed at the tens of thousands of downloads and views. The backstory is partially explained in (Guest, O., van Rooij, I., Müller, B., & Suarez, M. (2025). No AI Gods, No AI Masters. https://www.civicsoftechnology.org/blog/no-ai-gods-no-ai-masters), although that’s more about the related Open Letter. In general, I would say there is a tide we need to turn back, and we will try our best to do that, because academia is a valuable part of society and, as academics, it’s our duty, and what we are paid for, to protect it.
Crawley: I recently quit OPIT’s faculty because they were forcing me to use TurnItIn to detect student Gen AI use. I can detect Gen AI with my brain and I don’t want to be part of a nasty phenomenon where students are using bots instead of thinking, and teachers respond with bots instead of thinking. I believe even students who agree to use “e-proctoring” software should be shamed.
The emphasis in people’s interpretation of your work seems to be on students using Gen AI to cheat (all Gen AI use is cheating, in my opinion). But what about the skill loss from faculty using Gen AI to grade?
Guest: I completely agree that skill loss — and in fact very deep deskilling — is what is at stake and is already happening. Moreover, I would also urge fellow academics to embrace ungrading, to leave numerical grades behind, and even to turn to ways of caring for students by involving them (with supervision of course) through, for example, giving feedback to each other. Issues like deskilling of ourselves, students, and society at large occur because we were already vulnerable to an extent to such industry capture. To address these I would ask us, as researchers and educators, to cultivate building more mutual values with each other, our students and mentees, and with university governance.
Crawley: Any additional thoughts?
Guest: If people want to stay up-to-date on Critical AI Literacy, please check out: https://olivia.science/ai — and watch this space, I guess. Thanks!
If you are an activist, academic, technologist, writer, or artist who is bringing attention to how fucked up Gen AI is and you’d like to be interviewed for this series, email Kim Crawley at kim.crawley (at) stopgenai.com. I will definitely link to and plug your work if you’re cool!