Higher Education in the Spotlight of the AI Surge
- Lawrence Ng

The use of AI among students in higher education has risen dramatically, according to a 2025 survey by the Higher Education Policy Institute (HEPI), a UK-based independent think tank. A remarkable 92% of students now use AI in some capacity, up sharply from the 66% recorded in 2024. As Generative AI floods university campuses, a pressing question emerges: are academic institutions genuinely evolving, or are they simply scrambling to protect an outdated system? This rapid change has created a pivotal moment for universities worldwide, placing traditional educational models under intense and necessary scrutiny.

Responding to this surge, Dr. Jürgen Rudolph, an adjunct lecturer at Murdoch University in Singapore, underscores the urgency of this reality, noting that the ubiquity of generative technology creates deep logistical and philosophical challenges for educators. He explains, "GenAI produces a genuine headache: it is increasingly difficult to determine whether a student wrote an assignment, whether a teacher set it, or indeed whether a teacher marked it". According to Dr. Rudolph, the core problem stems from outdated evaluation metrics that rely heavily on unsupervised writing. Expanding on this dilemma, he argues, "The real issue is that traditional assessments were designed for a world in which producing fluent text required genuine human effort. That world no longer exists. When a student can generate a competent essay in seconds, the unsupervised written assignment stops being a reliable measure of learning".
While these traditional methods may be failing, Dr. Rudolph does not believe evaluation should be abandoned entirely; rather, he suggests the crisis is an opportunity for evolution. He advocates for a complete redesign, stating, "What excites me far more is the move towards authentic assessment: tasks rooted in real-world complexity where AI becomes a tool rather than a threat". To be successful, this shift requires educators to generate evidence of genuine learning through "oral examinations, iterative drafts with feedback, project-based work tied to real contexts, and process documentation that makes the learning journey visible".
The urgency to overhaul education is frequently fuelled by extravagant industry claims that gloss over Generative AI's hallucinations, a premise Dr. Rudolph vigorously dismantles by questioning the fundamental nature of the technology. When asked if AI possesses more knowledge than human educators, he retorts, "Let me challenge the premise before addressing the question, because the premise is doing a great deal of heavy lifting. Do these models truly possess more knowledge than any human educator? I would argue they do not and that this claim is itself a product of the very hype we ought to be resisting". He views the technology not as an omniscient oracle, but as a flawed helper, explaining, "What GenAI actually resembles is a sycophantic, super-hardworking, and deeply forgetful research assistant. It will work tirelessly, never complain, and agree with almost anything you say - which is precisely what makes it dangerous if left unsupervised".
While AI's capabilities in specific, contained domains are undeniable and have demonstrated significant value, its verifiable effectiveness across diverse economic and cultural landscapes remains unproven. This nuanced view is shared by Dr. Stefan Popenici, a Sydney-based independent researcher in AI, who challenges the assumption that AI adoption is producing genuine economic growth: "What we are witnessing is AI-driven valuation growth - and those are very different things". He elaborates on this precarious financial situation, warning, "The current AI economy has all the hallmarks of a speculative bubble. Nvidia lost $600 billion in a single day when DeepSeek demonstrated that comparable AI could be built at a fraction of the cost... The broader pattern is dismally familiar: extravagant promises, a frenzy of capital, eye-watering prices detached from demonstrated returns".

Echoing this scepticism about the sweeping economic narratives around AI, Ms. Shannon Tan, a Senior Lecturer at Amity Global Institute, points out the tragic paradox evident in some academic institutions that have fully embraced the hype. She reveals, "None of this means AI is trivial - it is a genuinely useful technology within specific domains. But the gap between what is marketed and what materialises is vast. One US university announced a $17 million OpenAI partnership whilst simultaneously issuing faculty layoff notices and proposing $375 million in budget cuts. That is not a transformation. That is institutional auto-cannibalism".
Dr. Popenici concurs that change is necessary but emphasises that it should be deliberate rather than merely reactive. He underscores this need for caution, asserting that the drive to keep pace with industry advancements often overlooks the fundamental purpose of education. He observes, "Yes, institutional cycles are slow, and yes, they need reform. But speed is not intrinsically virtuous. The frantic pace of AI development has produced more wreckage than progress. What accreditation frameworks ought to do is ensure that graduates can evaluate these tools critically, not merely keep pace with the latest release cycle. That is a more durable ambition than chasing weekly updates, and frankly, a more important one."
Dr. Popenici also warns against relying on corporate training programmes to define academic futures and student capabilities. He articulates this concern clearly, observing, "Tech companies are skilled at building training programmes - designed, naturally, so that people use their products. That training will always carry a techno-solutionist, techno-optimistic flavour. It rarely pauses to ask: how much technology should we use, and for what? Universities, at their best, do precisely that".

Consequently, rather than merely teaching students how to operate software, academics are urging a profound shift toward critical understanding. Dr. Fadhil Ismail, Senior Lecturer at Kaplan Higher Education Academy, strongly rejects the idea that mastering system prompts is sufficient for higher education. He clarifies his position: "Critical AI literacy does not begin with tool use. It begins with understanding what AI is and what it is not. GenAI systems perform statistical pattern-matching, not reasoning. They are neither truly artificial - built as they are on vast human labour, from mineral extraction to precarious gig workers annotating data - nor genuinely intelligent". Dr. Ismail stresses that true educational advancement requires recognising that fluent output does not mean accurate output. He insists that institutions must cultivate "the capacity to refuse - to decide that certain intellectual tasks should not be delegated to machines".
Rather than yielding to the temptation of technological fixes, educators are being called on to fundamentally reassess and redefine the core mission of the education system. Ms. Tan advocates for a holistic approach in which essential human skills are inextricably interwoven with technical knowledge. She maintains, "Empathy, ethical judgment, and metacognition matter enormously. But they need not be taught in opposition to technical knowledge. They should be embedded within it". Acknowledging the need for systemic structural changes without sacrificing academic rigour, Ms. Tan adds, "In an era of breakneck change, lifelong learning is not optional - it is survival. The degree needs reimagining, certainly. But let us not throw out the baby with the bathwater. The capacity for deep, sustained, critical inquiry remains something no six-week certification can replicate".
Ultimately, institutions must avoid rushing into reactive overhauls driven by unproven market forces and software updates. Dr. Rudolph offers a grounded final thought, advising, "So before we redesign entire education systems around the assumption that AI has permanently restructured the economy, perhaps we should wait for the evidence that it actually has. The most prudent thing education can do right now is equip graduates with critical thinking, adaptability, and the capacity to evaluate hype - precisely the skills needed to survive whatever comes next, bubble or bust".
These critical perspectives and analyses are presented by Dr. Stefan Popenici, Dr. Jürgen Rudolph, Dr. Fadhil Ismail, and Ms. Shannon Tan, who serve as the editors of the recently published Handbook of Artificial Intelligence in Higher Education.


