There’s no simple solution to universities’ AI worries | Letters

TruthLens AI Suggested Headline:

"Universities Face Complex Challenges in Addressing AI Cheating Concerns"

AI Analysis Average Score: 8.5
These scores (0-10 scale) are generated by TruthLens AI's analysis, assessing the article's objectivity, accuracy, and transparency. Higher scores indicate better alignment with journalistic standards.

TruthLens AI Summary

The ongoing debate surrounding the use of generative AI in higher education has highlighted significant challenges faced by universities in addressing academic integrity. Dr Craig Reeves argues that institutions are hesitant to confront AI-driven cheating because of their financial dependence on international students, who play a crucial role in supporting the UK's higher education sector. However, the assertion that universities could easily identify AI-generated work is misleading. Research indicates that AI detection tools, such as those cited by Reeves, are unreliable: a recent study found that these tools accurately identified AI usage in fewer than 40% of cases, falling to just 22% in adversarial situations where the use of AI was intentionally obscured. This raises questions about the feasibility of relying on such technology to enforce academic standards effectively.

As universities grapple with these complexities, some are opting for secure assessments, including in-person exams, while others are adapting their evaluation methods to account for the potential use of AI. Professor Paul Johnson stresses the need to think carefully about assessment now that almost limitless, superficially plausible text can be generated at a click, while Professor Robert McColl Millar advocates more analytical assessments that challenge students to engage with new material in a limited timeframe, rather than relying solely on conventional essay formats. This shift could help mitigate the risks posed by AI while fostering critical thinking and the application of knowledge. Overall, the discourse reflects a broader concern about balancing academic integrity with the evolving landscape of technology in education, urging institutions to reconsider their assessment strategies in light of these developments.


Unanalyzed Article Content

I enjoyed the letter from Dr Craig Reeves (17 June) in which he argues that higher education institutions are consciously choosing not to address widespread cheating using generative AI so as not to sacrifice revenues from international students. He is right that international students are propping up the UK’s universities, of which more than two-fifths will be in deficit by the end of this academic year. But it is untrue that universities could simply spot AI cheating if they wanted to. Dr Reeves says that they should use AI detectors, but the studies that he quotes rebut this argument.

The last study he cites (Perkins et al, 2024) shows that AI detectors were accurate in fewer than 40% of cases, and that this fell to just 22% in “adversarial” cases – when the use of AI was deliberately obscured. In other words, AI detectors failed to spot that AI had been used three-quarters of the time.

That is why it is wrong to say there is a simple solution to the generative AI problem. Some universities are pursuing academic misconduct cases with verve against students who use AI. But because AI leaves no trace, it is almost impossible to definitively show that a student used AI, unless they admit it.

In the meantime, institutions are switching to “secure” assessments, such as the in-person exams he celebrates. Others are designing assessments assuming students will use AI. No one is saying universities have got everything right. But we shouldn’t assume conspiracy when confusion is the simpler explanation.
Josh Freeman
Policy manager, Higher Education Policy Institute; author, Student Generative AI Survey 2025

The use of AI to “write” things in higher education has prompted significant research and discussion in institutions, and the accurate reporting of that research is obviously important. Craig Reeves mentions three papers in support of the Turnitin AI checker, claiming that universities opted out of this function without testing it because of fears over false positive flagging of human-written texts as AI-generated. One of those papers says: “The researchers conclude that the available detection tools are neither accurate nor reliable and have a main bias towards classifying the output as human-written rather than detecting AI-generated text” (Weber-Wulff et al); and a second found Turnitin to be the second worst of the seven AI detectors tested for flagging AI-generated texts, with 84% undetected (Perkins et al). An AI detector can easily avoid false positives by not flagging any texts.

We need to think carefully about how we are going to assess work when almost limitless, superficially plausible text can be produced at a click.
Prof Paul Johnson
University of Chester

In an otherwise well-thought-out critique of the apparent (and possibly convenient) blind spot higher education has for the use of AI, Craig Reeves appears to be encouraging a return to traditional examinations as a means of rooting out the issue.

While I sympathise (and believe strongly that something should be done), I hope that this return to older practices will not happen in a “one size fits all” manner. I have marked examinations for well over 30 years. During that period I have regularly been impressed by students’ understanding of a topic, but I can remember enjoying reading only one examination essay. The others, no matter how good, read like paranoid streams of consciousness. A central transferable skill that degrees in the humanities offer is the ability to write well and cogently about any given topic after research. Examinations don’t – can’t – offer that.

I would call for a move towards more analytical assessment, where students are faced with new material that must be considered in a brief period. I think that the move away from traditional essays as the sole form of assessment might help to lessen (not, of course, halt) the impact of external input. From experience, this focus also helps students move towards application of new understanding, rather than a passive digestion of ideas.
Prof Robert McColl Millar
Chair in linguistics and Scottish language, University of Aberdeen

Have an opinion on anything you’ve read in the Guardian today? Please email us your letter and it will be considered for publication in our letters section.

Source: The Guardian