I commend your reporting of the AI scandal in UK universities (Revealed: Thousands of UK university students caught cheating using AI, 15 June), but “tip of the iceberg” is an understatement. While freedom of information requests tell us about the universities that are catching AI cheating, the real problem lies with the universities that are not.
In 2023, a widely used assessment platform, Turnitin, released an AI indicator, reporting high reliability from huge-sample tests. However, many universities opted out of this indicator without testing it. Noise about high “false positives” circulated, but independent research has debunked these concerns (Weber-Wulff et al, 2023; Walters, 2023; Perkins et al, 2024).
The real motivation may be that institutions relying on high-fee-paying international cohorts would rather not know; the motto is “see no cheating, hear no cheating, lose no revenue”. The political economy of higher education is driving a scandal of unreliable degree-awarding and the deskilling of graduates on a mass scale. Institutions that are biting the bullet, like mine, will struggle with the costs of running rigorous assessments, but know the costs of not doing so will be far greater.
If our pilots couldn’t fly planes themselves or our surgeons didn’t know our arses from our elbows, we’d be worried – but we surely want our lawyers, teachers, engineers, nurses, accountants, social workers etc to have real knowledge and skills too.
A sector sea change is under way, with some institutions publicly adopting proper exams (maligned as old-fashioned, rote-learning, unrealistic etc) that test what students can actually do themselves. Institutions that are resistant to ripping off the plaster of convenient yet compromised assessments will, I’ll wager, have to some day explain themselves to a public inquiry.
Dr Craig Reeves
Birkbeck, University of London