Universities Struggle with AI Cheating: No Easy Fix

Universities face a complex challenge as AI tools like ChatGPT make it difficult to detect student cheating. Solutions are not straightforward.

Jun 23, 2025 | Source: Visive.ai

The use of AI tools like ChatGPT in higher education has sparked significant debate. Lecturers and policymakers are grappling with the issue of AI-generated content, which can be hard to detect and even harder to prevent.

The Complexity of AI Detection

Josh Freeman, a policy manager at the Higher Education Policy Institute, argues that the problem is more nuanced than a simple lack of will. He points out that AI detectors, such as those used by Turnitin, are often inaccurate, citing findings from Perkins et al. (2024) on how easily the tools can be fooled.

"The last study he cites shows that AI detectors were accurate in fewer than 40% of cases, and that this fell to just 22% of 'adversarial' cases – when the use of AI was deliberately obscured. In other words, AI detectors failed to spot that AI had been used three-quarters of the time," Freeman explains.

Shifting Assessment Methods

As a result, universities are exploring alternative assessment methods. Some are reverting to in-person exams, which are harder to cheat in. Others are designing assessments that assume students will use AI, focusing on tasks that require critical thinking and originality.

Prof Paul Johnson from the University of Chester emphasizes the need for careful assessment design. "We need to think carefully about how we are going to assess work, when at a click almost limitless superficially plausible text can be produced," he states.

Balancing Tradition and Innovation

Prof Robert McColl Millar from the University of Aberdeen suggests a move towards more analytical assessments. "I would call for a move towards more analytical assessment, where students are faced with new material that must be considered in a brief period. This focus also helps students move towards application of new understanding, rather than a passive digestion of ideas," he says.

The Financial Impact

The financial pressure on universities adds another layer of complexity. Many institutions rely heavily on international student fees, and the fear of losing revenue may influence their approach to AI cheating. Freeman notes that over two-fifths of UK universities will be in deficit by the end of this academic year, making it difficult to implement costly new measures.

"But it is untrue that universities could simply spot AI cheating if they wanted to. Dr Reeves says that they should use AI detectors, but the studies that he quotes rebut this argument," Freeman adds.

A Call for Responsible Solutions

The challenge of AI in higher education is not just about technology but also about ethics and responsibility. As universities navigate this complex landscape, they must find solutions that balance academic integrity, student learning, and financial sustainability.

In the meantime, the conversation continues, with experts and educators working to find the right balance between tradition and innovation in assessment methods.

Frequently Asked Questions

Why is AI detection difficult in universities?

AI detectors often have low accuracy, especially in adversarial scenarios where AI use is deliberately obscured. This makes it challenging to detect AI-generated content reliably.

What alternative assessment methods are universities using?

Some universities are reverting to in-person exams, while others are designing assessments that assume students will use AI, focusing on tasks that require critical thinking and originality.

How does financial pressure affect universities' approach to AI cheating?

Many universities rely on international student fees, and the fear of losing revenue may influence their approach to implementing costly new measures to detect AI cheating.

What ethical considerations are involved in using AI in education?

The use of AI in education raises ethical questions about academic integrity, student learning, and the responsible use of technology in assessment and teaching.

What is the role of analytical assessments in addressing AI cheating?

Analytical assessments, which require students to engage with new material within a brief period, push students towards applying their understanding rather than passively reproducing ideas, and leave less room for AI-generated answers.
