this post was submitted on 15 Jun 2025

World News


Guardian investigation finds almost 7,000 proven cases of cheating – and experts say these are the tip of the iceberg

Thousands of university students in the UK have been caught misusing ChatGPT and other artificial intelligence tools in recent years, while traditional forms of plagiarism show a marked decline, a Guardian investigation can reveal.

A survey of academic integrity violations found almost 7,000 proven cases of cheating using AI tools in 2023-24, equivalent to 5.1 for every 1,000 students. That was up from 1.6 cases per 1,000 in 2022-23.

Figures up to May suggest that number will increase again this year to about 7.5 proven cases per 1,000 students – but recorded cases represent only the tip of the iceberg, according to experts.

The data highlights a rapidly evolving challenge for universities: trying to adapt assessment methods to the advent of technologies such as ChatGPT and other AI-powered writing tools.

[–] Scubus@sh.itjust.works 3 points 11 hours ago (1 children)

The article does not state that. It does, however, mention that AI detection tools were used, and that they failed to detect AI writing 90-something per cent of the time. It seems extremely likely they used AI detection software.

[–] practisevoodoo@lemmy.world 2 points 3 hours ago

I'm saying this as someone who has worked for multiple institutions, raised hundreds of conduct cases, and has more on the horizon.

The article says proven cases, which means the academic conduct case was not just raised but upheld. AI detection may have been used (there is a distinct lack of consensus between institutions on that) but would not be the only piece of evidence. Much like the use of Turnitin for plagiarism detection, it is an indication for further investigation; a case would not be raised based solely on a high Turnitin score.

There are variations in process between institutions, and they are changing their processes year on year in direct response to AI cheating. But being upheld would mean that there was direct evidence (a prompt left in the text), the student admitted it ("I didn't know I wasn't allowed to", "yes, but I only…", etc.), and/or there was a viva where, based on discussion with the student, it was clear that they did not know the material.

It is worth mentioning that in a viva it is normally abundantly clear whether a given student did or didn't write the material. When it is not clear, then (based on the institutions I have experience with) universities are very cautious and will give the student the benefit of the doubt (hence the tip of the iceberg).