this post was submitted on 03 Jan 2026
352 points (91.7% liked)
Microblog Memes
you are viewing a single comment's thread
With respect, it sounds like you have no idea about the range of nonsense human students are capable of submitting even without AI.
I used to teach Software Dev at a university, and even at MSc level some of the submissions would have paled in comparison to even GPT-3 output. That said, I never had to deal with the AI problem myself: I taught just before LLMs came into their own. Textsynth had just come out, and I used it as an example of how unintentional bias in training data shapes a model's outputs.
While I no longer teach, I do still work in that space. Ironically, the best way to catch AI-written papers these days is with another AI. That detection is built into the plagiarism-checking software, which breaks down where it found suspicious passages and why it considers them suspicious.
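To make the "flag a passage and explain why" idea concrete, here is a minimal sketch of that kind of report. Real detectors rely on model-based signals (perplexity, burstiness, watermarks); this toy stand-in only measures how uniform sentence lengths are within a paragraph, since LLM prose tends to be more even than human writing. The function name and the threshold are invented for the demo, not taken from any actual product.

```python
# Toy illustration of a detector that flags suspicious passages and says why.
# This is NOT how commercial tools work internally; it's a hand-rolled heuristic
# using sentence-length uniformity as a stand-in for real statistical signals.

import re
import statistics

UNIFORMITY_THRESHOLD = 2.0  # invented cutoff: stdev of sentence lengths, in words


def flag_passages(text: str) -> list[dict]:
    """Split text into paragraphs, score each one, and report why it was flagged."""
    reports = []
    for para in filter(None, (p.strip() for p in text.split("\n\n"))):
        sentences = [s for s in re.split(r"[.!?]+", para) if s.strip()]
        if len(sentences) < 2:
            continue  # too short to measure uniformity
        lengths = [len(s.split()) for s in sentences]
        spread = statistics.stdev(lengths)
        if spread < UNIFORMITY_THRESHOLD:
            reports.append({
                "passage": para[:60],
                "reason": f"sentence lengths unusually uniform (stdev={spread:.1f})",
            })
    return reports
```

A paragraph of equally long, metronomic sentences gets flagged with a human-readable reason, while a paragraph mixing short and rambling sentences passes, which mirrors the per-passage breakdown the real software produces.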
Human students, and non-students, were the training data set. The LLMs will never reach 94% accuracy against that baseline, even with infinite resources. The AI is always, always going to be worse.