The test seems kind of dogshit; you could make the same argument against any tool. Calculators or even abacuses would have the same effect.
I'm required to use it for work, and it does speed up some tasks. For some things, though, it ends up like the experiments where skipping the work the first time means the whole process takes longer in the end.
To add to this, we already know that context switching causes a loss in performance.
A person who's thinking about how to solve a problem one way and then has to suddenly think about solving it in another way will perform worse.
https://medium.com/@codewithmunyao/the-hidden-cost-of-context-switching-why-your-most-productive-hours-are-disappearing-43c5b501de19
Here's another article from CMU discussing the same thing: https://www.sei.cmu.edu/blog/addressing-the-detrimental-effects-of-context-switching-with-devops/
What this study actually shows is that a person faced with an unexpected context switch performs worse on a task than a person who has spent the last 12 questions performing the task the same way.
This exact problem would happen if you replaced AI with a calculator, or made a person swap from using paper to doing mental math. The problem here is context switching, not AI.
The way to confirm whether the problem is AI or the context switch would be to continue the test and see if the first group reverts to baseline after 12 questions. Twelve questions is how long the control group had to acclimate to the task since their last context swap at the start of the test.
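To make the confound concrete, here's a toy simulation in Python. Every number in it (baseline accuracy, switch penalty, recovery length) is an assumption I made up for illustration, not a value from the paper. It gives both groups the same skill and the same tool, models only a temporary accuracy dip after a context switch, and the mid-test-switch group still scores worse:

```python
# Toy sketch of the confound: all constants below are made-up assumptions,
# not values from the study.
import random
from statistics import mean

random.seed(0)

BASELINE = 0.80        # assumed steady-state probability of a correct answer
SWITCH_PENALTY = 0.15  # assumed accuracy hit immediately after a switch
RECOVERY = 12          # assumed number of questions needed to re-acclimate

def accuracy(questions_since_switch: int) -> float:
    """Accuracy climbs linearly back to baseline after a context switch."""
    deficit = max(0, RECOVERY - questions_since_switch) / RECOVERY
    return BASELINE - SWITCH_PENALTY * deficit

def test_score(questions_since_switch_at_start: int, n: int = 12) -> int:
    """Score on an n-question test, given how long ago the last switch was."""
    return sum(
        random.random() < accuracy(questions_since_switch_at_start + q)
        for q in range(n)
    )

# "Switched" group: the swap lands right at the start of the scored questions.
# "Control" group: their last swap was 12 questions ago, so they're acclimated.
switched = mean(test_score(0) for _ in range(10_000))
control = mean(test_score(RECOVERY) for _ in range(10_000))
print(f"switched: {switched:.2f}/12  control: {control:.2f}/12")
```

Which is just to say the design can't separate "AI makes you worse" from "switching mid-test makes you worse"; you'd need the follow-up run described above to tell them apart.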
Also of note: this is a paper on arXiv. It has not been published, so it has not gone through a peer-review process, which would almost certainly catch the failure to set up a proper control group.
Are we sure this was written by a human?
AI being released was basically an apocalypse for people who use em dashes.
Here's the most-cited, human-written (2001) paper on the topic of context-switching performance loss: https://www.apa.org/pubs/journals/releases/xhp274763.pdf
Thanks.
And I'm all for em dashes. After all, I started using them after reading enough books. It's just that particular construct that strikes me as especially LLM-y.
AI was trained on human writing. If it produces a certain tone, then that's probably a result of the material that was favoured in training it. That construction was common in human writing before it became common in AI too.
What makes it stick out is when AI uses it in contexts where humans normally wouldn't, but this kind of assertion is common in scientific papers and articles. It would make sense to train an AI on scientific writing, since that tone sounds authoritative and like you have some idea of what you're talking about.
So I don't think this is an LLM-construct; it's an instance of the original style that LLMs copy.
True, but in my experience most people use a comma, not an em dash.
I'd like to see a study on that; I see it mentioned so often that it's almost achieved meme status.
It could very well be a Baader–(👀)Meinhof phenomenon.
That medium post is 100% LLM output.
100%, shit test