Technology
Which posts fit here?
Anything that is at least tangentially connected to technology: social media platforms, information technology, and tech policy.
Post guidelines
[Opinion] prefix
Opinion (op-ed) articles must use the [Opinion] prefix before the title.
Rules
1. English only
The title and associated content have to be in English.
2. Use original link
The post URL should be the original link to the article (even if paywalled), with archived copies left in the body. This helps avoid duplicate posts when cross-posting.
3. Respectful communication
All communication has to be respectful of differing opinions, viewpoints, and experiences.
4. Inclusivity
Everyone is welcome here regardless of age, body size, visible or invisible disability, ethnicity, sex characteristics, gender identity and expression, education, socio-economic status, nationality, personal appearance, race, caste, color, religion, or sexual identity and orientation.
5. Ad hominem attacks
Any kind of personal attack is expressly forbidden. If you can't argue your position without attacking a person's character, you have already lost the argument.
6. Off-topic tangents
Stay on topic. Keep it relevant.
7. Instance rules may apply
If something is not covered by the community rules but is against the lemmy.zip instance rules, the instance rules will be enforced.
Companion communities
!globalnews@lemmy.zip
!interestingshare@lemmy.zip
If you are interested in moderating this community, message @brikox@lemmy.zip.
Isn't that what almost anyone would do?
I've seen some pretty silly AI blunders before, but this one seems rather harmless. You're still going to end up at the setting you need to change to solve the problem, which to me falls squarely in the "close enough" bin.
The problem is twofold, and it applies to all LLMs.

According to the article, the person asked HOW to increase the font size. Copilot then tells Aura to go increase the font size itself, so it's not exactly what the person asked.

Like all LLMs, it MUST provide a solution; it MUST do something. Copilot asks Aura to increase the font size to 150%. It's already at 150%, but instead of going back to the user and saying "the font size is already set to 150%, would you like to increase it more?", it just goes ahead and bumps it up to 200% because it has to. It has no choice; it must provide some kind of solution. LLMs like GPT-5, Claude, etc. will do similar crap, which ultimately shows that recent models are all collectively garbage: they now must ALL provide some sort of solution, even when the majority of the time those solutions are actually hallucinations. LLMs aren't allowed to say "I don't know" or "it's already done" or anything like that.
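To make the point concrete, here's a minimal sketch in Python of the guard clause that's missing. The function names are made up for illustration; this is not Copilot's or Aura's actual implementation, just the check-before-acting logic the agent should have:

```python
# Hypothetical sketch: check the current state before acting, and hand
# the question back to the user instead of escalating to a bigger change.
# All names here (_font_scale, get_font_scale, etc.) are illustrative.

_font_scale = 150  # pretend this is the current system setting

def get_font_scale() -> int:
    return _font_scale

def set_font_scale(value: int) -> None:
    global _font_scale
    _font_scale = value

def handle_font_request(requested: int) -> str:
    current = get_font_scale()
    if current >= requested:
        # Nothing to do: report that and ask, rather than forcing a change.
        return (f"Font size is already at {current}%. "
                "Would you like to increase it more?")
    set_font_scale(requested)
    return f"Font size increased from {current}% to {requested}%."

print(handle_font_request(150))
# -> "Font size is already at 150%. Would you like to increase it more?"
```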
So this video/article just shows how pointless and unreliable LLMs/AI are at even the most basic things. They've all been told that they must provide some kind of solution, always, and it's especially bad with recent updates. Claude will hallucinate 8 times out of 10 in its solutions. GPT-5 now just info-dumps on you, hoping something in there will resonate; it can't provide an accurate, precise solution anymore, it just vomits on your plate and calls it a meal.
Hold up,
No, it did not. Copilot doesn't click or change anything in this video; that's the user clicking 200% after Copilot told him to set it to 150%.
The only actual flaw here is Copilot not seeing that the size was already set to 150%.
Sorry, I just went off the article and didn't watch the video.