this post was submitted on 18 Mar 2026
274 points (98.6% liked)

Microblog Memes


A place to share screenshots of Microblog posts, whether from Mastodon, tumblr, ~~Twitter~~ X, KBin, Threads or elsewhere.

Created as an evolution of White People Twitter and other tweet-capture subreddits.

RULES:

  1. Your post must be a screen capture of a microblog-type post that includes the UI of the site it came from, preferably also including the avatar and username of the original poster. Including relevant comments made to the original post is encouraged.
  2. Your post, included comments, or your title/comment should include some kind of commentary or remark on the subject of the screen capture. Your title must include at least one word relevant to your post.
  3. You are encouraged to provide a link back to the source of your screen capture in the body of your post.
  4. Current politics and news are allowed, but discouraged. There MUST be some kind of human commentary/reaction included (either by the original poster or you). Just news articles or headlines will be deleted.
  5. Doctored posts/images and AI are allowed, but discouraged. You MUST indicate this in your post (even if you didn't originally know). If an image is found to be fabricated or edited in any way and it is not properly labeled, it will be deleted.
  6. Absolutely no NSFL content.
  7. Be nice. Don't take anything personally. Take political debates to the appropriate communities. Take personal disagreements & arguments to private messages.
  8. No advertising, brand promotion, or guerrilla marketing.

founded 2 years ago
top 17 comments
[–] TheTechnician27@lemmy.world 54 points 1 week ago (2 children)

What's wrong with the "Chihuahua meat" one besides violating Western mores about which sentient, feeling animals are food and which aren't?

[–] LodeMike@lemmy.today 15 points 1 week ago (2 children)

It's pointing out how the model ignores one part of the question just because the question seems normal/makes more sense without it.

[–] TheTechnician27@lemmy.world 23 points 1 week ago* (last edited 1 week ago) (1 children)

What's materially different if the question were "Can I put cow meat in the microwave?"? The LLM accurately reflects what the USDA says about microwaving meat, so would it be similarly perceived as ridiculous if its answer to the question about cow meat were the same as how it answered here? Is the fact that it dropped "cow" from "cow meat" problematic? Does it have to stop and warn you about the ethical dangers of eating beef? Should it remind you that some cultures would frown upon it?

[–] LodeMike@lemmy.today 3 points 1 week ago (1 children)

The ethos of Chihuahua.

These things are just statistical text transformers, so it's interesting that it [presumably] doesn't mention it.

[–] Honytawk@feddit.nl -3 points 1 week ago

It wouldn't mention it with either Chihuahua meat or cow meat.

So why are you differentiating?

[–] Thedogdrinkscoffee@lemmy.ca 3 points 1 week ago

Like half of my bosses at work.

[–] Bytemeister@lemmy.world 2 points 1 week ago (1 children)

TBF, the question didn't say anything about eating the meat, or even cooking it.

The LLM just assumed that you were going to cook it in the microwave and eat it.

I'm having trouble coming up with a meat that would be unsafe to put in a microwave. Maybe poison dart frog meat?

[–] TheTechnician27@lemmy.world 2 points 1 week ago* (last edited 1 week ago) (1 children)

Scenario 1: The LLM doesn't understand the obvious meaning of "can I put [meat] in the microwave?"

Haha, wow, what a broken piece of shit.

Scenario 2: The LLM understands this obvious meaning.

Um, ackschually, they didn't say they were going to use the microwave to cook the meat.

You've concocted a scenario where 1) a correct, human-like answer is wrong, and more importantly 2) any answer the LLM gives would be wrong. I hope I'm missing the sarcasm in this delusional level of pedantry.

[–] Bytemeister@lemmy.world 3 points 1 week ago

Yes, it was sarcastic pedantry.

[–] AffineConnection@lemmy.world 36 points 1 week ago

I know that the AI still makes blunders like this, but this is from 2023.

[–] village604@adultswim.fan 19 points 1 week ago* (last edited 1 week ago) (2 children)

I really have a hard time believing things like this since they could have just changed what was in the prompt text box.

But I have witnessed MS Copilot telling the user to use a Microsoft product that was retired a decade ago, and when that was pointed out it provided a Microsoft product that doesn't exist. Which is even more embarrassing for them.

[–] otacon239@lemmy.world 7 points 1 week ago (1 children)

You’d think the one thing they’d think to do is feed it a bunch of documentation so it could actually reference those, but they probably just have a really long prompt along the lines of “you’re a super helpful bot that knows everything and can figure out anything — never say no!”

[–] Passerby6497@lemmy.world 2 points 1 week ago

I'm assuming they have, but it was just links to the Microsoft help articles. And as we all know, every single one of those is a 404.

[–] Hudell@lemmy.dbzer0.com 3 points 1 week ago (1 children)

I really lost all hope when I saw someone tell ChatGPT about an issue they were having with a certain npm package, and the clanker said "ah yes, that is an issue that was present in version 2.1 of the package; it was fixed in version 2.2. I recommend you update it; here's the full changelog" and then provided a whole list of things that had been fixed in version 2.2 of the lib.

Except 2.1 was the latest version of that package, and it hadn't even had any new commits since that version, nor any issues matching anything close to the described problem.

[–] Passerby6497@lemmy.world 3 points 1 week ago

The number of times that MicroSlop's own AI has given me links to MicroSlop's documentation that no longer exist is almost as high as the number of times I've had that happen on the MicroSlop help forums. Only, this time it doesn't have an ironic warning near other links talking about how non-MicroSlop links are unreliable and may disappear at any time.

[–] panda_abyss@lemmy.ca 11 points 1 week ago

This isn’t really fair

These aren’t SAT prep questions, how can you expect them to be answered?

[–] samus12345@sh.itjust.works 5 points 1 week ago

The first and fourth answers are correct - you can do both of those things. You didn't ask if you should.