this post was submitted on 25 Mar 2026
1556 points (99.2% liked)
Microblog Memes
AI "retrieves" facts? Not my experience.
I personally wasn't able to reproduce this https://www.lesswrong.com/posts/52tYaGQgaEPvZaHTb/was-barack-obama-still-serving-as-president-in-december but it still provides a good illustration of what AI's idea of "retrieving facts" looks like.
i recently got access to the paid version of Claude at my job. they wanted us to automate some routine tasks, fine. i had it make something, then asked how i could save it as a skill for future use. it said it doesn't have skills or macros. i said what, yes there are skills right there in the customize section. it came back with the usual "you're right! let me check... oh yes indeed there is such a function. my bad. here's more information from the web: ..."
like... oh my god. imagine if this were an unpaid intern. they would be immediately shot into the atmosphere. but instead we pay for this shit.
Yes, but not nearly as much. :(
Yes, such things can happen... I once asked an LLM a few questions about me (under my real name), information that is publicly available on the Internet (i.e. it should be in its training data). It answered a simple yes-or-no question wrongly. Then I asked it a follow-up question, which it answered more correctly, but the new answer contradicted the wrong one, and it went "this seems to contradict my previous answer that...".
... Isn't LessWrong where the Zizians came out of?
In my experience Microsoft Copilot is wildly inaccurate about facts describing aspects of Microsoft software products like Teams, or even Microsoft Copilot itself.
All AI does is generate plausible-sounding text. It doesn't care about whether it is true or false.
I am not generally anti-AI, nor generally pro-AI. There are good uses of AI and bad uses. For example I used AI to generate my profile picture here; the creation of art (as long as there is human review) is one of the best uses of AI I can think of...
But asking it for factual information, expecting it to be correct, and making decisions based on it? Anyone who does that deserves whatever negative consequences follow.
AI is good for quickly generating "realistic enough" stat sheets for pen-and-paper campaigns. Not for actual research that affects people.