this post was submitted on 19 May 2025
1536 points (98.1% liked)

Microblog Memes

[–] Impleader@lemmy.world 24 points 8 months ago (3 children)

I don’t trust LLMs for anything based on facts or complex reasoning. I’m a lawyer and any time I try asking an LLM a legal question, I get an answer ranging from “technically wrong/incomplete, but I can see how you got there” to “absolute fabrication.”

I actually think the best current use for LLMs is for itinerary planning and organizing thoughts. They’re pretty good at creating coherent, logical schedules based on sets of simple criteria as well as making communications more succinct (although still not perfect).

[–] takeda@lemm.ee 6 points 8 months ago

Sadly, the best use case for LLM is to pretend to be a human on social media and influence their opinion.

Musk accidentally showed that's what they are actually using AI for, by having Grok inject disinformation about South Africa.

[–] sneekee_snek_17@lemmy.world 5 points 8 months ago (1 children)

The only substantial uses I have for it are occasional blurbs of R code for charts, rewording a sentence, or finding a precise word when I can't think of it.

[–] NielsBohron@lemmy.world 3 points 8 months ago* (last edited 8 months ago) (1 children)

It's decent at summarizing large blocks of text and pretty good at rewording things in a diplomatic/safe way. I used it the other day for work when I had to write a "staff appreciation" blurb and couldn't come up with a reasonable way to take my 4 sentences of aggressively pro-union rhetoric and turn it into one sentence that comes off pro-union but not anti-capitalist (edit: it still needed an editing pass to put it in my own voice and add some details, but it definitely got me close to what I needed).

[–] sneekee_snek_17@lemmy.world 5 points 8 months ago (1 children)

I'd say it's good at things you don't need to be good at.

For assignments I'm consciously half-assing, or readings I don't have the time to thoroughly examine, sure, it's perfect.

[–] NielsBohron@lemmy.world 4 points 8 months ago

exactly. For writing emails that will likely never be read by anyone in more than a cursory scan, for example. When I'm composing text, I can't turn off my fixation on finding the perfect wording, even when I know intellectually that "good enough is good enough." And "it's not great, but it gets the message across" is about the only strength of ChatGPT at this point.

[–] Honytawk@feddit.nl 0 points 8 months ago

Can you try again using an LLM search engine like perplexity.ai?

Then just click the link next to each piece of information to validate where it came from.

LLMs aren't to be trusted, but that was never the point of them.