Memes
Post memes here.
A meme is an idea, behavior, or style that spreads by means of imitation from person to person within a culture and often carries symbolic meaning representing a particular phenomenon or theme.
An Internet meme, or simply a meme, is a cultural item that is spread via the Internet, often through social media platforms. The name comes from the concept of memes proposed by Richard Dawkins in 1976. Internet memes can take various forms, such as images, videos, GIFs, and other viral sensations.
- Wait at least 2 months before reposting
- No explicitly political content (about political figures, political events, elections, and so on); !politicalmemes@lemmy.ca may be a better place for that
- Use NSFW marking accordingly
"figure it out" is not technically wrong but it's worse than "it depends" so it's definitely very unhelpful. Which is kind of the opposite of what an LLM should be.
I'm only playing devil's advocate. I'm not a fan of LLMs myself.
You need to know how to use the tool to get the correct output. In this case, it's giving you a literal answer. Craft your question so that it gives you what you're actually looking for. Look up "prompt engineer" for a more thorough answer. It's how we thought LLMs were going to work to begin with.
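Roughly what I mean, as a sketch. The `ask_llm` helper here is hypothetical, just a stand-in for whichever chat API you actually use; the difference between the two prompts is the point:

```python
# Hypothetical helper standing in for whatever LLM chat API you use.
def ask_llm(prompt: str) -> str:
    raise NotImplementedError("plug in your model's API here")

# Vague prompt: invites a literal non-answer like "it depends" or "figure it out".
vague = "Which programming language should I learn?"

# Constrained prompt: states your goal, your constraints, and the shape of answer you want.
specific = (
    "I'm a beginner with about 5 hours a week who wants to build small web apps. "
    "Recommend exactly one programming language and give three reasons as bullet points."
)

# The second prompt leaves the model much less room to dodge the question.
# print(ask_llm(specific))
```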
Though the phrase "prompt engineer" is so funny. It has literally nothing to do with engineering at all. Like having a PhD in "Google Search" 🤣
I guess it's like social engineering, but for LLMs
Disagree. The short-term solution is for you to change your prompt, but it's definitely a shortcoming of the AI when the answer is strictly useless.
It's like crime: it should be safe everywhere, anytime, because of police and laws, but since it's not, you can't go everywhere anytime. That's not on you, but you still have to deal with it.
Different language models handle prompts differently.
Other models will take your question, break it down internally, figure out what you're really asking, and then spit out an answer. It still might not be right, but it will give you a better answer than that.
We're ragging on a wish.com version of a model and its response.
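For what it's worth, you can fake that breakdown step yourself on weaker models by asking for it explicitly. A rough sketch, with the same caveat that `ask_llm` is a hypothetical stand-in for a real chat API:

```python
# Hypothetical helper standing in for whatever LLM chat API you use.
def ask_llm(prompt: str) -> str:
    raise NotImplementedError("plug in your model's API here")

question = "Which programming language should I learn?"

# Ask the model to do the breakdown step explicitly before it answers.
decomposed = (
    "Before answering, restate what I'm really asking and list the factors that matter "
    "(my goals, experience, time available). Then make reasonable assumptions and give "
    "one concrete recommendation.\n\n"
    f"Question: {question}"
)

# Better models do something like this internally; weaker ones often need to be told.
# print(ask_llm(decomposed))
```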