this post was submitted on 08 Feb 2026
210 points (98.2% liked)

Technology

80859 readers
2949 users here now

This is a most excellent place for technology news and articles.


Our Rules


  1. Follow the lemmy.world rules.
  2. Only tech related news or articles.
  3. Be excellent to each other!
  4. Mod approved content bots can post up to 10 articles per day.
  5. Threads asking for personal tech support may be deleted.
  6. Politics threads may be removed.
  7. No memes allowed as posts, OK to post as comments.
  8. Only approved bots from the list below, this includes using AI responses and summaries. To ask if your bot can be added please contact a mod.
  9. Check for duplicates before posting, duplicates may be removed
  10. Accounts 7 days and younger will have their posts automatically removed.

Approved Bots


founded 2 years ago
MODERATORS
you are viewing a single comment's thread
view the rest of the comments
[–] FauxLiving@lemmy.world 7 points 1 day ago* (last edited 1 day ago) (1 children)

A bit flip, but this reads like people discovering that there is a hammer built specifically for NASA with specific metallurgical properties at the cost of $10,000 each where only 5 will ever be forged, because they were all intended to sit in a space ship in orbit around the Moon.

Then someone comes along and posts and article about a person who posted on Tumblr about how they were surprised that one was used to smash out a car window to steal a Door Dash order.


LLMs will always be vulnerable to prompt injection because of how they function. Maybe, at some point in the future, we'll understand enough about how LLMs represent knowledge internally so that we can craft specific subsystems to mitigate prompt injection... however, in 2026, that is just science fiction.

There are actual academic projects which are studying the boundaries of the prompt-injection vulnerabilities if you read in the machine learning/AI journals. These studies systemically study the problem, gather data and demonstrate their hypothesis.

One of the ways you can tell real Science from 'hey, I heard' science is that real science articles don't start with 'Person on social media posted that they found...'

This is a very interesting topic and if you're interested you can find the actual science by starting here: https://www.nature.com/natmachintell/.

[–] JackbyDev@programming.dev 17 points 1 day ago (1 children)

I wouldn't have necessarily thought it obvious Google Translate uses an LLM so this is still interesting.

[–] FauxLiving@lemmy.world -3 points 1 day ago* (last edited 1 day ago) (2 children)

In my testing, by copying the claimed 'prompt' from the article into Google Translate, it simply translated the command. You can try it yourself.

So, the source of everything that kicked off the entire article, is 'Some guy on Tumblr' vouching for an experiment, which we can all easily try and fail to replicate.

Seems like a huge waste of everyone's time. If someone is interested in LLMs, then consuming content like in the OP feels like knowledge but it often isn't grounded in reality or is framed in a very misleading manner.

On social media, AI is a topic that is heavily loaded with misinformation. Any claims that you read on social media about the topic should be treated with skepticism.

If you want to keep up on the topic, then read the academia. It's okay to read those papers even if if you don't understand all of it. If you want to deepen your knowledge on the subject, you could also watch some nice videos like 3Blue1Brown's playlist on Neural Networks: https://www.youtube.com/watch?v=aircAruvnKk&list=PLZHQObOWTQDNU6R1_67000Dx_ZCJB-3pi. Or brush up on your math with places like Khan Academy (3Blue1Brown also has a good series on Linear Algebra if you want more concepts than calculations).

There's good knowledge out there, just not on Tumblr

[–] teft@piefed.social 3 points 22 hours ago

Google patches things like this very quickly. They have for decades. That’s probably why it doesn’t work for you since it’s been at least 8 hours since the original post.

[–] JackbyDev@programming.dev 2 points 23 hours ago* (last edited 23 hours ago)

In my testing, by copying the claimed 'prompt' from the article into Google Translate, it simply translated the command. You can try it yourself.

So, the source of everything that kicked off the entire article, is 'Some guy on Tumblr' vouching for an experiment, which we can all easily try and fail to replicate.

https://lemmy.world/comment/22022202