this post was submitted on 15 Dec 2025
721 points (98.5% liked)

Technology

[–] Rhaedas@fedia.io 37 points 2 days ago

I'm going to take this from a different angle. These companies have over the years scraped everything they could get their hands on to build their models, and given the volume, most of that is unlikely to have been vetted well, if at all. So they've been poisoning the LLMs themselves in the rush to get the best thing out there before others do, and that's why we get the shit we get in the middle of some amazing achievements. The very fact that they've been growing these models not with cultivation principles but with guardrails says everything about the core source's tainted condition.

[–] absGeekNZ@lemmy.nz 17 points 2 days ago

So what if someone were to hypothetically label an image in a blog or an article as something other than what it is?

Or maybe label an image that appears twice as two similar but different things, such as a screwdriver and an awl.

Do they have a specific labeling schema that they use, or is it any text associated with the image?
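
To make that second question concrete, here's roughly what I imagine a naive scraper doing. This is a hypothetical sketch (the function name and logic are my own, and real pipelines vary by lab), but alt text and nearby captions are the usual suspects:

```python
# Hypothetical sketch: how a naive scraper might pair images with
# "any text associated with the image". Real dataset pipelines
# differ in detail; this just shows the common text sources.
import requests
from bs4 import BeautifulSoup

def extract_image_text_pairs(url):
    html = requests.get(url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    pairs = []
    for img in soup.find_all("img"):
        src = img.get("src")
        if not src:
            continue
        # Alt text is the most common label source...
        text = img.get("alt") or img.get("title")
        # ...but a <figcaption> wrapping the image may be used too.
        if not text:
            fig = img.find_parent("figure")
            if fig and fig.figcaption:
                text = fig.figcaption.get_text(strip=True)
        if text:
            pairs.append((src, text))
    return pairs
```

Under a scheme like that, a misleading alt attribute or figcaption flows straight into the (image, text) training pair, which is exactly the mislabeling channel I'm asking about.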

[–] DarkSideOfTheMoon@lemmy.world 0 points 20 hours ago

So programmers losing their jobs could create multiple blogs and repos full of poisoned data and put the models at risk?

[–] Hackworth@piefed.ca 16 points 2 days ago

There's a lot of research around this. LLMs go through phase transitions when they reach the thresholds described in Multispin Physics of AI Tipping Points and Hallucinations. That paper is more about predicting the transitions between helpful output and hallucination within regular prompting contexts, but we see similar phase transitions between roles and behaviors under fine-tuning in Weird Generalization and Inductive Backdoors: New Ways to Corrupt LLMs.

This may be related to attractor states that we're starting to catalog in the LLM's latent/semantic space. It seems like the underlying topology contains semi-stable "roles" (attractors) that the LLM generations fall into (or are pushed into in the case of the previous papers).

Unveiling Attractor Cycles in Large Language Models

Mapping Claude's Spiritual Bliss Attractor
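
As a toy picture of what "attractor" means here (my own illustration, not taken from any of the papers above): in a simple iterated map, different starting points get pulled into different stable states, loosely the way repeated generations can settle into a role.

```python
# Toy illustration (mine, not from the papers): an iterated map
# with two attractors. Different initial states get pulled into
# one of two stable fixed points, loosely analogous to generations
# settling into semi-stable "roles" in latent space.
def step(x):
    # Gradient descent on a double-well potential V(x) = (x^2 - 1)^2,
    # whose minima at x = -1 and x = +1 act as the two attractors.
    return x - 0.1 * 4 * x * (x**2 - 1)

for x0 in (-1.5, -0.2, 0.2, 1.5):
    x = x0
    for _ in range(100):
        x = step(x)
    print(f"start {x0:+.1f} -> settles at {x:+.3f}")
# start -1.5 -> settles at -1.000
# start -0.2 -> settles at -1.000
# start +0.2 -> settles at +1.000
# start +1.5 -> settles at +1.000
```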

The math is all beyond me, but as I understand it, some of these attractors are stable across models and languages. We do, at least, know that there are some shared dynamics that arise from the nature of compressing and communicating information.

Emergence of Zipf's law in the evolution of communication

But the specific topology of each model is likely some combination of the emergent properties of information/entropy laws, the transformer architecture itself, language similarities, and the similarities in training data sets.
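
For anyone who wants to see the Zipf point concretely, here's a quick self-contained check (my own sketch, not the paper's method; "corpus.txt" is a placeholder for any large plain-text file). Rank the words of a decent-sized text by frequency and the frequency falls off roughly as 1/rank:

```python
# Quick empirical check of Zipf's law: word frequency should fall
# off roughly as 1 / rank, so freq * rank stays roughly constant
# across the top ranks.
from collections import Counter
import re

# "corpus.txt" is a placeholder -- any large plain-text file works.
text = open("corpus.txt", encoding="utf-8").read().lower()
counts = Counter(re.findall(r"[a-z']+", text))

for rank, (word, freq) in enumerate(counts.most_common(10), start=1):
    print(f"{rank:2d}  {word:<12}  freq={freq:<8}  freq*rank={freq * rank}")
```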

[–] Fandangalo@lemmy.world 8 points 2 days ago

Garbage in, garbage out.
