[–] TranquilTurbulence@lemmy.zip 36 points 1 month ago (4 children)

Since basically all data is now contaminated, there’s no way to get massive amounts of clean data for training the next generation of LLMs. This should make it harder to develop them beyond the current level. If an LLM isn’t smart enough for you yet, there’s a pretty good chance it won’t be for a long time.
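
To make “clean data” concrete: here’s a minimal sketch of the kind of date-based filtering you’d need, keeping only documents from before LLM output flooded the web. The document schema and the cutoff date are made up for illustration, not any real pipeline:

```python
from datetime import date

# Hypothetical cutoff: public LLMs became widespread in late 2022,
# so anything published before then is treated as "clean".
CONTAMINATION_CUTOFF = date(2022, 11, 30)

def filter_clean_documents(corpus):
    """Keep only documents published before the cutoff.

    `corpus` is assumed to be an iterable of dicts with a `published`
    date and a `text` field -- an illustrative schema, not any real
    dataset format.
    """
    return [
        doc for doc in corpus
        if doc["published"] < CONTAMINATION_CUTOFF
    ]

corpus = [
    {"published": date(2019, 5, 1), "text": "human-written article"},
    {"published": date(2024, 2, 7), "text": "possibly LLM-generated post"},
]
print(filter_clean_documents(corpus))  # only the 2019 document survives
```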

[–] artifex@piefed.social 16 points 1 month ago (3 children)

Didn't Elon breathlessly explain how the plan was to have Grok rewrite and expand on the current corpus of knowledge so that the next Grok could be trained on that "superior" dataset, which would forever rid it of the wokeness?

[–] Naich@lemmings.world 13 points 1 month ago

It started calling itself MechaHitler after the first pass, so I'd be interested to see how much less woke it could get by training itself on that.

[–] Tollana1234567@lemmy.today 3 points 1 month ago (1 children)

Trying to train it to be a Nazi-only LLM is difficult, even though he's lobotomized it a couple of times.

[–] prex@aussie.zone 2 points 1 month ago (1 children)

It was a really entertaining moment in history to see Grok showing up Elon & co. despite their clear attempts to make it conform to their worldview.

[–] artifex@piefed.social 2 points 1 month ago

The common colloquialism is that objective reality has a liberal bias. So either you train your LLM on "woke" science and facts, or it spits out garbage nonsense that is obviously wrong even to the typical Twitter user.

[–] TranquilTurbulence@lemmy.zip 1 points 1 month ago

That’s just Musk talk. I’ll ignore the hype and judge by the results instead.

[–] Xylight@lemdro.id 6 points 1 month ago* (last edited 1 month ago) (1 children)

A lot of LLMs now use intentionally synthesized, AI-generated training data. It doesn't seem to affect them too adversely.
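
Roughly, the idea is distillation-style generation: a strong existing model writes answers to curated prompts, and those pairs become training data for the next model. This is only a sketch of the concept; `teacher_generate` is a hypothetical stand-in, and real pipelines add heavy filtering and verification between generation and training:

```python
# Illustrative sketch of synthetic-data generation, not any lab's pipeline.

def teacher_generate(prompt: str) -> str:
    # In practice this would query a large "teacher" model;
    # this stub just fakes a response for the example.
    return f"Detailed answer to: {prompt}"

def build_synthetic_dataset(seed_prompts):
    """Turn a small set of curated prompts into (prompt, answer) pairs.

    The quality-filtering step that real pipelines rely on is
    omitted here for brevity.
    """
    return [(p, teacher_generate(p)) for p in seed_prompts]

dataset = build_synthetic_dataset([
    "Explain binary search.",
    "Summarize the causes of WWI.",
])
for prompt, answer in dataset:
    print(prompt, "->", answer)
```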

[–] TranquilTurbulence@lemmy.zip 3 points 1 month ago

Interesting. In other models that was a serious problem.

[–] Tollana1234567@lemmy.today 4 points 1 month ago

Law of diminishing returns: LLMs train on the AI slop of other LLMs, which were themselves trained on other LLMs, all the way down to "normal human-written slop".
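
There's a neat toy way to see why this degrades: fit a distribution to a finite sample from the previous generation, over and over. Finite samples under-represent the tails, so the fitted spread tends to drift toward zero. A minimal sketch, purely illustrative and nothing to do with any actual LLM training run:

```python
# Toy illustration of recursive training ("model collapse"): each
# generation fits a Gaussian to a finite sample drawn from the previous
# generation's fit. The fitted sigma becomes a multiplicative random
# walk with a slight downward drift, so run long enough, the
# distribution tends to collapse.
import random
import statistics

random.seed(0)
mu, sigma = 0.0, 1.0  # generation 0: the "human-written" distribution

for generation in range(1, 101):
    sample = [random.gauss(mu, sigma) for _ in range(20)]
    mu = statistics.mean(sample)      # the next "model" learns only
    sigma = statistics.stdev(sample)  # from the previous model's output
    if generation % 10 == 0:
        print(f"gen {generation:3d}: mu={mu:+.3f} sigma={sigma:.3f}")
```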

[–] fascicle@leminal.space 3 points 1 month ago (1 children)

People will find a way somehow

[–] TranquilTurbulence@lemmy.zip 2 points 1 month ago* (last edited 1 month ago)

Oh, I’m sure there is a way. We’ve already grabbed the low-hanging fruit, but the next one is a lot higher. It’s there, but it requires some clever trickery and effort.