[–] tal@lemmy.today 11 points 19 hours ago* (last edited 19 hours ago) (5 children)

> Meta’s chief AI scientist and Turing Award winner Yann LeCun plans to leave the company to launch his own startup focused on a different type of AI called “world models,” the Financial Times reported.
>
> World models are hypothetical AI systems that some AI engineers expect to develop an internal “understanding” of the physical world by learning from video and spatial data rather than text alone.

Sounds reasonable.

That being said, I am willing to believe that an LLM could be part of an AGI. It might well be an efficient way to incorporate a lot of knowledge about the world. Wikipedia helps provide me with a lot of knowledge, for example, though I don't have a direct brain link to it. It's just that I don't expect an AGI to be an LLM.
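
To make the quoted “world models” idea concrete, here's a toy sketch of the core training objective: predict the next *latent* state of a video instead of the next token of text. This assumes PyTorch, and every module, shape, and name here is made up for illustration (loosely JEPA-flavored), not anyone's actual architecture:

```python
import torch
import torch.nn as nn

# Toy world-model objective: encode frames into a latent state and learn
# to predict the *next* latent state. All shapes/names are illustrative.

class Encoder(nn.Module):
    """Maps an RGB frame (3x64x64) to a latent state vector."""
    def __init__(self, latent_dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2), nn.ReLU(),
            nn.Flatten(),
            nn.Linear(64 * 14 * 14, latent_dim),  # 14x14 is the conv output for 64x64 input
        )

    def forward(self, frame):
        return self.net(frame)

class Predictor(nn.Module):
    """Predicts the next latent state from the current one plus an action."""
    def __init__(self, latent_dim=128, action_dim=4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim + action_dim, 256), nn.ReLU(),
            nn.Linear(256, latent_dim),
        )

    def forward(self, z, action):
        return self.net(torch.cat([z, action], dim=-1))

encoder, predictor = Encoder(), Predictor()
params = list(encoder.parameters()) + list(predictor.parameters())
opt = torch.optim.Adam(params, lr=1e-4)

# One training step on random stand-in data.
frames_t  = torch.randn(8, 3, 64, 64)  # batch of current frames
frames_t1 = torch.randn(8, 3, 64, 64)  # the frames one step later
actions   = torch.randn(8, 4)          # whatever the agent did in between

z_t  = encoder(frames_t)
z_t1 = encoder(frames_t1).detach()     # stop-gradient on the target, as in JEPA-style setups
loss = nn.functional.mse_loss(predictor(z_t, actions), z_t1)

opt.zero_grad()
loss.backward()
opt.step()
```

The point of the design is that the loss lives in latent space: the model is judged on whether it tracks how the world evolves, not on whether it reproduces pixels or text.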

EDIT: Also, IIRC from past reading, Meta has separate groups aimed at near-term commercial products (and I can very much believe that there might be plenty of room for LLMs there) and at advanced AI. It's not clear to me from the article whether he just wants more focus on advanced AI or whether he disagrees with an LLM focus in their advanced AI group.

I do think that if you're a company building a lot of parallel compute capacity now, then to make a return on it, you need to take advantage of existing or quite near-future stuff, even if it's not AGI. Doesn't make sense to build a lot of compute capacity, then spend fifteen years banging on research before you have something to utilize that capacity.

https://datacentremagazine.com/news/why-is-meta-investing-600bn-in-ai-data-centres

> Meta reveals US$600bn plan to build AI data centres, expand energy projects and fund local programmes through 2028

So Meta probably can't be doing only AGI work.

[–] tomiant@piefed.social 13 points 19 hours ago (1 children)

Look, AGI would require basically a human brain. LLMs are a very specific subset, mimicking one (important) part of the brain: our language module. There's more, but I got interrupted by a drunk guy who needs my attention. I'll be back.

[–] krooklochurm@lemmy.ca 6 points 13 hours ago (1 children)

WHAT HAPPENED WITH THE DRUNK DUDE?

[–] tomiant@piefed.social 1 points 1 hour ago

He offered me a job.

[–] just_another_person@lemmy.world 7 points 19 hours ago (2 children)

LLMs are just fast sorting and probability; they have no way to ever develop novel ideas or comprehension.

The system he's talking about is more about using NNL, which builds new relationships to things that persist. It's deferential relationship learning and data-path building. It doesn't exist yet, so if he has some ideas, it may be interesting. It's also more likely to be the thing that kills all humans.
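
As a gloss on the "probability" point above: an LLM's decoding step really is just a softmax over scores followed by sampling. A minimal sketch in PyTorch, with a made-up six-word vocabulary and stand-in logits in place of a real model's output:

```python
import torch

# One decoding step of an LLM, reduced to its core: scores -> probabilities -> sample.
vocab = ["the", "cat", "sat", "on", "mat", "."]          # toy vocabulary
logits = torch.tensor([1.2, 0.3, 2.5, -0.7, 0.9, 0.1])  # stand-in model output

probs = torch.softmax(logits, dim=-1)         # normalize scores into a distribution
next_id = torch.multinomial(probs, 1).item()  # sample one token id from it
print(vocab[next_id])
```

Everything the model emits comes from repeating this loop token by token; the open question is whether that process can amount to comprehension.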

[–] avidamoeba@lemmy.ca 5 points 18 hours ago

I saw a short interview with him by France 24, and he mainly said he thinks the current direction of the research teams at Meta is wrong. He contrasted a top-down, push-to-deliver org with a long-leash one that leaves the researchers to experiment with things. He said Meta shifted from the latter to the former, and he doesn't agree with the approach.

[–] UnderpantsWeevil@lemmy.world 4 points 18 hours ago

> Sounds reasonable.

Does it, though? Feels like we're just rewriting the sales manual without thinking about what "learning from video" would actually entail.

> Doesn’t make sense to build a lot of compute capacity, then spend fifteen years banging on research before you have something to utilize that capacity.

There's an old book from back in 2008 - Killing Sacred Cows: Overcoming the Financial Myths That Are Destroying Your Prosperity - that a lot of the modern Techbros took perhaps too closely to heart. It posited that chasing the next generation of technological advancement was more important than keeping your existing revenue streams functional. And you really should kill the golden goose if it means you've got a shot at a new one in the near future.

What these Tech Companies are chasing is the Next Big Thing, even when they don't really understand what that is. And they're so blindly devoted to advancing the technological curve that they really will blow a trillion dollars (mostly of other people's money) on whatever it is they think that might be.

The real problem is that these guys are, largely, uncreative, incurious, and not particularly intelligent. So they leap on fads rather than pursuing meaningful Blue Sky Research. And that gives us this endless recycling of Sci-Fi tropes as a stand-in for material investments in productive next-generation infrastructure.

[–] chrash0@lemmy.world 1 points 16 hours ago

he’s been salty about this for years now, frustrated at companies throwing training and compute scaling at LLMs hoping for another emergent breakthrough like GPT-3. i believe he’s the one that really tried to push the Llama models toward multimodality.