this post was submitted on 18 May 2025
137 points (99.3% liked)

movies

[–] xyzzy@lemm.ee 11 points 2 days ago* (last edited 2 days ago) (1 children)

> this is currently the worst it's going to be

Yes, this is a favorite line from the industry, which assumes the trend line continues uninterrupted into the future. But consider a counter-future: what if AI plateaus?

What if it doesn't get much better than it already is except around the edges, and the next breakthrough is two decades away? Companies have exhausted training data and data center capacity in the quest to keep the trend line on its previous trajectory. Yes, they're building new capacity, but no one is making any money on this except Nvidia.

LLMs haven't seen any significant improvement in a couple of years. Image generation has improved, but at a much slower pace. Video is no longer Will Smith eating spaghetti, but there's a long, long valley between where we are today and convincing, photorealistic, extended scenes that can be controlled at a fine level. Hence the challenge I posed.