this post was submitted on 20 Oct 2025
27 points (93.5% liked)

LocalLLaMA

Welcome to LocalLLaMA! Here we discuss running and developing machine learning models at home. Let's explore cutting-edge open-source neural network technology together.

Get support from the community! Ask questions, share prompts, discuss benchmarks, get hyped at the latest and greatest model releases! Enjoy talking about our awesome hobby.

As ambassadors of the self-hosting machine learning community, we strive to support each other and share our enthusiasm in a positive constructive way.

Rules:

Rule 1 - No harassment or personal character attacks on community members, i.e. no name-calling, no generalizing about entire groups of people that make up our community, no baseless personal insults.

Rule 2 - No comparing artificial intelligence/machine learning models to cryptocurrency, i.e. no comparing the usefulness of models to that of NFTs, no claiming the resource usage required to train a model is anything close to that of maintaining a blockchain/mining crypto, no implying it's just a fad/bubble that will leave people with nothing of value when it bursts.

Rule 3 - No comparing artificial intelligence/machine learning to simple text prediction algorithms, i.e. statements such as "LLMs are basically just simple text prediction like what your phone keyboard autocorrect uses, and they're still using the same algorithms since <over 10 years ago>."

Rule 4 - No implying that models are devoid of purpose or potential for enriching people's lives.

[–] mindbleach@sh.itjust.works 5 points 1 month ago (2 children)

This is the real future of neural networks. Trained on supercomputers - runs on a Game Boy. Even in comically large models, the majority of weights are negligible, and local video generation will eventually be taken for granted.
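
For a concrete sense of the mechanics, here's a minimal magnitude-pruning sketch in PyTorch (the toy layer, random weights, and 50% threshold are illustrative assumptions, not numbers from any real model):

```python
# Minimal sketch of magnitude pruning with PyTorch. The toy layer and 50%
# threshold are illustrative assumptions; trained networks have heavy-tailed
# weight distributions, so dropping the smallest weights costs them even less
# than it costs this random toy layer.
import torch
import torch.nn as nn

torch.manual_seed(0)
layer = nn.Linear(512, 512)          # stand-in for one weight matrix in a big model
x = torch.randn(8, 512)

with torch.no_grad():
    dense_out = layer(x)

    # Zero out the half of the weights with the smallest magnitude.
    w = layer.weight
    mask = (w.abs() >= w.abs().median()).to(w.dtype)
    w.mul_(mask)

    pruned_out = layer(x)

kept = mask.mean().item()
drift = ((dense_out - pruned_out).abs().mean() / dense_out.abs().mean()).item()
print(f"weights kept: {kept:.0%}, relative change in output: {drift:.3f}")
```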

Probably after the crash. Let's not pretend that's far off. The big players in this industry have frankly silly expectations. Ballooning these projects to the largest sizes money can buy has been illustrative, but DeepSeek already proved LLMs can be dirt cheap. Video's more demanding... but what you get out of ten billion weights nowadays is drastically different from six months ago. A year ago, video models barely existed. A year from now, the push toward training on less and running on less will presumably be a lot more pressing.

[–] ThorrJo@lemmy.sdf.org 2 points 1 month ago

I'm very interested in this approach because I'm heavily constrained by money. So I am gonna be looking (in non-appliance contexts) to develop workflows where genAI can be useful when limited to small models running on constrained hardware. I suspect some creativity can yield useful tools within these limits, but I am just starting out.
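
For example, something like llama-cpp-python with a small quantized GGUF model already runs on CPU-only hardware; a minimal sketch (the model path and settings below are placeholders, not recommendations):

```python
# Minimal sketch, assuming llama-cpp-python is installed and a small quantized
# GGUF model has been downloaded; the path below is a placeholder, not a real file.
from llama_cpp import Llama

llm = Llama(
    model_path="./models/small-model-q4_k_m.gguf",  # placeholder: any small quantized model
    n_ctx=2048,    # modest context window to keep RAM usage down
    n_threads=4,   # match the CPU cores actually available
)

out = llm(
    "List three tasks a small local language model handles well.",
    max_tokens=128,
    temperature=0.2,
)
print(out["choices"][0]["text"].strip())
```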

[–] jwmgregory@lemmy.dbzer0.com 1 points 1 month ago* (last edited 1 month ago) (1 children)

The bubble popping will be a good thing. Henry Ford didn’t come around until after the electrification bubble popped, after all. Amazon didn’t hit its stride until after the dotcom bubble burst.

It’s after bubbles burst, when the genuinely useful things are most salient and apparent, that the true innovations happen.

[–] mindbleach@sh.itjust.works 1 points 1 month ago

The bubble continuing ensures the current paradigm soldiers on, meaning hideously expensive projects shove local models into people's hands for free, because everyone else is doing that.

And once it bursts, there's gonna be an insulating layer of dipshits repeating "guess it was nothing!" over the next decade of incremental wizardry. For now, tolerating the techbro cult's grand promises of obvious bullshit means the unwashed masses are interpersonally receptive to cool things happening.

Already the big boys have pivoted toward efficiency instead of raw speed at all costs. The closer they get to a toaster matching current tech with a model trained for five bucks, the better. I'd love for VCs to burn money on experimentation instead of scale.