this post was submitted on 16 Feb 2026
810 points (99.3% liked)
Technology
I'm not sure whether this is what you're saying, so apologies if I'm misreading you, but ECC DIMMs aren't principally what's being produced here.
I suppose a small share of AI-related sales might go to ECC DDR5 DIMMs, since some of that hardware will probably use them, but what they're really buying in bulk is high-bandwidth memory (HBM), which is non-modular and connected directly to the parallel compute hardware.
I've been in a few discussions about whether it might be possible to use, say, discarded PCIe-based H100s as swap (there are existing, if imperfect, Linux projects for that) or directly as main memory (apparently there are projects that do this with some older video cards using Linux's HMM). Either way there's a latency cost, because accesses have to traverse the PCIe bus: it should be faster than swap, but still a performance hit relative to a regular old DIMM, even if the throughput is reasonable.
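For the swap route, the Linux side is just the ordinary swap-file procedure pointed at whatever space the GPU-memory project exposes. A rough sketch, assuming an experimental FUSE-style tool (vramfs is one real project in this space) that mounts VRAM as a filesystem; paths and sizes here are purely illustrative:

```shell
# Sketch only: assumes a tool like vramfs (a FUSE filesystem backed by
# GPU memory) is installed. Paths and sizes are illustrative.
mkdir -p /tmp/vram
# vramfs /tmp/vram 4G             # mount 4 GiB of VRAM at /tmp/vram (assumed tool)

# From here it's the standard Linux swap-file setup:
dd if=/dev/zero of=/tmp/vram/swapfile bs=1M count=16 status=none
chmod 600 /tmp/vram/swapfile
mkswap /tmp/vram/swapfile         # format the file as swap space
# sudo swapon /tmp/vram/swapfile  # enable it (needs root; the kernel may
#                                 # refuse swap on FUSE-backed filesystems)
```

That last caveat is part of why the existing projects are described as imperfect: swapping requires filesystem support the kernel can see through, which FUSE mounts don't always provide.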
It's also possible that one could use the hardware as parallel compute hardware, I guess, but the power and cooling demands will probably be problematic for many home users.
In fact, there have been articles about existing production capacity being converted to HBM production. A while back there was one about a relatively new factory that had been producing chips aimed at DDR4 being purchased and converted (by either Samsung or SK Hynix, I don't recall which) to making parts suitable for HBM, since that was faster than building a whole new factory from scratch.
It's possible that economies of scale will reduce the price of future hardware, if AI-driven demand is sustained (rather than being mostly a one-off buildout) and AI customers end up covering fixed costs of memory chip production that DIMM buyers previously had to bear. In the long run, that would let DIMMs be cheaper than they otherwise would be. But I don't think the financial gain for other users will principally come from throwing secondhand memory from AI companies into their traditional home systems.
Ah, thanks for the information. I was already aware most of it was going to GPU-type hardware; I just naturally assumed all those GPUs need servers with lots of RAM.