this post was submitted on 27 Feb 2026
105 points (100.0% liked)

Taalas HC1: 17,000 tokens/sec on Llama 3.1 8B vs Nvidia H200's 233 tokens/sec. 73x faster at one-tenth the power. Each chip runs ONE model, hardwired into the transistors.
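The headline multipliers follow directly from the numbers in the title. A minimal sanity check, taking the claimed figures at face value (the power and throughput numbers are the post's claims, not independently verified):

```python
# Figures as claimed in the post title, not independently verified.
HC1_TOKENS_PER_SEC = 17_000   # Taalas HC1 on Llama 3.1 8B (claimed)
H200_TOKENS_PER_SEC = 233     # Nvidia H200 on the same model (claimed)
POWER_RATIO = 1 / 10          # HC1 power draw relative to the H200 (claimed)

# Raw throughput ratio: ~73x, matching the title.
speedup = HC1_TOKENS_PER_SEC / H200_TOKENS_PER_SEC

# If the chip is also drawing one-tenth the power, the
# tokens-per-watt advantage compounds to roughly 730x.
perf_per_watt_gain = speedup / POWER_RATIO

print(f"Throughput speedup: {speedup:.0f}x")
print(f"Tokens-per-watt advantage: {perf_per_watt_gain:.0f}x")
```

The efficiency figure is the interesting one: a 73x speedup at one-tenth the power implies roughly a 730x perf-per-watt gap, which is the kind of margin you'd expect from hardwiring one model into silicon versus running it on general-purpose hardware.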

[–] XLE@piefed.social 3 points 3 days ago (1 children)

Facedeer is just a pro-AI concern troll from Reddit.

He kicked off his part of the thread by complaining about people, and then speculating that maybe this chip could do a thing without any evidence.

[–] MagicShel@lemmy.zip 2 points 3 days ago (1 children)

I'm middle of the road on AI. I think it has uses. I also think this technology is a dead end (i.e. this is not going to lead to AGI), and had people understood from the start the limitations of it, investment would've been more modest and cautious. It's a great technology. You can do cool things with it. But it will never be able to significantly replace humans. However, it may be really painful watching the investor class wrestle with that reality.

I think the chip does have uses, and I think a chip built even around today's models would stay useful for a long time. But the number of scenarios where it is unequivocally the better option is smaller than AI bros (I draw a line between an enthusiast like myself and a bro who is all-in and won't hear reason) want to think.

Last point: in theory this chip is great. Based on my reading, it's a substitute for an H100, a data center GPU (APU?). This isn't going into smart mines or drones, and probably not cars, not without more development. So while there is potential here, none of those use cases are practical yet. This is a way for OAI or whoever to run their current models, just the way they are, for cheaper, but with a new-hardware cost every time they upgrade the model. It isn't going to matter for the rest of us for a while.

[–] TehPers@beehaw.org 2 points 1 day ago

had people understood from the start the limitations of it, investment would've been more modest and cautious

People did understand from the start. Those who do the investing just didn't listen, or they had a different motive. These days it's impossible to tell which.

And by "people" I'm not referring to random people, but to those who have been closer than most to the development of these models. There has been an unbelievable amount of research on everything from the effectiveness of specific models in niche fields to the viability of using an LLM as the backend for a production service. Again, no amount of negative feedback going up the chain has made a difference in direction, so that leaves only a few explanations for why the investment continues to be so high.