this post was submitted on 09 Apr 2025
2 points (60.0% liked)

Hardware

top 2 comments
[–] [email protected] 9 points 1 week ago (1 children)

There are numerous ways to measure AI throughput, making it difficult to compare chips. Google is using FP8 precision as its benchmark for the new TPU, but it's comparing it to some systems, like the El Capitan supercomputer, that don't support FP8 in hardware. So you should take its claim that Ironwood "pods" are 24 times faster than comparable segments of the world's most powerful supercomputer with a grain of salt.

This is not a grain of salt. This is premeditated lying.
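For a sense of where that "24 times" likely comes from (these are the figures being reported elsewhere, not numbers from the article itself): an Ironwood pod is quoted at roughly 42.5 exaflops of FP8, while El Capitan's Linpack result is about 1.7 exaflops of FP64, and 42.5 / 1.74 ≈ 24. So the headline ratio counts 8-bit operations against 64-bit ones, which is not the same unit of work at all.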

Honestly the whole article reminds me of this:

https://www.youtube.com/watch?v=GFRzIOna2oQ

[–] [email protected] 4 points 1 week ago* (last edited 1 week ago)

Especially given that these supercomputers, while not supporting FP8, do support AVX-512. And I'm almost positive that using INT8 or INT16 you can achieve quite good results populating those vectors.
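Something like this is what I have in mind. A minimal sketch (assuming a CPU with AVX512_VNNI, compiled with something like -mavx512f -mavx512vnni; the helper name is mine) of an INT8 dot product built on the _mm512_dpbusd_epi32 / VPDPBUSD instruction:

```c
#include <immintrin.h>
#include <stdint.h>
#include <stdio.h>

/* Dot product over n bytes (n a multiple of 64): a unsigned, b signed,
 * accumulated in 32-bit lanes -- exactly what VPDPBUSD provides. */
static int32_t dot_u8_s8(const uint8_t *a, const int8_t *b, size_t n) {
    __m512i acc = _mm512_setzero_si512();
    for (size_t i = 0; i < n; i += 64) {
        __m512i va = _mm512_loadu_si512((const void *)(a + i));
        __m512i vb = _mm512_loadu_si512((const void *)(b + i));
        /* 64 u8*s8 products, summed in groups of 4 into 16 int32 lanes. */
        acc = _mm512_dpbusd_epi32(acc, va, vb);
    }
    return _mm512_reduce_add_epi32(acc); /* horizontal sum of the 16 lanes */
}

int main(void) {
    uint8_t a[64];
    int8_t  b[64];
    for (int i = 0; i < 64; i++) { a[i] = 2; b[i] = 3; }
    printf("%d\n", dot_u8_s8(a, b, 64)); /* 64 * 2 * 3 = 384 */
    return 0;
}
```

One VPDPBUSD does 64 8-bit multiply-accumulates per 512-bit register, so INT8 on AVX-512 is nothing to sneer at, even if it isn't an FP8 systolic array.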