GamingChairModel

joined 2 years ago
[–] [email protected] 2 points 4 days ago

Do you have a source for AMD chips being especially energy efficient?

I remember reviews of the HX 370 commenting on that. The problem is that chip was produced on TSMC's N4P node, which has no direct Apple comparator (the M2 was on N5P and the M3 was on N3B). The Ryzen 7 7840U was on N4, a year behind that. It just shows that AMD doesn't land on a given TSMC node until a year or two after Apple does.

Still, I haven't seen anything really put these chips through their paces and actually measure real-world energy usage across a variety of benchmarks. And benchmarks themselves only correlate with specific ways computers get used, and aren't necessarily supported on every hardware/OS combination, so it's hard to get a real comparison.
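If anyone wants to attempt that measurement themselves, here's a rough sketch of the idea on Linux, reading the kernel's powercap (RAPL) energy counter before and after a benchmark run. The counter path and the placeholder benchmark command are assumptions about a particular machine, not a portable tool:

```python
# Rough sketch: package energy across one benchmark run, via the Linux
# powercap (RAPL) interface. ASSUMPTION: the machine exposes the package
# domain at the path below (recent Intel and AMD chips generally do);
# reading it may require root depending on kernel settings.

import subprocess
import time

RAPL = "/sys/class/powercap/intel-rapl:0/energy_uj"  # package energy, microjoules
BENCH = ["sleep", "10"]  # placeholder: substitute your actual benchmark command

def read_uj() -> int:
    # Cumulative energy counter; it wraps at max_energy_range_uj,
    # which we ignore here for short runs.
    with open(RAPL) as f:
        return int(f.read())

start_uj, start_t = read_uj(), time.monotonic()
subprocess.run(BENCH, check=True)
end_uj, end_t = read_uj(), time.monotonic()

joules = (end_uj - start_uj) / 1e6
seconds = end_t - start_t
print(f"{joules:.1f} J over {seconds:.1f} s = {joules / seconds:.2f} W average")
```

Even that only captures the CPU package, not the display, RAM, or VRMs, which is part of why whole-system comparisons are so hard.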

SoCs are inherently more energy efficient

I agree, but that's a separate issue from the instruction set. The AMD HX 370 is an SoC (well, technically an SiP, since the pieces are all packaged together but not actually printed on the same piece of silicon).

And in terms of actual chip architectures, as you allude to, the design dictates how specific instructions are processed, which is why the RISC-versus-CISC distinction is basically obsolete. Chip designers make engineering choices about how much silicon area to devote to specific functions, based on their modeling of how the chip might be used: multithreading, different cores optimized for efficiency or performance, speculative execution, specialized blocks for hardware-accelerated video or cryptography or AI or whatever else, and then decide how all of that fits into the broader chip design.

Ultimately, I'd think the main reason something like x86 would die off is licensing, not anything inherent to the instruction set architecture.

[–] [email protected] 2 points 4 days ago (2 children)

it's kinda undeniable that this is where the market is going. It is far more energy efficient than an Intel or AMD x86 CPU and holds up just fine.

Is that actually true, when comparing node for node?

In the mobile and tablet space, Apple's A-series chips have always been a generation ahead of Qualcomm's Snapdragon chips in performance per watt, and Samsung's Exynos has always been even further behind. That's obviously not an instruction set issue, since all three lines are on ARM.

Much of Apple's advantage has been a willingness to pay for early runs on each new TSMC node, and a willingness to dedicate a lot of square millimeters of silicon to their gigantic chips.

But when comparing node for node, last I checked, AMD's lower-power chips designed for laptop TDPs have similar performance and power draw to the Apple chips on that same TSMC node.

[–] [email protected] 16 points 4 days ago (1 children)

The person who wrote it has been gone for like four years

Four years? You gotta pump those numbers up. Those are rookie numbers.

[–] [email protected] 4 points 5 days ago (1 children)

Rechargeable batteries weren't really a thing in the '70s. For consumer electrical devices, batteries were single-use, and anything that plugged in needed to stay plugged in while in operation.

Big advances in battery chemistry made things like cordless phones feasible by the '80s, and all sorts of rechargeable devices by the '90s.

[–] [email protected] 2 points 5 days ago (1 children)

(latest version "Froyo")

This is Gingerbread erasure!

[–] [email protected] 28 points 1 week ago (1 children)

Sorry, best I can do is a programmable turtle that moves around as a pen.
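(Which, for the record, Python still ships in its standard library. A minimal doodle, assuming a desktop Python where Tk is available:)

```python
import turtle  # Logo-style turtle graphics, still in Python's stdlib

t = turtle.Turtle()
for _ in range(4):   # trace a square, pen down by default
    t.forward(100)
    t.left(90)
turtle.done()        # keep the window open until it's closed
```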

[–] [email protected] 7 points 2 weeks ago (1 children)

If the logic gates can feed back onto themselves, you can build a simple flip-flop that stores a bit.
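Here's a minimal sketch of that feedback idea in Python: an SR latch built from two cross-coupled NOR gates, re-evaluated until the outputs settle. The function names are just illustrative, not any real library:

```python
# Minimal sketch: an SR latch from two cross-coupled NOR gates.
# Feedback is modeled by re-evaluating the gates until the outputs
# stop changing, i.e., until the latch settles.

def nor(a: int, b: int) -> int:
    return 0 if (a or b) else 1

def sr_latch(s: int, r: int, q: int = 0, qn: int = 1) -> tuple[int, int]:
    # q/qn carry the previous state; that's what makes this storage.
    # With s=r=0 the latch simply holds whatever was set before.
    while True:
        new_q = nor(r, qn)       # Q  = NOR(R, Q')
        new_qn = nor(s, new_q)   # Q' = NOR(S, Q); updating one gate at a
        if (new_q, new_qn) == (q, qn):  # time stands in for gate delay
            return q, qn
        q, qn = new_q, new_qn

q, qn = sr_latch(s=1, r=0)              # set  -> Q = 1
q, qn = sr_latch(s=0, r=0, q=q, qn=qn)  # hold -> Q stays 1
print(q)  # 1
```

Chain enough of those together with a clock and you've got registers and, eventually, memory.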

[–] [email protected] 1 point 2 weeks ago

Trick the algorithm by reporting all MMA content too.

[–] [email protected] 0 points 3 weeks ago (1 children)

What if I told you that there are really stupid comments on Lemmy as well

[–] [email protected] 2 points 1 month ago (1 children)

That's why I think the history of the U.S. phone system is so important. AT&T had to be dragged into interoperability by government regulation nearly every step of the way, but it ended up having to invent and publish the technical standards that made federation/interoperability possible once government agencies started mandating them. The supposed technical infeasibility of opening up a proprietary network has been overcome before, with far more complexity at the lower OSI layers, down to defining new open standards for the physical layer of actual copper lines and switches.

[–] [email protected] 33 points 1 month ago (3 children)

I'd argue that telephones are the original federated service. There were fits and starts in getting the proprietary Bell/AT&T network to play nice with devices or lines it didn't operate, but the initial system for long-distance calling under the North American Numbering Plan made it possible for an AT&T customer to dial non-AT&T customers by the early 1950s, and laid the groundwork for the technical feasibility of breaking up the AT&T/Bell monopoly.

We didn't call it spam then, but unsolicited phone calls have always been a problem.

[–] [email protected] 2 points 1 month ago

I hear it's amazing when the famous purple stuffed worm in flap-jaw space with the tuning fork does a raw blink on Hari Kiri Rock.
