The information will be evenly distributed upon its surface, and some believe that one day it will be radiated back out into the rest of the system.
That's a horrifying concept. Better not think about it.
while :; do cat /proc/sys/kernel/random/uuid > /dev/null; done
edit: on all cores for maximum "efficiency"
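A minimal sketch of the all-cores version, assuming nproc from GNU coreutils is available:

for i in $(seq "$(nproc)"); do
    ( while :; do cat /proc/sys/kernel/random/uuid > /dev/null; done ) &   # one busy loop per core
done
wait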
no.
That reminds me of the CPU stress test I ran many years ago.
dd if=/dev/random of=/dev/null
If you have 8 cores, just open 8 terminals and run that command in each of them.
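Roughly the same thing without opening terminals by hand might look like this (just a sketch; nproc is a GNU coreutils assumption, and pkill -x dd will stop the workers):

for i in $(seq "$(nproc)"); do
    dd if=/dev/random of=/dev/null &   # one reader per core, in the background
done
wait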
/dev/urandom should stress the CPU more. /dev/random can be entropy-limited.
Oh yeah. This looks like a much better way to do it. My solution is pretty bare-bones by comparison.
the advantage of yours is that you can actually see the performance number afterwards.
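For what it's worth, GNU dd also prints its statistics mid-run if you send it SIGUSR1, so you don't even have to stop it to see the numbers:

pkill -USR1 -x dd   # every running dd reports records copied and throughput to its stderr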
Can you guarantee that each process will run on its own core?
Absolutely not, quite the opposite actually. However, the end result is close to 100% CPU load, which is good enough for some purposes. Say you want to test the performance of your CPU cooler or the stability of your overclock; this should be good enough. There are also dedicated tools for people with more advanced needs.
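If someone does want one process per core, taskset from util-linux can pin them explicitly; a hedged sketch:

for cpu in $(seq 0 $(( $(nproc) - 1 ))); do
    taskset -c "$cpu" dd if=/dev/urandom of=/dev/null &   # pin this dd to core $cpu
done
wait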