LeFantome

joined 2 years ago
[–] LeFantome@programming.dev 1 points 9 months ago

Valkey is already better

[–] LeFantome@programming.dev 8 points 9 months ago* (last edited 9 months ago)

Many of us would not be able to declare our love for the US—certainly not convincingly. Being put into a position to do so could lead to an escalation.

This is exactly the kind of risk that some of us hope to avoid. It is totally wrong to say that there is no risk even if you “keep your shit together”.

[–] LeFantome@programming.dev 35 points 9 months ago

The entire point of a Memorial Day or a Remembrance Day is to honour the sacrifice and to lament that it was necessary. Simpleton Trump thinks it should be about celebrating victory. It is hard not to feel bad for somebody that is so incomplete as a human being.

[–] LeFantome@programming.dev 6 points 9 months ago (1 children)

Trump is hoping to have a giant military parade to commemorate The Great Patriotic War. I am surprised that he is not celebrating it on May 9th.

[–] LeFantome@programming.dev -2 points 9 months ago (1 children)

MIT - only good for tiny weekend projects like Xorg, Wayland, Mesa, Godot, Jenkins, MUSL, Node.js, Angular, Vue.js, React, Rust, Julia, F#, Rails, PyPy, Redox, and the Haiku Operating System.

AGPL - good for serious projects that you want to be super successful. Widely used software that started off as AGPL includes………. uhh……..wait…….ummm……. Lemmy and Mastodon I guess?

[–] LeFantome@programming.dev 4 points 9 months ago

But then they would lose access to his AI model that is worse than the AI models they already have.

[–] LeFantome@programming.dev 2 points 9 months ago

The hardware being emulated here is the CPU.

[–] LeFantome@programming.dev 2 points 9 months ago

That was a fascinating read. Thank you. I suppose it is possible that there could be a RISC-V extension for this.

As the article states though, this is an ancient x86 artifact not often used in modern x86-64 software. If the code generated by GCC and Clang does not create such code, it may not exist in Linux binaries in practice. Or perhaps the decider is if such code is found inside Glibc or MUSL.

As this is a JIT and not an AOT compiler, it cannot analyze the whole program ahead of time to prove the flags are unused and optimize them away. I expect the default behaviour is going to be not to set these flags, on the assumption that modern software will not use them. If you just skip these flags, the generated RISC-V code stays fast.

You could have a command-line switch that enables this flag behaviour for software that needs it (with a big performance hit). This switch could take advantage of the RISC-V extension, if it exists, to speed things up.

Outside of this niche functionality, though, it seems that the x86-64 instructions map to RISC-V well. The extensions actually needed for good performance are things like the vector extension and bit manipulation.

Linux benefits from a different kind of integration. The article states that Apple is able to pull off this optimization because they create both the translation software and the silicon. But the reason they need to worry about these obscure instructions is that the x86-64 software targeting macOS arrives as compiled binaries built using who knows what technology or toolchain. The application bundle model for macOS applications encourages ISVs to bundle in whatever crazy dependencies they like. This could include hand-rolled assembler written decades ago. To achieve reliable results, Apple has to emulate these corner cases.

On Linux, the model is that everybody uses the same small set of compilers to dynamically link against the same C runtimes and supporting libraries (things like OpenSSL or FreeType). Even though we distribute Linux binaries, they are typically built from source that is portable across multiple architectures.

If GCC and Glibc or Clang and MUSL do not output certain instructions, a Linux x86-64 emulator can assume a happy path that does not bother emulating them either.

Ironically, a weakness in my assumptions here could be games. What happens when the x86-64 code we want to emulate is actually Windows code running on Linux? Now we are back to not knowing what crazy toolchain was used to generate the x86-64 and what instructions and behaviour it may depend on.

[–] LeFantome@programming.dev 2 points 9 months ago

This is a Linux x86-64 to Linux RISC-V emulator. It will not run non-Linux code, and it will not run outside of Linux.

The Linux system call interface is the same on both sides so, when it encounters a Linux system call in the x86-64 code, it makes that call directly into the RISC-V host kernel. It is only emulating the user space. This makes it faster.

[–] LeFantome@programming.dev 1 points 9 months ago* (last edited 9 months ago)

In Java or .NET, the JIT is still going from a higher level abstraction to a lower one. You JIT from CIL (common intermediate language) or Java Bytecode down to native machine code.

When you convert from a high level language to a low level language, we call it compiling.

Here, you are translating the native machine code of one architecture to the native machine code of another (x86-64 to RISC-V).

When you run code designed for one platform on another platform, we call it emulation.

JIT means Just-in-Time which just means it happens when you “execute” the code instead of Ahead-of-Time.

In .NET, you have a JIT compiler. Here, you have a JIT emulator.

A JIT is faster than an interpreter. Modern web browsers JIT JavaScript to make it faster.

[–] LeFantome@programming.dev 18 points 9 months ago (1 children)

I would rather see them fail in the open market. Things are going well.
