FizzyOrange

joined 2 years ago
[–] FizzyOrange@programming.dev 2 points 1 week ago

Ah yeah I agree. Misread your comment.

[–] FizzyOrange@programming.dev 1 points 1 week ago (3 children)

I disagree. You can write a lot of high-quality Python code (yeah, it exists) before you need to use inheritance. If you're reaching for inheritance as the solution to all complexity, GoF-style, then you're doing it wrong.

It's an occasionally useful tool that has its place, not something you should instinctively reach for.

[–] FizzyOrange@programming.dev 23 points 1 week ago

WebP was the first widely supported format to support lossy transparency. It's worth it for that alone.

[–] FizzyOrange@programming.dev 1 points 1 week ago

It does kind of feel like they could just set up a Signal account?

[–] FizzyOrange@programming.dev 2 points 1 week ago

They mean measure first, then optimize.

This is also bad advice. In fact I would bet money that nobody who says that actually always follows it.

Really there are two things that can happen:

  1. You are trying to optimise performance. In this case you obviously measure using a profiler because that's by far the easiest way to find places that are slow in a program. It's not the only way though! This only really works for micro optimisations - you can't profile your way to architectural improvements. Nicholas Nethercote's posts about speeding up the Rust compiler are a great example of this.

  2. Writing new code. Almost nobody measures code while they're writing it. At best you'll have a CI benchmark (the Rust compiler has this). But while you're actually writing the code it's mostly fine just to use your intuition. Preallocate vectors. Don't write O(N^2) code. Use a HashSet where it fits, etc. (see the quick sketch below). There are plenty of things good programmers can be confident enough are the right way to do it that you don't need to constantly second-guess yourself.
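
A throwaway Rust sketch of the kind of intuition-level defaults I mean (the function and names are purely illustrative, not from any real codebase):

```rust
use std::collections::HashSet;

// Collect the items of `b` that also appear in `a` - no profiler needed to
// pick the sensible defaults here.
fn common_items(a: &[u32], b: &[u32]) -> Vec<u32> {
    // Preallocate: the result can never be larger than `b`.
    let mut out = Vec::with_capacity(b.len());

    // Build a HashSet once instead of scanning `a` for every element of `b`,
    // which would be the O(N^2) version.
    let seen: HashSet<u32> = a.iter().copied().collect();

    for &x in b {
        if seen.contains(&x) {
            out.push(x);
        }
    }
    out
}
```

None of that needed measuring; it's just the default way to write it.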

[–] FizzyOrange@programming.dev 3 points 1 week ago

Do you realize how old assembly language is?

Do you? These instructions were created in 2011.

It predates hard disks by ten years and coincided with the invention of the transistor.

I'm not sure what the very first assembly language has to do with RISC-V assembly?

[–] FizzyOrange@programming.dev 3 points 1 week ago

flawed tests are worse than no tests

I never said you should use flawed tests. You ask AI to write some tests. You READ THEM and probably tweak them a little. You think "this test is basic but better than nothing, and it took me 30 seconds." You commit it.
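
The kind of thing I mean - a trivial Rust test you'd skim, maybe tweak, and commit in under a minute (the function under test is made up for illustration):

```rust
// Hypothetical function under test - stands in for whatever you just wrote.
fn slugify(s: &str) -> String {
    s.trim().to_lowercase().replace(' ', "-")
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn slugify_basic_cases() {
        // Basic, obvious assertions - not exhaustive, but better than no tests at all.
        assert_eq!(slugify("Hello World"), "hello-world");
        assert_eq!(slugify("  spaces  "), "spaces");
        assert_eq!(slugify(""), "");
    }
}
```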

[–] FizzyOrange@programming.dev 3 points 1 week ago

It absolutely is a challenge. Before AI there weren't any other systems that could do crappy automated testing.

I dunno what you mean by "it's not AI". You write the tests using AI. It's AI.

[–] FizzyOrange@programming.dev 4 points 1 week ago (6 children)

AI is good at more than just generating stubs, filling in enum fields, etc. I wouldn't say it's good at stuff far beyond "boilerplate" either - it's good at the stuff that is not difficult but also isn't so regular that it's possible to automate using traditional tools like IDEs.

Writing tests is a good example. It's not great at writing tests, but it is definitely better than the average developer when you take the probability of them writing tests in the first place into account.

Another example would be writing good error context messages (e.g. .with_context() in Rust). Again, I could write better ones than it does. But like most developers there's a pretty high chance that I won't bother at all. You also can't automate this with an IDE.
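
To be concrete, the kind of thing I mean (this uses the anyhow crate's version of .with_context(); the file path is just illustrative):

```rust
use anyhow::{Context, Result};
use std::fs;

// Read a config file, attaching context so the eventual error says *which*
// file failed instead of a bare "No such file or directory".
fn load_config(path: &str) -> Result<String> {
    fs::read_to_string(path)
        .with_context(|| format!("failed to read config file `{}`", path))
}
```

Takes a few seconds to write (or for the AI to write), and it's exactly the kind of thing most of us skip.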

I'm not saying you have to use AI, but if you don't you're pointlessly slowing yourself down. That probably won't matter to lots of people - I mean I still see people wasting time searching for symbols instead of just using a proper IDE with go-to-definition.

[–] FizzyOrange@programming.dev 9 points 1 week ago* (last edited 1 week ago) (3 children)

Assembly is very simple (at least RISC-V assembly, which is what I mostly work with) but also very tedious to read. It doesn't help that the people who choose the instruction mnemonics have extremely poor taste - e.g. lb, lh, lw, ld instead of load8, load16, load32, load64. Or j instead of jump. Who needs to save characters that badly?

The over-abbreviation is some kind of weird flaw that hardware guys all have. I wondered if it comes from labelling pins on PCB silkscreens (MISO, CLK etc)... Or maybe they just have bad taste.

I once worked on a chip that had nested acronyms.

[–] FizzyOrange@programming.dev 41 points 1 week ago

I don't think that's a surprise to anyone who has actually used them for more than a few seconds.

[–] FizzyOrange@programming.dev 1 points 1 week ago

The evidence is that I have tried writing Python/JavaScript with/without type hints and the difference was so stark that there's really no doubt in my mind.

You can say "well I don't believe you".. in which case I'd encourage you to try it yourself (using a proper IDE and use Pyright; not Mypy)... But you can equally say "well I don't believe you" to scientific studies so it's not fundamentally different. There are plenty of scientific studies I don't believe and didn't believe (e.g. power poses).
