fruitcantfly

joined 2 years ago
[–] fruitcantfly@programming.dev 1 points 1 month ago

That bug does sound bad, but it is not clear to me how a BTRFS-specific bug relates to recovery (or backup) supposedly being more difficult when using whole-disk encryption with LUKS. It seems like an entirely orthogonal issue to me

[–] fruitcantfly@programming.dev 7 points 1 month ago (2 children)

What makes recovery and backup a nightmare to you?

I've been running full-disk encryption for many years at this point, and recovery in case of problems with the kernel, bootloader, or anything else that renders my system inoperable is the same as before I started using full-disk encryption:

I boot up a live CD and then fix the problem. The only added step is unlocking my encrypted drive(s), but these days that typically just involves clicking on the drive in the file manager and entering my password. I don't even have to drop into a console for that.
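
And on the rare occasion that the file manager route isn't available, the manual equivalent is only a few commands. A minimal sketch, assuming a live environment and an encrypted root partition at /dev/sda2 (the device and mapping names are placeholders):

    # Unlock the LUKS container, giving it a name under /dev/mapper
    cryptsetup open /dev/sda2 cryptroot
    mount /dev/mapper/cryptroot /mnt
    # ... fix the problem, e.g. chroot in and reinstall the bootloader ...
    umount /mnt
    cryptsetup close cryptroot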

I am also not sure why backups would be any different. Are you using something that images entire devices?

[–] fruitcantfly@programming.dev 3 points 1 month ago* (last edited 1 month ago) (1 children)

Astral clearly are using semantic versioning, as should be obvious if you read the spec you linked.

In fact, one of the examples listed in that spec is 1.0.0-alpha.1.

ETA: It should also be noted that ty is a Rust project and follows the standards for versioning in that language: https://doc.rust-lang.org/cargo/reference/manifest.html#the-version-field

[–] fruitcantfly@programming.dev 11 points 1 month ago

That's not quite true: yes, your $99 license is a lifetime license, but that license only includes 3 years' worth of updates. After that you have to pay $80 if you want another 3 years' worth of updates. Of course, the alternative is just putting up with the occasional nag, which is why I still haven't gotten around to renewing my license

[–] fruitcantfly@programming.dev 10 points 1 month ago (1 children)

I’ve started converting my ‘master’ branches to ‘main’, because my muscle memory has decided that ‘main’ is the standard name. And I don’t have strong feelings either way

[–] fruitcantfly@programming.dev 10 points 1 month ago

No gods, no masters

[–] fruitcantfly@programming.dev 5 points 1 month ago

It’s unfortunate that it has come to this, since BCacheFS seems like a promising filesystem, but it is also wholly unsurprising: Kent Overstreet seemingly has a knack for driving away people who try to work with him

[–] fruitcantfly@programming.dev 6 points 1 month ago

For example, the dd problem that prompted all this noise is that uutils was enforcing the full block parameter in slow pipe writes while GNU was not.

So, now uutils matches GNU and the “bug” is gone.

No, the issue was a genuine bug:

The fullblock option is an input flag (iflag=fullblock) that ensures dd always reads a full block's worth of data before writing it. Its absence means that dd only performs count reads, and hence might read less than blocksize × count worth of data. That is according to the documentation of every other implementation I could find (uutils currently lacks documentation), and nothing suggests that dd might silently discard the data it did read just because fullblock is absent.
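
With GNU dd you can see the difference by reading from a pipe, since reads from a pipe often return less than a full block. A quick sketch (the exact amount copied without fullblock depends on the pipe buffer and scheduling):

    # Without fullblock: dd performs exactly 10 reads, each returning at
    # most a pipe buffer's worth of data, so well under 10 MiB is copied
    yes | dd of=/dev/null bs=1M count=10
    # With fullblock: dd keeps reading until each 1 MiB block is full,
    # so exactly 10 MiB is copied
    yes | dd of=/dev/null bs=1M count=10 iflag=fullblock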

Until recently it was also an extension to the POSIX standard, with none of the tools that I am aware of behaving like uutils, but as of POSIX.1-2024 the option is described as follows (source):

iflags=fullblock
Perform as many reads as required to reach the full input block size or end of file, rather than acting on partial reads. If this operand is in effect, then the count= operand refers to the number of full input blocks rather than reads. The behavior is unspecified if iflags=fullblock is requested alongside the sync, block, or unblock conversions.

I also cannot conceive of a situation in which you would want a program like dd to silently drop data in the middle of a stream, certainly not as the default behavior, so conditioning writes on this flag didn't make any sense in the first place

[–] fruitcantfly@programming.dev 8 points 2 months ago

This is interesting, but drawing conclusions from only two measurements is not reasonable. Especially so when the time-span measured is on the order of a few ms. For example, the two instances of clang might not be running at the same clock frequency, which could easily explain away the observed difference.

Plus, you could easily generate a very large number of functions to increase the amount of work the compiler has to do. So I did just that (N = 10,000), using the function from the article, and used hyperfine to perform the actual benchmarking.
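
The test file can be generated along these lines (a sketch; I used a trivial single-expression body here, whereas the article's actual function will differ):

    # Emit 10,000 numbered copies of a simple function; swap 'auto y'
    # for 'int y' to produce the other test case
    for i in $(seq 10000); do
        printf 'int f%d(int x) { auto y = x + 1; return y; }\n' "$i"
    done > test.cpp
    hyperfine 'clang -o /dev/null test.cpp -c'

The results: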

  • With int
    Benchmark 1: clang -o /dev/null test.cpp -c
      Time (mean ± σ):      1.243 s ±  0.018 s    [User: 1.192 s, System: 0.050 s]
      Range (min … max):    1.221 s …  1.284 s    10 runs
    
  • With auto
    Benchmark 1: clang -o /dev/null test.cpp -c
      Time (mean ± σ):      1.291 s ±  0.015 s    [User: 1.238 s, System: 0.051 s]
      Range (min … max):    1.274 s …  1.320 s    10 runs
    

So if you have a file with 10,000 simple functions, then using auto instead of int increases your compile time by ~4%.

I'd worry more about the readability of auto than about the compile-time cost at that point

[–] fruitcantfly@programming.dev 4 points 2 months ago

Besides this change not breaking user space, the "don't break user space" rule has never meant that the kernel cannot drop support for file systems, devices, or even entire architectures

[–] fruitcantfly@programming.dev 6 points 3 months ago* (last edited 3 months ago) (2 children)

What you are describing is something I would label "skepticism of science", rather than "scientific skepticism".

So out of curiosity, I did a bit of digging. As andioop mentioned, the term "scientific skepticism" has been used to denote a scientifically minded skepticism for a long time. For example, the Wikipedia article on Scientific Skepticism dates back to 2004 and uses this meaning. Similarly the well known skeptic (pro-science/anti-pseudoscience) wiki, RationalWiki, has linked the scientific method and "scientific skepticism" as far back as 2011, and currently straight up equates skepticism with scientific skepticism. You can also find famous skeptics like Michael Shermer using the term back in the early 2000s, in his case in his 'The Skeptic Encyclopedia of Pseudoscience', published in 2002. It was also used in papers such as this sociology paper by Owen-Smith, 2001. This is the meaning of the term that I am familiar with.

However, since about 2020, the term "scientific skepticism" has seen more use as a parallel to "climate skepticism" and "vaccine skepticism". For example, this paper by Ponce de Leon et al. is just one of many I could find via a quick Google Scholar search. This, I take it, is how you use the term.

Personally, I'm probably just gonna keep using "scientific skepticism" to mean "scientifically minded skepticism", but will keep in mind that it can also mean "skepticism of science"

[–] fruitcantfly@programming.dev 8 points 3 months ago (6 children)

Wouldn't scientists be the ones employing "scientific skepticism"?
