Wow, this looks extremely promising.
https://sylve.io/guides/node/settings/authentication/users/
I would want RBAC, and single sign on tied into that, but they do seem to be aware and working to add it.
It's complicated. There's a lot of context to this, and even the debate in general.
One big problem is that there's a lot of money in this. If you "prove" something is real, and pretend it's a novel discovery, then you can try to sell a novel product that capitalizes off of that.
For example, there used to be a big trend in education, "evidence based learning". https://en.wikipedia.org/wiki/Evidence-based_education . The idea was science would be used to discover the best ways to learn/teach.
The problem was that the method of implementation would be software, or trainings. That you buy...
This reddit thread is a snapshot of the anger and frustration from that: https://www.reddit.com/r/Teachers/comments/jj6tvx/im_done_with_evidencebased_educational_research/
And of course, much of it was debunked later. Like learning styles, for example, were debunked. Although there was some good stuff, like spaced repetition, for which there is a FOSS app called Anki.
Psychology is kinda the same. People do science to try to back products, or trainings, which are then sold.
The inability to replicate these studies is ultimately not a failure, but a success. Science is still doing its job.
It looks real. I am interested in the way they sandbox WordPress extensions, which have been the cause of a lot of vulnerabilities, but I am wondering how they sandbox extensions that want more privileged access, like those that replace the content editing and site rendering features.
Well, more like I was interested. From their GitHub:
EmDash depends on Dynamic Workers to run secure sandboxed plugins. Dynamic Workers are currently only available on paid accounts. Upgrade your account (starting at $5/mo) or comment out the worker_loaders block of your wrangler.jsonc configuration file to disable plugins.
Okay, I hath returned. Here is what I am doing with FluxCD and its method of installing helm charts:
Okay, I'm cheating. :/ . I'm using Flux's method where you can have a secret that has values, and then I'm just including those.
But yeah, using an ENV var that pulls from a secret is probably better.
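For reference, the Flux mechanism I'm talking about looks roughly like this. This is a sketch, not my actual config; the release, chart, and secret names are placeholders:

```yaml
# Hypothetical HelmRelease that pulls chart values from a Kubernetes secret.
# All names here are made up for illustration.
apiVersion: helm.toolkit.fluxcd.io/v2
kind: HelmRelease
metadata:
  name: my-app
  namespace: default
spec:
  interval: 10m
  chart:
    spec:
      chart: my-app
      sourceRef:
        kind: HelmRepository
        name: my-repo
  valuesFrom:
    # Flux merges the contents of this secret key into the chart values,
    # so credentials never have to live in the git repo itself.
    - kind: Secret
      name: my-app-values
      valuesKey: values.yaml
```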
a grand scale with the XZ backdoor
The XZ backdoor affected far fewer machines than you might think. It did not affect:
The malicious code never made it into RHEL or Debian. Both of those distros have a model of freezing packages at a specific version. They then only push manually reviewed security updates, ignoring feature updates or bugfixes to the programs they are packaging. This ensures maximum stability for enterprise usecases, but keeping the changes small and reviewable also lets them dodge supply chain attacks like xz (it also enables these distros to have stable auto update features, which I will mention later). Those distros make up a HUGE family of enterprise Linux machines that were simply untouched by this supply chain attack.
As for Linux distros that don't integrate ssh with systemd (or non-systemd distros) not being affected: that was because the malware was inactive in those scenarios. Malicious code did make it there, but it didn't activate. I wonder if that was sloppiness on the part of the malware's author, or intentional, having it activate less frequently as a way of avoiding detection?
Regardless, comparing the XZ backdoor to the recent NPM and other programming language specific package manager supply chain attacks is a huge false analogy. They aren't comparable at all. Enterprise Linux distros have excellent supply chain security, whereas programming language package managers have basically none. To copy from another comment of mine about them:
Debian Linux, and many other Linux distros, have extensive measures to protect their supply chain. Packages are signed and verified by multiple developers before being built reproducibly (I can build and verify an identical binary/package). The build system has layers, such that if only a single layer is compromised, nothing happens and nobody flinches.
Programming language specific package repos have no such protections. A single developer has their key/token/account, and then they can push packages, which are often built on their own devices. There are no reproducible builds to ensure the binaries are from the same source code, and no multi-party signing to ensure that multiple devs would need to be compromised in order to compromise the package.
So what happened, probably, is some developer got phished or hacked, and gave up their API key. And the package they made was popular, and frequently ran unsandboxed on devs personal devices, so when other developers downloaded the latest version of that package, they got hacked too. The attackers then used their devices to push more malicious packages to the repo, and the cycle repeats.
And that’s why supply chain attacks are now a daily occurrence.
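The reproducible-build property above boils down to something very simple: two independent builds of the same source should produce bit-identical artifacts, so anyone can checksum their own build against the archive's binary. A toy illustration (the files here are stand-ins, not real packages):

```shell
# Toy illustration of the reproducible-build check.
# build-a.bin stands in for my local rebuild; build-b.bin for the
# archive's published binary. With a reproducible build, the two
# artifacts are byte-for-byte identical, so their hashes match.
printf 'same source\n' > build-a.bin
printf 'same source\n' > build-b.bin
sha256sum build-a.bin build-b.bin
# If the hashes differ, either the build is not reproducible,
# or someone tampered with one of the artifacts.
```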
And then this:
You should probably turn off Dependabot. In my experience, we get more problems from automatic updates than we would by staying on the old versions until needed.
This drives me insane as well. It's a form of survivorship bias, where people only notice when automatic upgrades cause problems, but completely ignore the way that automatic security upgrades prevent many issues. Nobody cares about some organization NOT getting ransomwared because their webserver was automatically patched. That doesn't make the news the way that auto upgrades breaking things does. To copy from yet another comment of mine:
If your software updates between stable releases break, the root cause is the vendor, rather than auto updating. There exist many projects that manage to auto update without causing problems. For example, Debian doesn't even do features or bugfixes, but only updates apps with security patches for maximum compatibility.
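As a concrete example of that model: on Debian, unattended security-only updates come from the unattended-upgrades package, which can be restricted to the security archive. A sketch of the relevant apt configuration (file paths and option names as shipped by the Debian package; double-check against your release):

```
# /etc/apt/apt.conf.d/20auto-upgrades -- enable the periodic run
APT::Periodic::Update-Package-Lists "1";
APT::Periodic::Unattended-Upgrade "1";

# /etc/apt/apt.conf.d/50unattended-upgrades -- only pull from the
# security archive, so feature updates are never auto-installed
Unattended-Upgrade::Origins-Pattern {
        "origin=Debian,codename=${distro_codename}-security";
};
```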
Crowdstrike auto updating also had issues on Linux, even before the big Windows BSOD incident.
https://www.neowin.net/news/crowdstrike-broke-debian-and-rocky-linux-months-ago-but-no-one-noticed/
It's not the fault of the auto update process, but the lack of QA at Crowdstrike. And it's the responsibility of system administrators to vet their software vendors and ensure the update models in use don't cause issues like this. Thousands of orgs were happily using Debian/Rocky/RHEL with auto updates, because those distros have a model of minimal feature updates/bugfixes and only security patches, ensuring no-fuss security auto updates for around a decade for each stable release, whose software had already been extensively tested. Stories of those breaking are few and far between.
I would rather pay attention to the success stories, than the failures. Because in a world without automatic security updates, millions of lazy organizations would be running vulnerable software unknowingly. This already happens, because not all software auto updates. But some is better than none and for all software to be vulnerable by default until a human manually touches it to update it is simply a nightmare to me.
Wikipedia itself is doing fine, but they have a bunch of super interesting side projects that they don't advertise much, and those aren't doing as well. Wikinews, their news site, is shutting down: https://en.wikipedia.org/wiki/Wikipedia:Wikipedia_Signpost/Single/2026-03-31#News_and_notes (this is really close to April Fools, hopefully I didn't eat the onion. Or hopefully I did?).
My favorite is wikibooks: http://wikibooks.org/ , which are open source textbooks that can be edited wikipedia style. Their programming ones are really high quality. The idea behind those is that you can export a known good frozen version of them as a textbook for a class. Related is also wikiversity, which is course curriculum. It's similar, but different.
But they also have a travel guide, wikivoyage, and more: https://en.wikipedia.org/wiki/Wikipedia:Wikimedia_sister_projects
This is a message to remind myself to share my config later.
I will state that I am using cloudnativepg for postgres.
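For anyone unfamiliar, a cloudnativepg cluster is declared as a Kubernetes custom resource. A minimal sketch (the cluster name and storage size are placeholders, not my actual config):

```yaml
# Hypothetical minimal CloudNativePG cluster definition.
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
  name: my-db
spec:
  instances: 3        # one primary plus replicas, with automatic failover
  storage:
    size: 10Gi        # each instance gets its own persistent volume
```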
The way Forgejo Actions works is that it is not a universal thing for every repo. Each repo can have its own Forgejo Actions runner connected to it, running stuff.
The big benefit of that is that you can make users bring their own actions runners, and not bother to deploy your own.
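Hooking a runner up to a repo looks roughly like this, from memory of the forgejo-runner CLI (the instance URL, token, and label are placeholders; check the Forgejo docs for the exact flags on your version):

```shell
# Register a self-hosted runner against a repo or instance, using the
# registration token from the repo's Actions settings page.
forgejo-runner register \
  --no-interactive \
  --instance https://git.example.com \
  --token <registration-token> \
  --name my-runner \
  --labels docker:docker://node:20

# Then start it:
forgejo-runner daemon
```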
It has newer packages than Debian.
This is not quite true. They have overlapping release cycles. A new Debian release will ship frozen versions of the latest packages, causing it to have newer packages than most Ubuntu releases. Then the new Ubuntu release comes out, and it has newer packages. Ubuntu doesn't universally have newer packages than Debian. The difference is that Debian ONLY does security updates, and doesn't do feature updates or even bugfixes over its lifespan. Ubuntu, on the other hand, does ship feature updates and bug fixes, incrementing the package version as it goes over the lifespan of an Ubuntu release.
Compare the bash versions of the latest Ubuntu stable release versus the current Debian stable, and you'll notice that Debian has a newer bash:
[moonpie@osiris moonpiedumplings.github.io]$ podman run -it --rm debian
root@980ac170ddb4:/# bash --version
GNU bash, version 5.2.37(1)-release (x86_64-pc-linux-gnu)
Copyright (C) 2022 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
This is free software; you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.
root@980ac170ddb4:/# exit
exit
[moonpie@osiris moonpiedumplings.github.io]$ podman run -it --rm ubuntu
Resolved "ubuntu" as an alias (/etc/containers/registries.conf.d/00-shortnames.conf)
Trying to pull docker.io/library/ubuntu:latest...
Getting image source signatures
Copying blob 817807f3c64e done |
Copying config f794f40ddf done |
Writing manifest to image destination
root@1486a1c38699:/# bash --version
GNU bash, version 5.2.21(1)-release (x86_64-pc-linux-gnu)
Copyright (C) 2022 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
This is free software; you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.
This is Ubuntu 24.04, the current stable release. 25.10/questing, the interim release, does have newer or equal package versions compared to Debian. But people don't base distros off of the interim releases of Ubuntu, only the stable LTS releases.
No, they're dual licensed. Canonical has contributors sign a Contributor License Agreement, in which they agree to allow Canonical to distribute alternatively licensed, or proprietary, versions.
This change was somewhat controversial, and partially why Incus was forked from LXD.
Could it be this?
https://linuxcontainers.org/incus/docs/main/howto/network_bridge_firewalld/#prevent-connectivity-issues-with-incus-and-docker
Incus is a fork of LXD, so if you are using LXD the same issues apply.
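If it is that issue, the workaround on the linked page is along these lines. Docker sets the iptables FORWARD policy to DROP, which silently breaks bridged container traffic, and the fix is to explicitly allow the Incus bridge in the DOCKER-USER chain (incusbr0 is the default bridge name; substitute yours):

```shell
# Allow traffic from the Incus bridge past Docker's DROP policy,
# and allow replies back in for established connections.
iptables -I DOCKER-USER -i incusbr0 -j ACCEPT
iptables -I DOCKER-USER -o incusbr0 \
  -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
```

Note these rules don't persist across reboots on their own; you'd need something like iptables-persistent or a firewalld equivalent.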