kumi

joined 3 months ago
[–] kumi@feddit.online 3 points 2 months ago* (last edited 2 months ago)

Right. And if this had been a locally hosted scenario, it'd be like making a post to complain about the service of your electricity company or ISP. That could similarly be reasonably considered on- or off-topic. But I think this sub is more in the spirit of "there is no cloud, just someone else's computer". I'm with the mod on this one.

[–] kumi@feddit.online 2 points 2 months ago* (last edited 2 months ago)

Just a small number of base images (ubuntu:, alpine:, debian:) are routinely synced, and anything else is built in CI from Containerfiles. Those are backed up, so as long as the backups are intact I can recover from loss of the image store even without internet.
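For reference, `skopeo sync` can drive that kind of routine base-image mirroring from a small source list. A sketch (the image tags and the destination registry name are placeholders, not from my setup):

```yaml
# sync.yml -- example source list for `skopeo sync`
# Tags are illustrative placeholders.
docker.io:
  images:
    library/ubuntu: ["24.04"]
    library/alpine: ["3.20"]
    library/debian: ["bookworm"]
```

Run periodically with something like `skopeo sync --src yaml --dest docker sync.yml registry.internal/mirror` (hypothetical internal registry name).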

I also have a two-tier container image storage anyway, which gives redundancy for the built images, but that's more of a side-effect of workarounds. Anyway, the "source of truth" docker-registry that gets pushed to is only exposed internally: to the one who needs to do authenticated pushes, and to the second layer of pull-through caches which the internal servers actually pull from. So backups aside, images in active use already exist in at least three copies (push-registry, pull-registry, and whoever's running it). The mirrored public images are a separate chain altogether.
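The second-tier cache can be a stock `registry:2` instance in proxy mode; a minimal sketch, assuming the internal push registry lives at a hypothetical `registry.internal:5000`:

```yaml
# config.yml for a Docker Registry (registry:2) acting as a pull-through cache
version: 0.1
storage:
  filesystem:
    rootdirectory: /var/lib/registry
http:
  addr: :5000
proxy:
  # Upstream to mirror from -- here, the internal push registry (placeholder name)
  remoteurl: https://registry.internal:5000
```

With `proxy.remoteurl` set, the registry serves cached copies and only reaches upstream on a miss, which is what gives the extra layer of redundancy.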

This has been running for a while, so it's all hand-wired from component services. A dedicated Forgejo deployment looks like it could cover a large part of the above in one package today. Plus it conveniently syncs external git dependencies.

[–] kumi@feddit.online 0 points 2 months ago

If not for political reasons, then why limit the first version to Google/GitHub rather than starting with generic OIDC (which should cover those two anyway)?

We also took your feedback seriously and we are now implementing proper sign-in options like: Google, GitHub (and more coming later)

[–] kumi@feddit.online 4 points 2 months ago* (last edited 2 months ago) (2 children)

Sounds like you have a stable life and stable infra needs, and are either very lucky or really good with backups and keeping secondaries around. Good on you.

[–] kumi@feddit.online 16 points 2 months ago* (last edited 2 months ago) (4 children)

The advantage of using something like Terraform is repeatability, reliability across environments, and roll-backs.

Very valuable things for a stress-free life, especially if this is for more than just entertainment and gimmicks.
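To make that concrete: the same declarative definition applies cleanly to every environment, and `terraform plan` previews exactly what a change (or rollback to an earlier commit) would do. A tiny sketch, with a provider and names that are purely illustrative:

```hcl
# Hypothetical example using the community libvirt provider.
# One definition, parameterized per environment -- rerunnable and diffable.
resource "libvirt_domain" "jellyfin" {
  name   = "jellyfin-${var.environment}"
  memory = 4096
  vcpu   = 2
}
```

Because the state of the infrastructure lives in version control, "what did I change years ago?" becomes `git log` instead of archaeology.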

I'd rather stare at the terminal screen for many hours of my choosing than suddenly having to do it at a bad time for one.. 2... 3... (oh god damn, the networking was relying on having changed that weird undocumented parameter I forgot about years ago, wasn't it) hours. Oh, and a 0-day just dropped for that service you're running on the net. That you built from source (or worse, got from an upstream that is now MIA). Better upgrade fast and reboot for that new kern.. She won't boot again. The boot drive really had to crap out right now, didn't it? Do we install everything from scratch, start Frankensteining, or just bring out the scotch at this point?

Also been at this for a while. I never regretted putting anything in infra-as-code or config management; plenty of times I wish I had. But yeah, complexity can be insidious. Going for high availability and a container cluster service mesh across the board was probably a mistake, on the other hand...

[–] kumi@feddit.online 2 points 2 months ago

NFS works great for media files and the like, but be careful and know what you are doing before you put database storage on it. Databases depend on locking and sync semantics that many NFS setups don't deliver by default.
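For the media-file case, a typical read-mostly mount might look like this (server name and export path are made-up examples):

```
# /etc/fstab -- hypothetical NAS export, mounted read-only for media playback
nas.lan:/export/media  /mnt/media  nfs4  ro,hard,noatime  0 0
```

`hard` (the default) is the safe choice here: on a server outage, I/O blocks and retries instead of silently returning errors to the application.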

[–] kumi@feddit.online 2 points 2 months ago* (last edited 2 months ago)

OotL: What's the state of the drama between Mr Mullenweg and WP Engine regarding this plugin? Wasn't there a hostile takeover and change of hands during last year? I tuned out at some point. Is this hacked plugin maintained by Matt's folks, WP Engine folks, or actually unrelated?

[–] kumi@feddit.online 3 points 2 months ago (1 children)

Chimera Linux is very interesting. Has anyone here tried running it?

[–] kumi@feddit.online 2 points 2 months ago* (last edited 2 months ago) (1 children)

One way to go about the network security aspect:

Make a separate LAN (optionally: VLAN) for the internals of your hosted services, separate from the one you use to access the internet with your main computer. At the start this LAN will probably only have two machines (three if you bring the NAS into the picture separately from Jellyfin):

  • The server running Jellyfin. Not connected to your main network or internet.

  • A "bastion host" with at least two network interfaces: one connected outwards and one inwards. This is not a router (no IP forwarding) and should be separate from your main router. This is the bridge. Here you can run an (optional) VPN gateway and an SSH server, and also an HTTP reverse proxy to expose Jellyfin to the outside world. If you have things on the inside that need to reach out (like package updates), you can run an HTTP forward proxy for that.
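The reverse-proxy piece on the bastion could look roughly like this (nginx assumed; the hostname, cert paths, and internal IP are placeholders; 8096 is Jellyfin's default HTTP port):

```nginx
server {
    listen 443 ssl;
    server_name jellyfin.example.com;

    ssl_certificate     /etc/nginx/certs/fullchain.pem;
    ssl_certificate_key /etc/nginx/certs/privkey.pem;

    location / {
        proxy_pass http://10.0.10.2:8096;   # the Jellyfin box on the internal LAN
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        # WebSocket upgrade so the Jellyfin web client works through the proxy
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}
```

This way only the bastion's outward interface is ever reachable from outside; the Jellyfin box itself never is.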

When it's just two machines you can connect them directly with a LAN cable; when you have more, you add a cheap network switch.

If you don't have enough hardware to split machines up like this, you can do something similar with VMs on one box, but that's a lot of extra complexity for beginners, and you probably have enough new things to familiarize yourself with as it is. Separating physically instead of virtually is a lot simpler to understand and also more secure.

I recommend firewalld for the system firewall.
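A rough sketch of what that looks like on the bastion, using firewalld's built-in zones (the interface names here are assumptions, check yours with `ip link`):

```
# Outward-facing NIC goes in the restrictive "external" zone,
# inward-facing NIC in "internal":
firewall-cmd --permanent --zone=external --change-interface=eth0
firewall-cmd --permanent --zone=internal --change-interface=eth1

# Only allow what the bastion actually serves on the outside:
firewall-cmd --permanent --zone=external --add-service=https
firewall-cmd --permanent --zone=external --add-service=ssh

firewall-cmd --reload
```

Everything not explicitly allowed on the external zone is dropped, which is exactly the posture you want on the bridge machine.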

[–] kumi@feddit.online 25 points 2 months ago* (last edited 2 months ago)

Everything in there is relevant and applies to flatpaks too. Being aware of the risks is important when using alternative distribution methods. With power, responsibility.

[–] kumi@feddit.online 34 points 2 months ago* (last edited 2 months ago)

Tricking users into using Snap without realizing it, making them unknowingly vulnerable to exploits like this, would be really really bad and unethical on Canonical’s part.

That is not what is happening at all.

Just so nobody is confused or gets afraid for their install: getting the Firefox snap installed via Ubuntu's apt package does not make users vulnerable to what is talked about here, and it is just as safe as the apt package version. For Firefox, snaps might even be safer, since you will probably get security patches earlier than with apt upgrades, plus some sandboxing. In both cases you are pulling signed binaries from Canonical's servers.

The post is about third-party fake snaps. If you run a snap install command from a random website or LLM without checking it, or make a typo, then you are at risk. If Ubuntu didn't have snaps, this would be malicious Flatpaks. If Ubuntu didn't have Flatpaks, it would be malicious PPAs. And so on. Whatever hosted resource gets widely popular and allows users to blindly run and install software from third parties will be abused for malware, phishing, typosquatting and so on. This is not the fault of the host. You can have access to all the apps out there you may ever want, or you can safely install all your apps from one trusted source. But it's an illusion that you can have both.

People have opinions about whether snaps are a good idea or not, and that's fine, but there shouldn't be FUD. If you are using Canonical's official snaps and are happy with them, you don't have to switch.

 

tl;dr: There’s a relentless campaign by scammers to publish malware in the Canonical Snap Store. Some gets caught by automated filters, but plenty slips through. Recently, these miscreants have changed tactics - they’re now registering expired domains belonging to legitimate snap publishers, taking over their accounts, and pushing malicious updates to previously trustworthy applications. This is a significant escalation.
Context: Snaps are compressed, cryptographically signed, revertable software packages for Linux desktops, servers, and embedded devices.

 

An overview of the work done on the ALPM project in 2024 and 2025.

 

How to test and safely keep using your janky RAM without compromising stability using memtest86+ and the memmap kernel param.
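For context, the trick the title refers to: once memtest86+ has reported a bad address range, the `memmap` kernel parameter can mark it reserved so the kernel never allocates it. The size and address below are placeholders, substitute the region memtest86+ actually flags:

```
# Raw kernel command-line form: reserve 16M starting at 0x36000000 as bad
memmap=16M$0x36000000

# In /etc/default/grub the '$' must be escaped so it survives shell and
# GRUB expansion (commonly written with backslashes as below), then run
# update-grub / grub-mkconfig:
GRUB_CMDLINE_LINUX_DEFAULT="quiet memmap=16M\\\$0x36000000"
```

You lose the reserved chunk of RAM but keep the stick in service, and a follow-up memtest pass can confirm the rest of the range is clean.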


view more: next ›