Shdwdrgn

joined 2 years ago
[–] [email protected] 4 points 1 day ago (1 children)

This is the Mastodon link, but he is quoting a NYT article (from which I'll quote the meat)...

https://tech.lgbt/@[email protected]/114321303656287669

Immigration judges are employees of the executive branch, not the judiciary, and often approve the Homeland Security Department’s deportation efforts. It would be unusual for such a judge, serving the U.S. Attorney General, to grapple with the constitutional questions raised by Mr. Khalil’s case. She would also run the risk of being fired by an administration that has targeted dissenters.

“This court is without jurisdiction to entertain challenges to the validity of this law under the Constitution,” Judge Comans said as she delivered her ruling, apparently reading from a written statement.

[–] [email protected] 8 points 1 day ago (3 children)

I was just reading a comment on Mastodon that immigration judges are not actual judges but are employed by the Administration. Which means they can't even rule on the constitutionality of the information provided -- so they're really nothing but puppets to make the process appear to be legal.

So the next question is... can the ruling be appealed before a real judge?

[–] [email protected] 1 points 1 day ago

Ah that's good. Disk space isn't an issue here, I have around 105TB of storage, but my desktop is an older machine with only 24GB of memory so being lightweight is somewhat of a requirement.

[–] [email protected] 1 points 1 day ago (2 children)

Agreed on Debian stable. Long ago I tried running servers under Ubuntu... that was all fine until the morning I woke up to find all of the servers offline because a security update had destroyed the network card drivers. Debian has been rock-solid for me for years and buying "commercial support" basically means paying someone else to do google searches for you.

I don't know if I've ever tried flatpaks, I thought they basically had the same problems as snaps?

[–] [email protected] 1 points 1 day ago

I'm not sure about other distros, I've just heard a lot of complaints about snaps under Ubuntu. Cura was the snap I tried on my system that constantly crashed until I found a .deb package. Now it runs perfectly fine without sucking up a ton of system memory. Thunderbird is managed directly by Debian, and firefox-esr is provided by a Mozilla repo, so they all get installed directly instead of through 3rd-party software (although I think I tried upgrading Firefox to a snap version once and it was equally unstable). Now I just avoid anything that doesn't have a direct installer.

[–] [email protected] 1 points 1 day ago (7 children)

That's what I was thinking too... If they're running Ubuntu then they're probably installing packages through snaps, and that's always been the worst experience for me. Those apps lag down my whole system, crash or lock up, and generally are unusable. I run Debian but have run into apps that wanted me to use a snap install. One package I managed to find a direct installer that is rock-solid in comparison to the snap version, and the rest of the programs I abandoned.

Firefox (since it was mentioned) is one of those things I believe Ubuntu installs as a snap, despite there being a perfectly usable .deb package. I applaud the effort behind snap and others to make a universal installation system, but it is so not there yet and shouldn't be the default of any distro.
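For anyone who wants the .deb route on Ubuntu, the usual trick (a sketch — this assumes Mozilla's APT repo at packages.mozilla.org has already been added per their current instructions; verify the origin name against their docs) is to pin that repo above the snap transition package:

```
# /etc/apt/preferences.d/mozilla (example file name)
Package: *
Pin: origin packages.mozilla.org
Pin-Priority: 1000
```

With a priority above 1000-ish, apt prefers Mozilla's real .deb over Ubuntu's transitional package that pulls in the snap.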

[–] [email protected] 1 points 4 days ago

Ah, that's handy — I didn't know the status command could show more detail for individual interfaces! I still use /etc/network/interfaces to set up each port, so systemd shows them all as unmanaged. Maybe some day I'll try switching to that kind of setup.

Where do you find the default link files? There's nothing relevant under /usr/share/doc/systemd/. I had to do a lot of online reading to find an example of selecting them by MAC address, and the NamePolicy= line was critical to making it actually work.

I don't suppose you happen to know of a way for systemd to manage a DSL connection (CenturyLink)? The old pppd setup seems to be getting hammered by systemd for some reason even though there's no service file for it, but ppp0 refuses to try connecting on the new server until I can log in, stop it, and restart it again. It's like it is trying to connect way too early in the boot and gets locked up.

[–] [email protected] 7 points 5 days ago

Oh China did much more than that... Over the weekend they basically imposed a ban on shipping rare minerals to the US. The stuff used for all those chip fabs Trump wanted to build in the US. Minerals used for medical equipment. Minerals used to make bullets hard or allow missiles to aim for targets. They essentially shut out the majority of tech and military in the US. China knows how to deal with an idiot, and the idiot's response was "more tariffs, that'll teach 'em!"

[–] [email protected] 1 points 5 days ago (2 children)

I did run across it and tried doing a reload, but according to the help file that doesn't do anything with the link files? I tried networkctl status, but it doesn't show any info about which files are being used, so I'm not sure what you're seeing. It only gives me a list of the IPs used by each interface, plus some log info at the end about ppp0 going up and down while I was setting it up. If it helps, this is what one of my link files looks like...

[Match]
MACAddress=24:6e:96:4e:21:73

[Link]
NamePolicy=
Name=wan0
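
One thing I did find for checking which .link file udev actually matched (a sketch — adjust the interface name for your system; this only simulates the builtin and changes nothing):

```shell
# Ask udev which .link file matches a given interface (here wan0).
# test-builtin simulates the net_setup_link step without applying it.
udevadm test-builtin net_setup_link /sys/class/net/wan0 2>&1 | grep -i 'link'
```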
[–] [email protected] 7 points 6 days ago

DOW is down around 1000 points already, but I suspect with the news of China stopping all rare mineral shipments to the US, it's gonna be a bloodbath tomorrow morning.

[–] [email protected] 2 points 6 days ago

Yeah, frustrating is definitely one word for it. I was up until 4am Saturday morning trying to get this one issue resolved; everything else worked almost perfectly on the new firewall setup, except I couldn't get out to the internet. I had already tried renaming the files earlier and that didn't do the trick, so I'm not sure why it finally decided to start working, but all eight ports are correctly configured now. (Not that I have much faith in what will happen down the road if one of the network adapters needs to be replaced.)

And the only reason I had to fight with giving all the network ports new names is because "predictable naming" is NOT... Turns out if you cold boot the machine the interfaces get named one way, and if you do a reboot they get a different set of names, so I had no choice about renaming them by MAC address.

Oh well, maybe someone else will see the post and offer some suggestions. I can't imagine having to do this again on my other servers when I upgrade them from Buster.

 

I built a new firewall under Debian 12. The machine has eight network ports, and during configuration I accidentally used the same name for a couple of the ports in the files under /etc/systemd/network/*.link. I ended up with two link files referencing two different MAC addresses but naming each of them as WAN0, and once systemd got that configuration it wouldn't let it go.

From what I could find online, normally I would just issue systemctl daemon-reload followed by an update-initramfs -u, and after a reboot systemd should have had the updated information... but no dice this time. The way I finally discovered the problem was when I noticed in ifconfig that my wan0 port was pointing to the wrong MAC address (even though the link files had been corrected).

After several hours of fighting with it, I finally managed to get it to work by renumbering all of my link files, and now the information for each port matches up correctly. But my real question here is WHY did systemd refuse to read updated link files? Is there another step I should have taken which was mysteriously never mentioned in any of the dozens of web pages I looked at trying to fix this? I really need to understand the proper process for getting it to correctly use these files so I can maintain the machine in the future.
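
From what I've pieced together since (hedged — this is my reading of the man pages, not something any of those web pages spelled out): .link files are applied by systemd-udevd via its net_setup_link builtin, not by the service manager, so systemctl daemon-reload never touches them. That would explain the behavior. The refresh steps that should work, as a sketch:

```shell
# .link files are read by systemd-udevd, not the service manager,
# so after editing files under /etc/systemd/network/:

# tell udevd to re-read its configuration
udevadm control --reload

# rebuild the initramfs so the early-boot copies match the edited files
update-initramfs -u

# renaming only happens on device add events, so reboot afterwards
```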

(God I miss the reliability of udev already)

 

I'm building a new rack server (PowerEdge R620) and am using the option "consoleblank=600" in the GRUB_CMDLINE_LINUX setting. During the setup I grabbed the wrong USB stick and installed Bullseye, where screen blanking was working correctly. Since I had already finished nearly all the configuration this week, I thought it would be easier to just do a regular dist-upgrade than reload the whole system.

After upgrading to Bookworm and rebooting, I notice that now when the screen blanking is supposed to kick in (which normally just turns off the display), I am instead getting what looks like rolling static on the screen. I have several other R620 racks running Buster so I know the screen blanking should work with this hardware, but this appears to be an issue specific to Bookworm.

Note that even when I try something like setterm -blank 1 or setterm -powerdown 1 I get the same resulting static after 1 minute. To be clear, this is specifically for the command line, I do not run desktops on my servers.
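
One sanity check that might help narrow it down — confirming the kernel actually picked up the consoleblank value (a sketch; the sysfs path should be the same on any recent kernel):

```shell
# Show the console blanking interval the kernel is actually using,
# in seconds (0 means blanking is disabled)
cat /sys/module/kernel/parameters/consoleblank
```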

A google search for the problem has been unsuccessful so I'm hoping someone can point me to a solution or help with the proper search terms.

 

I'm wondering if anyone has found (free) sources of data to use for live elections results, specifically the Presidential race? I've been building a map of poll results but would also like to put something together to watch the race tomorrow night.

 

A 1930s-era breakthrough is helping physicists understand how quantum threads could weave together into a holographic space-time fabric.

 

I have an annoying problem on my server and google has been of no help. I have two drives mirrored for the OS through mdadm, and I recently replaced them with larger versions through the normal process of replacing one at a time and letting the new drive re-sync, then growing the raids in place. Everything is working as expected, with the exception of systemd... It is filling my logs with messages of timing out while trying to locate both of the old drives that no longer exist. Mdadm itself is perfectly happy with the new storage space and has reported no issues, and since this is a server I can't just blindly reboot it to get systemd to shut the hell up.

So what's the solution here? What can I do to make this error message go away? Thanks.

[Update] Thanks to everyone who made suggestions below, it looks like I finally found the solution in systemctl daemon-reload however there is a lot of other great info provided to help with troubleshooting. I'm still trying to learn the systemd stuff so this has all been greatly appreciated!
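
For anyone who lands here with the same problem, a sketch of what worked (unit names will differ on your system):

```shell
# Re-read unit files; systemd drops the generated device units
# for drives that no longer exist
systemctl daemon-reload

# Confirm the old /dev/sdX device units are gone from systemd's view
systemctl list-units --type=device --all
```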

 

Just in case there are others like myself who rarely check reddit any more, I thought it would be helpful to cross-post this. It won't look like much unless you have the solar eclipse glasses, but I plan to break out my tracker and camera (with solar filters!) to try and get some pics.

 

I have been struggling with this for over a month and still keep running into a brick wall. I am building a new firewall which has six network interfaces, and want to rename them to a known order (wan[0-1], and eth[0-3]). Since Bullseye has stopped honoring udev rules, I have created link files under /etc/systemd/network/ for each interface based on their MAC address. The two WAN interfaces seem to be working reliably but they're not actually plugged into anything yet (this may be an important but untested distinction).

What I've found is that I might get the interfaces renamed correctly when logging in from the keyboard, and this continues to work for multiple reboots. However if I SSH into the machine (which of course is my standard method of working on my servers) it seems to destroy systemd's ability to rename the interface on the next boot. I have played around with the order of the link file numbers to ensure the renumbering doesn't have the devices trying to step on each other, but to no avail. Fixing this problem seems to come down to three different solutions...

  • I can simply touch the eth*.link files and I'm back up after a reboot.
  • Sometimes I have to get more drastic, actually opening and saving each of the files (without making any changes). WHY these two methods give me different results, I cannot say.
  • When nothing else works, I simply rename one or more of the eth*.link files, giving them a different numerical order. So far it doesn't seem to matter which of the files I rename, but systemd sees that something has changed and re-reads them.

Another piece of information I ran across is that systemd does the interface renaming very early in the boot process, even before the filesystems are mounted, and that you need to run update-initramfs -u to create a new initrd.img file for grub. OK, sounds reasonable... however I would expect the boot behavior to be identical every time I reboot the machine, and not randomly stop working after I sign in remotely. I've also found that generating a new initrd.img does no good unless I also touch or change the link files first, so perhaps this is a false lead.
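
For reference, this is the shape of link file that eventually worked for me (the MAC address here is an example placeholder):

```ini
# /etc/systemd/network/10-wan0.link
# Comments must be on their own line in systemd config files.
[Match]
MACAddress=aa:bb:cc:dd:ee:01

[Link]
NamePolicy=
Name=wan0
```

The empty NamePolicy= is the critical part — it disables the slot/path/mac policies that would otherwise override Name=. After editing, run update-initramfs -u so the early-boot copy matches.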

This behavior just completely baffles me. Renaming interfaces based on MAC addresses should be an extremely simple task, and yet systemd is completely failing unless I change the link files every time I remote connect? Surely someone must have found a reliable way to change multiple interface names in the years since Bullseye was released?

Sorry, I know this is a rant against systemd and this whole "predictable" naming scheme, but all of this stuff worked just fine for the last 24 years that I've been running Linux servers; it's not something that should require any effort at all to set up. What do I need to change so that systemd does what it is configured to do, and why is something as simple as a remote connection enough to completely break it when I do get it to work? Please help save my sanity!

(I realize essential details are missing, but this post is already way too long -- ask what you need and I shall provide!)

tl;dr -- Systemd fails to rename network interfaces on the next cycle if I SSH in and type 'reboot'

1
submitted 2 years ago* (last edited 2 years ago) by [email protected] to c/[email protected]
 

I've been running systems up to Buster and have always had the 'quiet' option in the grub settings to show the regular service startup messages (the colored ones showing [ok] and such but not all the dmesg stuff). I just upgraded a server to bullseye and there are zero messages being displayed now except an immediate message about not being able to use IRQ 0. Worse, google can't seem to find any information on this. If I remove the quiet option from grub then I see those service messages again, along with all the other stuff I don't need.

What is broken and how do I fix this issue? I assumed it would be safe to upgrade by now but this seems like a pretty big problem if I ever need to troubleshoot a system.

[Edit] In case anyone else finds this post searching for the same issue… Apparently the trick is that now you MUST install plymouth, even on systems that do not have a desktop environment. For whatever reason plymouth has taken over the job of displaying the text startup messages now. Keep your same grub boot parameters (quiet by itself, without the splash option) and you will get the old format of startup messages showing once again. It’s been working fine the old way for 20+ years but hey let’s change something just for the sake of confusing everyone.

[Edit 2] Thanks to marvin below, I now have a final solution that no longer requires plymouth to be installed. Edit /etc/default/grub and add systemd.show_status=true to GRUB_CMDLINE_LINUX_DEFAULT. In my case the full line is:

GRUB_CMDLINE_LINUX_DEFAULT="quiet systemd.show_status=true"

Don't forget to run update-grub after you save your changes.

 

Just curious if any such communities exist here. I built a DIY weather station from 3D prints and an ESP8266, and am always looking for improvements on the design, but after a massive downpour yesterday I'm also looking for tips on more accurately calibrating my rain gauge.
