curbstickle

joined 3 months ago
[–] curbstickle@anarchist.nexus 3 points 3 weeks ago (1 children)

We shouldn't be complacent or lose hope.

I'm... not exactly a fan of the existence of a state in general, but I'll happily support more dem-socialists and the ousting of establishment democrats (aka "traditional" republicans with a blue sign).

[–] curbstickle@anarchist.nexus 2 points 3 weeks ago* (last edited 3 weeks ago) (17 children)

That usually means something has changed with the storage; I'd bet there's a lingering reference in the .conf to the old mount.

The easiest? Just delete the container and start clean. That's what's nice about containers, by the way! The harder route would be mounting the container's filesystem and taking a look at some logs. Which route do you want to go?
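If you want a quick look before deciding, the container's config is just a text file on the host - assuming the container is ID 102 (swap in your actual ID), something like:

  cat /etc/pve/lxc/102.conf    # look for an mp0/mp1 line still pointing at the old mount

If a mount point line in there references storage that no longer exists, that's your lingering reference.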

For the VM, it's really easy. Go to the VM and open up the console. If you're logging in as root, the commands are fine as-is; if you're logging in as a user, we'll need to add sudo in there (and maybe install some packages / add the user to the sudo group).

  1. Update your packages - apt update && apt upgrade
  2. Install the NFS tools - apt install nfs-common
  3. Create the directory where you're going to mount it - mkdir /mnt/NameYourMount
  4. Let's mount it to test (use your NAS IP and export path) - mount -t nfs 192.168.1.100:/share/dir /mnt/NameYourMount
  5. List out the files and make sure it's working - ls -la /mnt/NameYourMount. If you have an issue here, pause and come back and we'll see what's going on.
  6. If it looks good, let's make it permanent - nano /etc/fstab
  7. Add this line, edited as appropriate - 192.168.1.100:/share/dir /mnt/NameYourMount nfs defaults,x-systemd.automount,x-systemd.requires=network-online.target 0 0
  8. Save and close - ctrl+x then y
  9. Reboot your VM, then log in again and ls -la /mnt/NameYourMount to confirm you're all set
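If you'd rather sanity-check that fstab line before the reboot, a quick and harmless way to test it:

  systemctl daemon-reload      # pick up the new fstab entry
  mount /mnt/NameYourMount     # mounts just that entry; no output means it worked

Either way, the reboot in step 9 is still a good final confirmation.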
[–] curbstickle@anarchist.nexus 72 points 3 weeks ago (3 children)

To be fair, the president has been doing that for years.

[–] curbstickle@anarchist.nexus 7 points 3 weeks ago (3 children)

Kind of.

It's a barometer, and it usually features some degree of losses for the party in power; how big those losses are is an indicator for the midterms. While the president's party tends to lose out overall in off-years, the degree to which it does can vary drastically.

Off-year elections don't typically have sweeping losses or historically significant voter turnout, though. They also don't generate anywhere near the kind of media attention this year saw, or the historic amounts of money put into a mayoral election as we saw in NYC.

We shouldn't downplay the significance of yesterday's election. And it was significant.

[–] curbstickle@anarchist.nexus 15 points 3 weeks ago (5 children)

These weren't midterms, they were an off-year election. The midterms will be next year.

They're usually low turnout, but we saw historic turnout in NY and NJ; not sure about the others offhand.

In off-years you don't typically see this kind of sweeping response; it's absolutely atypical.

I'm not suggesting it's time to stop, just clarifying that this is not the norm.

[–] curbstickle@anarchist.nexus 2 points 3 weeks ago* (last edited 3 weeks ago) (19 children)

OK, we can remove it as an SMB mount, but fair warning: it'll take a few bits of CLI to do this thoroughly.

  • Shut down 101 and 102
  • In the Web GUI, go to the JF container, go to resources, and remove that mount point. Take note of where you mounted it! We're going to mount it back in the same spot.
  • Go to the web GUI, go to Storage, select the SMB mount of the NAS, and select Edit - then uncheck Enable.
  • With it selected, go ahead and click Remove
  • For both 101 and 102, let's make sure they aren't set to start at boot for now. Go to each of them, and under the Options section you'll see "Start at Boot". If they say Yes, change it to No (click Edit or double-click and remove the check from the box).
  • Reboot your server
  • Let's check that the mount unit is gone: go to the host, then Shell, and enter systemctl list-units "*.mount"
  • If you don't see mnt-pve-thenameofthatshareyoujustremoved.mount, it's removed.
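Side note - if you'd rather do that storage disable/remove from the CLI instead of the GUI, pvesm handles it. Assuming the storage was named NAS (swap in yours):

  pvesm set NAS --disable 1    # same as unchecking Enable
  pvesm remove NAS             # drops the storage definition, doesn't touch the data on the NAS

Same end result either way.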

That said - I like to be sure, so let's do a few more things.

  • umount -R /mnt/pve/thatshare - Totally fine if this throws an error
  • Let's check what's actually mounted right now. cat /proc/mounts - a whooole bunch of stuff will pop up. Do you see your network share listed there? If so, it's still mounted - go ahead and umount it again (you can't edit /proc/mounts itself; it's a read-only view straight from the kernel). If it keeps reappearing after a reboot, there's a leftover entry somewhere like /etc/fstab or /etc/pve/storage.cfg to remove instead.
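A quick way to hunt for any of those leftover references in one shot (assuming your share name has "NAS" in it - use whatever yours is actually called):

  grep -i nas /etc/fstab /etc/pve/storage.cfg    # any hit here is a leftover entry worth cleaning up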

OK, you should be all clear. Let's go ahead and reboot one more time to clear things out if you had to make any further changes. If not, let's re-add.

Go ahead and add the NAS back in using NFS in the Storage section like you did previously. You can mount to that same directory you were using before. Once it's there, go back into the Shell, and let's do this again: ls -la /mnt/pve/thenameofyourmount/

Is your data showing up? If so, great! If not, let's find out what's going on.

Now let's add the container mount back. You'll need to add that mount point back in again with: pct set 100 -mp0 /mnt/pve/NAS/media,mp=/media (however you had it mounted before, in that second step).

Now start the container, and go to the console for the container. ls -la /whereveryoumountedit - if it looks good, your JF container is all set and now working with NFS! Go back to the options section, and enable "Start at Boot" if you'd like it to.
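And if you ever want to confirm the mount point is in the container config without opening the console (using 100 from the example - swap in your container ID):

  pct config 100 | grep mp    # should list the mp0 entry you just set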

On to the VM - what distribution is installed there? Debian, Fedora, etc.?

[–] curbstickle@anarchist.nexus 2 points 3 weeks ago (21 children)

For the record, I prefer NFS

And now I think we may have the answer....

OK, so that command is for LXCs, not for VMs. If you're doing a full VM, we'd mount NFS directly inside the VM.

Did you make an LXC or a VM for 102?

If it's an LXC, we can work out the command and figure out what's going on.

If it's a VM, we'll get it mounted with the NFS utilities, but how is going to depend on what distribution you've got running on there (different package names and package managers).
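For reference, it's usually just a couple of commands either way - a rough sketch, with an example IP and export path you'd swap for your own:

  # Debian/Ubuntu
  apt update && apt install nfs-common
  # Fedora / RHEL-likes
  dnf install nfs-utils
  # then, on either one:
  mount -t nfs 192.168.1.100:/share/dir /mnt/NameYourMount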

[–] curbstickle@anarchist.nexus 45 points 3 weeks ago

It hit him right in the fee fees, and badly damaged his already fwagile mascuwinity.

[–] curbstickle@anarchist.nexus 1 points 3 weeks ago (1 children)

OK, let's take a step back then and check things this way.

In the Shell (so Datacenter, then the host, then Shell), if you enter ls -la /mnt/pve/thenameofyourmount/, do you get an accurate and current listing of the contents of your NAS?

[–] curbstickle@anarchist.nexus 2 points 3 weeks ago (3 children)

If you've got nothing under it, yeah.

OK, what I'd probably do is shut down Proxmox, reboot your NAS, wait for the NAS to be fully up and running (check that you can access it from your regular computer over the LAN), then boot up the Proxmox server.

Then run that command again; you should see a result.

It's possible you've got some conflicting stuff going on if you did manual edits for the storage, which may need to be cleaned up.
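If you want to see what Proxmox currently thinks the storage situation is, these are read-only checks - nothing here changes anything:

  pvesm status                 # every storage Proxmox knows about and whether it's active
  cat /etc/pve/storage.cfg     # the storage definitions sitting behind the GUI

If something in there looks like a leftover from a manual edit, that's the conflicting stuff I mean.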

[–] curbstickle@anarchist.nexus 1 points 3 weeks ago (5 children)

What do you get putting in:

showmount -e <ip address of NAS>

[–] curbstickle@anarchist.nexus 1 points 3 weeks ago (7 children)

No worries!

So if you've got docker containers going already, you don't need them to be LXCs.

So why not keep them docker?

Now there are a couple of approaches here. A VM will have a bit higher overhead, but offers much better isolation than an LXC. Conversely, an LXC is lightweight but gives you less isolation from the host.

If we're talking the *arr stack? Meh, make it an LXC if you want. Hell, make it an LXC with Dockge installed, so you can easily tweak your compose files from the web, convert a docker run to compose, etc.

If you have those configs (and their accompanying data) stored on the NAS itself - you don't have to move them. Let's look at that command again...

pct set 100 -mp0 /mnt/pve/NAS/media,mp=/media

So let's say your container data is stored at /opt/dockerstuff/ on your NAS, with subdirectories of dockerapp1 and dockerapp2. Let's say your new LXC is number 101. You have two options:

  • Mount the entire directory

pct set 101 -mp0 /mnt/pve/NAS/opt/dockerstuff,mp=/opt/dockerstuff

  • Mount them individually for each container to get a bit more granular control

pct set 101 -mp0 /mnt/pve/NAS/opt/dockerstuff/dockerapp1,mp=/opt/dockerstuff/dockerapp1

pct set 101 -mp1 /mnt/pve/NAS/opt/dockerstuff/dockerapp2,mp=/opt/dockerstuff/dockerapp2

Either will get you going
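Once it's set, you can sanity-check from the host without even opening the console - assuming 101 from the example:

  pct start 101
  pct exec 101 -- ls -la /opt/dockerstuff    # should show dockerapp1 and dockerapp2 from the NAS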
