hamsda

joined 5 months ago
[–] hamsda@feddit.org 2 points 3 weeks ago* (last edited 3 weeks ago)

I have not run OPNsense, but I can directly compare pfSense as a VM on Proxmox VE vs. pfSense on a ~400€ official pfSense physical appliance.

I don't notice any internet- or LAN-speed difference between the two setups, though I never measured it. The change from VM to physical appliance wasn't planned.

Running a VM firewall just got tiring fast, as I realized that Proxmox VE needs a lot more reboot-requiring updates than pfSense does. And every time you reboot the hypervisor under your pfSense VM, your internet is gone for a short time. Yes, you're not forced to reboot, but I like to do it anyway when it's advised by the people creating the software I use.
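A rough way I check whether the hypervisor actually needs one of those reboots (just a sketch; Proxmox VE is Debian underneath, so comparing the running kernel against the newest installed one works):

# running kernel vs. newest installed kernel; reboot if they differ
uname -r
ls -1 /boot/vmlinuz-* | sort -V | tail -n 1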

Though I gotta say, the pfSense web interface is actually really snappy and fast when running in an x86 VM. Now that I have a Netgate 2100 physical pfSense appliance, the web interface takes a looooong time to respond in comparison.

I guess the most important thing is to test it for yourself and to always keep an easy migration path open, like exporting your firewall settings to a file so you can migrate easily if the need arises.
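In pfSense's case that export is easy; as far as I know the whole config lives in one XML file, so with SSH enabled something like this grabs a dated copy (the hostname is a placeholder):

# pull the complete pfSense config for safekeeping
scp admin@firewall.example.lan:/conf/config.xml ./pfsense-config-$(date +%F).xml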

[EDIT] - Like others, I would also advise heavily against using the same hypervisor for your firewall and other VMs. Bare metal is the most "uncomplicated" in terms of extra workload just to keep your firewall up and running, but if you want to virtualize your firewall, put that VM on its own hypervisor.

[–] hamsda@feddit.org 3 points 1 month ago (1 children)

Sadly, it seems I cannot replace the disks one by one. At least not unless I upgrade the SSD size to something greater than 4 TB at the same time.

The consumer 4 TB SSDs yield 3.64 TiB, whereas the datacenter 4 TB SSDs seem to yield only 3.49 TiB, and as far as I know you cannot replace a ZFS RAID-Z1 drive with a smaller one. I'll have to watch the current consumer SSDs closely and be prepared for when I have to switch them.
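For reference, the swap itself would just be the usual replace once a big enough disk is in, something like this with my pool name (the device paths are placeholders):

# swap one RAID-Z1 member for a new disk, then watch the resilver
sudo zpool replace data_raid /dev/disk/by-id/old-consumer-ssd /dev/disk/by-id/new-datacenter-ssd
sudo zpool status data_raid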

I'm not too sure about buying used IT equipment / used stuff in general from eBay, but I'll have a look, thanks!

[–] hamsda@feddit.org 4 points 1 month ago

Thank you very much for your input, I'll definitely have to go with business drives whenever the current ones die.

Thankfully, I do have monitoring for SMART data and drive health, so I'll be warned before something bad happens.

[–] hamsda@feddit.org 3 points 1 month ago (3 children)

Thank you very much for your input. I'll definitely have to go for the business models whenever the current ones die.

I knew I would make some mistake and learn something new, this being my first real server PC (instead of a mini PC or a Raspberry Pi) and my first RAID. I just wish it hadn't been such a pricey mistake :(

[–] hamsda@feddit.org 1 points 1 month ago* (last edited 1 month ago) (2 children)

Yeah, I guess I should've put like 50% more money into it and gotten some enterprise SSDs instead. Well, what's done is done now.

I'll try replacing the disks with enterprise SSDs when they die, which will probably happen fast, seeing as the wearout is already at 1% after 1 month of low usage.

What do you think about the Samsung OEM Datacenter SSD PM893 3.84 TB?

Thanks for taking the time to answer!

[–] hamsda@feddit.org 2 points 1 month ago* (last edited 1 month ago) (5 children)

So I just looked it up: according to the Proxmox VE "Disks" view, my SATA SSDs show 1% wearout after ~1 month of low usage. That seems pretty horrible.
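If anyone wants to cross-check that number: Proxmox reads it from SMART, so something like this should show the raw value (a sketch; the attribute name varies by vendor, e.g. Wear_Leveling_Count on Samsung SATA drives):

# raw wear attribute straight from the drive
sudo smartctl -A /dev/sda | grep -i -e wear -e percent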

I guess I'm going to wait until they die and buy enterprise SSDs as a replacement.

I'm definitely not going to use HDDs, as the server is in my living room and I'm not going to tolerate constant HDD sounds.

[EDIT] I don't even have a cluster. It's just a single Proxmox VE node on a single server using ZFS, and it's still writing itself to death.

[EDIT2] What do you think about the Samsung OEM Datacenter SSD PM893 3.84 TB?

Thanks for your input!

 

Hello fellow Proxmox enjoyers!

I have questions regarding the ZFS disk IO stats and hope you all may be able to help me understand.

Setup (hardware, software)

I have Proxmox VE installed on a ZFS mirror rpool (2x 500 GB M.2 PCIe SSDs). The data (VMs, disks) resides on a separate ZFS RAID-Z1 data_raid (3x 4 TB SATA SSDs).

I use ~2 TB of all that, 1.6 TB being data (movies, videos, music, old data + game setup files, ...).

I have 6 VMs, all for my use alone, so there's not much going on there.

Question 1 - constant disk writes going on?

I have a monitoring setup (CheckMK) to monitor my server and VMs. It reports constant write IO on the disks, ongoing, without any interruption, at 20+ MB/s.

I think the monitoring gets the data from zpool iostat, so I watched it with watch -n 1 'sudo zpool iostat', but the numbers didn't seem to change.

It has shown the exact same operation counts and read/write bandwidth for the last minute or so (after the while it took me to write this, it now lists 543 read ops instead of 545).

Every 1.0s: sudo zpool iostat

              capacity     operations     bandwidth
pool        alloc   free   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
data_raid   2.29T  8.61T    545    350  17.2M  21.5M
rpool       4.16G   456G      0     54  8.69K  2.21M
----------  -----  -----  -----  -----  -----  -----

The same happens if I use -lv or -w flags for zpool iostat.
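One thing I haven't tried yet: if I read the man page right, zpool iostat without an interval argument only reports averages since the pool was imported, so live numbers would need an explicit interval, e.g.:

# print a fresh sample for the pool every second instead of the since-import average
sudo zpool iostat data_raid 1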

So, are there really constantly 350 write operations going on? Or does it just not update the IO stats all too often?

Question 2 - what about disk longevity?

This isn't my first homelab setup, but it is my first own ZFS and RAID setup. If somebody has any SSD RAID or SSD ZFS experience to share, I'd like to hear it.

The disks I'm using are:

Best regards from a fellow rabbit-hole-enjoyer.

[–] hamsda@feddit.org 6 points 1 month ago* (last edited 1 month ago) (1 children)

I don't know about Tailscale, but it seems Pi-hole has got you covered with local DNS, if you're willing to set the local DNS records manually.

I use Pi-hole as a self-hosted DNS server for all my servers and clients. I don't have many local DNS records (only 2), so if you handle a great number of ever-changing DNS records, this might not be for you.
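Setting a record is just the web UI under Local DNS -> DNS Records. If I remember right, on Pi-hole v5 those entries also end up as plain IP/hostname pairs in /etc/pihole/custom.list, so they're easy to script (the names below are made up):

# /etc/pihole/custom.list - one record per line
192.168.1.10 nas.home.lan
192.168.1.11 proxmox.home.lan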

[–] hamsda@feddit.org 5 points 1 month ago

It does sound like quite a lot somehow. But you can configure everything up to pretty expensive levels, even if it starts out fairly cheap.

At Hetzner I can also rent a real server for 960€ a month, located in Germany:

  • physical hardware in a data center
  • AMD EPYC 9454P, 48 cores / 96 threads
  • 640 GB DDR5 ECC RAM
  • 2x ~4 TB NVMe disks
  • 6x ~8 TB NVMe disks

There you're also paying for the hardware support, i.e. parts getting replaced when necessary etc.

[–] hamsda@feddit.org 9 points 1 month ago (1 children)

To me it seems like:

  • you want to do a lot of stuff yourself on Arch
  • but there's quite a lot of complicated stuff to learn and try along the way

I'd try Proxmox VE and, if you're also searching for a Backup Server, Proxmox Backup Server.

I recommend these because:

  • Proxmox VE is a Hypervisor, you can just spin up Arch Linux VMs for every task you need
  • Proxmox VE, as well as Proxmox BS, is open source
  • you can buy a license for "stable updates" (you get the same updates, but delayed, so problems are fixed before they reach you)
  • includes snapshots, rollbacks, full backups, a firewall (which you can turn on or off per VM), ...

I personally run a Proxmox VE + Proxmox BS setup in 3 companies + my own homelab.

It's not magic: Proxmox VE is literally Debian 13 + QEMU + KVM with a nice web UI. So you know the tech is proven; you just also get an easy-to-use interface instead of virsh console commands or virt-manager.
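You can see that on the CLI too; a VM is just a qm call away (a rough sketch; the VM ID, storage and ISO names depend on your setup):

# create a small VM, attach an Arch ISO, start it
qm create 100 --name arch-test --memory 2048 --cores 2 \
  --net0 virtio,bridge=vmbr0 --scsi0 local-lvm:32 \
  --cdrom local:iso/archlinux-x86_64.iso
qm start 100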

I personally like having a stable infrastructure to run and test my important and experimental stuff on. That's why I go with this instead of managing even the hypervisor myself with Arch.

[–] hamsda@feddit.org 20 points 2 months ago

Thank you very much. I sent this to my coworker who expressed interest in switching to vim :)

[–] hamsda@feddit.org 1 points 3 months ago* (last edited 3 months ago)

Sadly, that did not solve my problems either.

There's still no voice output except for the first sentence in each of the navigation apps, and when I test voice navigation with the OsmAnd development plugin.

The GLaDOS voice is working though, just not when navigating. So, exactly when you actually need it.

[–] hamsda@feddit.org 1 points 3 months ago (1 children)

Uh yeah, GLaDOS voice! I'll try SherpaTTS tomorrow.

I'll report back tomorrow, thank you very much!

 

Dear GrapheneOS community,

I recently switched to GrapheneOS with my new Pixel 9a. All in all it works well, but there are still one or two things I just cannot get to work.

Whenever I start GPS navigation, I hear a voice say a single sentence and then nothing but silence for the rest of the drive.

I tried the following apps:

  • OsmAnd
  • Organic Maps
  • CoMaps

I have RHVoice installed as my TTS engine.

When starting navigation, OsmAnd tells me how long my journey will take and how far I have to drive, and that's the last thing I ever hear from OsmAnd voice navigation.

Organic Maps navigation tells me the first thing I need to do on the drive (e.g. "turn right in 400m") and then not a single word for the rest of the drive.

CoMaps seems to be the same.

If I enable the OsmAnd Development Plugin, I can test voice output from there, which works perfectly. It just does not work when I actually need it, and I have no idea why.

Does anyone know what I'm doing wrong?

[EDIT]

If anyone comes here from Google or somewhere looking for help: sadly, I have no solution to present. I just gave up and uninstalled all navigation / TTS apps.

I tried different navigation apps, so it isn't a navigation app problem. I tried different TTS engines, so it's not a TTS engine problem either.

view more: next ›