Hello fellow Proxmox enjoyers!
I have questions regarding the ZFS disk IO stats and hope you all may be able to help me understand.
Setup (hardware, software)
I have Proxmox VE installed on a ZFS mirror (2x 500 GB M.2 PCIe SSD) called rpool. The data (VM disks) resides on a separate ZFS RAID-Z1 (3x 4 TB SATA SSD) called data_raid.
Of all that, I use ~2 TB, with 1.6 TB being data (movies, videos, music, old data + game setup files, ...).
I have 6 VMs, all for my use alone, so there's not much going on there.
Question 1 - constant disk writes going on?
I have a monitoring setup (CheckMK) for my server and VMs. This monitoring reports constant write I/O on the disks, ongoing without any interruption, at 20+ MB/s.

I think the monitoring gets its data from zpool iostat, so I watched it with watch -n 1 'sudo zpool iostat', but the numbers didn't seem to change.
It has shown the exact same read/write operations and bandwidth for the last minute or so (after the time it took me to write this, it now lists 543 read ops instead of 545).
Every 1.0s: sudo zpool iostat

              capacity     operations     bandwidth
pool        alloc   free   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
data_raid   2.29T  8.61T    545    350  17.2M  21.5M
rpool       4.16G   456G      0     54  8.69K  2.21M
----------  -----  -----  -----  -----  -----  -----
The same happens if I use -lv or -w flags for zpool iostat.
So, are there really 350 write operations going on constantly? Or does zpool iostat just not update its stats very often?
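For what it's worth, this matches the documented zpool iostat behavior: without an interval argument it prints statistics averaged since the pool was imported, which is why the numbers barely move even under watch. Passing an interval (in seconds) makes every line after the first show only the activity during that interval:

```shell
# Without an interval, zpool iostat shows averages since pool import,
# so repeated runs look almost identical. With an interval, each line
# after the first reports activity during that interval only:
sudo zpool iostat data_raid 1
```

If the per-interval lines still show a steady 20+ MB/s, the writes are real and worth tracking down; if they drop to near zero, the constant-looking number was just the long-term average.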
Question 2 - what about disk longevity?
This isn't my first homelab setup, but it is my first ZFS and RAID setup of my own. If anybody has SSD-RAID or SSD-ZFS experience to share, I'd like to hear it.
The disks I'm using are:
- 3x Samsung SSD 870 EVO 4TB for data_raid
- 2x Samsung SSD 980 500GB M.2 for rpool
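To put the write rate from Question 1 into perspective, here is a back-of-envelope endurance calculation. The numbers are assumptions: 2400 TBW is the endurance rating from Samsung's spec sheet for the 870 EVO 4TB, and 21.5 MB/s is the pool-wide write bandwidth from the zpool iostat output above (a long-term average, spread across three disks, with RAID-Z parity adding extra writes per disk), so treat the result as a rough order of magnitude:

```shell
# Rough TBW estimate -- assumed numbers, not measured per-disk wear:
awk 'BEGIN {
  rate_mb_s = 21.5   # pool-wide write bandwidth from zpool iostat
  tbw       = 2400   # assumed endurance rating, Samsung 870 EVO 4TB
  tb_per_day = rate_mb_s * 86400 / 1e6
  days       = tbw / tb_per_day
  printf "%.2f TB/day -> rating reached in ~%.0f days (~%.1f years)\n",
         tb_per_day, days, days / 365
}'
```

For the wear accumulated so far, smartctl -A on the SATA EVOs reports Wear_Leveling_Count (attribute 177) and Total_LBAs_Written (attribute 241), and smartctl -a on the NVMe 980s reports a "Percentage Used" value.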
Best regards from a fellow rabbit-hole-enjoyer.
I did not run OPNsense, but I can directly compare pfSense as a VM on Proxmox VE vs. pfSense on a ~400€ official physical pfSense appliance.
I do not feel any internet- or LAN-speed difference between the two setups, though I did not measure it. The change from VM to physical appliance was not planned.
Running a firewall VM just got tiring fast, as I realized that Proxmox VE needs reboots for updates far more often than pfSense does. And every time you reboot the hypervisor hosting your pfSense VM, your internet is gone for a short time. Yes, you're not forced to reboot; I like to do it anyway when it's advised by the people creating the software I use.
Though I gotta say, the pfSense webinterface is actually really snappy and fast when running on an x86 VM. Now that I have a Netgate 2100 physical pfSense appliance, the webinterface takes a looooong time to respond in comparison.
I guess the most important thing is to test it for yourself and to always keep an easy migration-path open, like exporting firewall-settings to a file so you can migrate easily, if the need arises.
[EDIT] - Like others, I would also advise heavily against using the same hypervisor for your firewall and other VMs. Bare metal is the most "uncomplicated" option in terms of extra workload just to keep your firewall up and running, but if you want to virtualize your firewall, put that VM on its own hypervisor.