a_fancy_kiwi

joined 2 years ago
[–] a_fancy_kiwi@lemmy.world 2 points 1 week ago

I’d love to, but it’s a chicken-and-egg thing. Regular people don’t understand bitcoin, let alone monero. On top of that, you still pay fees converting from a currency to monero and again from monero back to a currency, so there’s still a middleman :/

[–] a_fancy_kiwi@lemmy.world 1 points 1 week ago

thanks, I'll look into it. Much appreciated

[–] a_fancy_kiwi@lemmy.world 2 points 1 week ago* (last edited 1 week ago)

I understand your view and sympathize deeply, but there is a lot wrong in the world today, and if I have to divert energy somewhere to try and change something, Patreon's 90/10 split is at the bottom of the list for me at the moment. Regular working-class people aren't getting that good of a deal at their jobs. I'm not even talking 90/10; just getting 0.01% of the profits in a 1000-person org for their contribution would probably be life changing.

You are right but our frames of reference are different.

[–] a_fancy_kiwi@lemmy.world 8 points 1 week ago (4 children)

I’ve never looked into adding GitHub releases to FreshRSS. Any tips for getting that set up? Is it pretty straightforward?

[–] a_fancy_kiwi@lemmy.world 20 points 1 week ago* (last edited 1 week ago) (2 children)

I wouldn’t say these services are nothing. Are they worth 10%? Eh.

A 90/10 split for content creators who otherwise wouldn’t know how to build and operate their own platform doesn’t sound like a terrible deal. It’s not amazing, but if there were better options, Patreon might not be so popular.

Edit: I want to clarify. Patreon is a for-profit company that has apparently already tried raising prices and backtracked. Eventually, Patreon will try to squeeze more profit out of creators, and the user base will be big enough that Patreon will have the leverage to do so; we’ve all seen it before. I’m not saying Patreon is a good company, I’m not saying they won’t be dicks in the future, and I’m not saying the system as it stands is good. I’m only saying 10% isn’t a bad deal when so many other options are worse (e.g. Apple taking 30%).

[–] a_fancy_kiwi@lemmy.world 19 points 1 week ago* (last edited 1 week ago) (9 children)

What's wrong with Patreon? They advertise a 10% fee. It may be a little more complicated than that, but 10% seems pretty reasonable considering the services they offer.

[–] a_fancy_kiwi@lemmy.world 1 points 3 weeks ago

TIL. Thanks for the information

[–] a_fancy_kiwi@lemmy.world 1 points 3 weeks ago

I’m currently not in a situation where swap is being used, so I think my system is doing fine right now. I’m not against swap; I get that it’s better to have it than not, but my intention was to figure out how close my system is getting to using swap. If it went from not using swap at all to using it constantly, I’d probably want to upgrade my RAM, right? If nothing else, just to avoid system slowdowns and unneeded wear on my SSD.
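
In case it's useful to anyone else, these are the usual ways to watch for that (rough sketch, stock Linux tools, nothing ZFS-specific):

free -h           # the Swap row shows how much swap is in use right now
swapon --show     # lists swap devices/files and how full they are
vmstat 5          # nonzero si/so columns mean pages are actively being swapped in/out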

[–] a_fancy_kiwi@lemmy.world 1 points 3 weeks ago (2 children)

From what I can tell, my system isn’t currently using swap at all but it does have 8GB of available swap if needed.

To make sure I’m following what you are saying: if I upgraded my system to 64GB and changed nothing else, and let’s assume ZFS didn’t try caching more stuff, would there still be a potential for my system to use swap just because the system wanted to, even if it wasn’t memory constrained?
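
(Side note for anyone else reading: I believe the tunable that governs that behavior is vm.swappiness; higher values make the kernel more willing to swap out idle pages even when RAM isn't exhausted.)

cat /proc/sys/vm/swappiness    # Ubuntu defaults to 60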

[–] a_fancy_kiwi@lemmy.world 4 points 3 weeks ago (1 children)

Came across some more info that you might find interesting. If true, htop is ignoring the cache used by ZFS but accounting for everything else.

link

[–] a_fancy_kiwi@lemmy.world 1 points 3 weeks ago* (last edited 3 weeks ago)

Assuming the info in this link is correct, ZFS is using ~20GB for caching, which makes htop's ~8GB of in-use memory make sense when compared with the results from cat /proc/meminfo. This is great news.

My results after running cat /proc/spl/kstat/zfs/arcstats:

c                               4    19268150979
c_min                           4    1026222848
c_max                           4    31765389312
size                            4    19251112856
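
For anyone else who lands here: the arcstats counters are in bytes, so converting size is a quick one-liner, and subtracting the ARC from free's "used" roughly reproduces htop's number (this assumes the same arcstats file as above):

awk '$1 == "size" {printf "ARC size: %.2f GB\n", $3/1e9}' /proc/spl/kstat/zfs/arcstats
# prints ~19.25 GB for the numbers above
# free showed ~26Gi used; take away ~19GB of ARC and you land right around htop's 8.35G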
 

I recently noticed that htop displays a much lower 'memory in use' number than free -h, top, or fastfetch on my Ubuntu 25.04 server.

I am using ZFS on this server and I've read that ZFS will use a lot of RAM. I also read a forum post where someone commented that htop doesn't show caching done by the kernel, but I'm not sure how to confirm that ZFS is what's causing the discrepancy.

I'm also running a bunch of docker containers and am concerned about stability since I don't know which number I should be looking at. I either have a usable ~22GB of available memory left, ~4GB, or ~1GB depending on the tool. Is htop the better metric to use when my concern is available memory for new docker containers, or are the other tools better?

Server Memory Usage:

  • htop = 8.35G / 30.6G
  • free -h =
               total        used        free      shared  buff/cache   available
Mem:            30Gi        26Gi       1.3Gi       730Mi       4.2Gi       4.0Gi
  • top = MiB Mem : 31317.8 total, 1241.8 free, 27297.2 used, 4355.9 buff/cache
  • fastfetch = 26.54GiB / 30.6GiB

EDIT:

Answer

My Results

tldr: all the tools are showing correct numbers. Htop seems to be ignoring ZFS cache. For the purposes of ensuring there is enough RAM for more docker containers in the future, htop seems to be the tool that shows the most useful number with my setup.

 

This is a continuation of my other post

I now have homeassistant, immich, and authentik docker containers exposed to the open internet. Homeassistant has built-in 2FA, and authentik, which supports 2FA, is handling authentication for immich. I went ahead and blocked connections from every country except for my own via cloudflare (I'm aware this does almost nothing, but I feel better about it).

At the moment, if my machine became compromised, I wouldn't know. How do I monitor these docker containers? What's a good way to block IPs based on failed login attempts? Is there a tool that could alert me if my machine was compromised? Any recommendations?

EDIT: Oh, and if you have any recommendations for settings I should change in the cloudflare dashboard, that would be great too; there's a ton of options in there and a lot of them are defaulted to "off"

 

tldr: I'd like to set up a reverse proxy with a domain and an SSL cert so my partner and I can access a few self-hosted services over the internet, but I'm not sure what the best/safest way to do it is. Asking my partner to use tailscale or wireguard is asking too much, unfortunately. I was curious to know what you all recommend.

I have some services running on my LAN that I currently access via tailscale. Some of these services would benefit from being accessible on the internet (e.g. Immich sharing via a link, switching over from Plex to Jellyfin without requiring my family to learn how to use a VPN, homeassistant voice stuff, etc.), but I'm kind of unsure what the best approach is. Hosting services on the internet has risk, and I'd like to reduce that risk as much as possible.

  1. I know a reverse proxy would be beneficial here so I can put all the services on one box and access them via subdomains, but where should I host that proxy? On my LAN using a dynamic DNS service? In the cloud? If in the cloud, should I avoid a plan where you share CPU resources with other users and get a dedicated box instead?

  2. Should I purchase a memorable domain or a domain with a random string of characters so no one could reasonably guess it? Does it matter?

  3. What's the best way to geo-restrict access? Fail2ban? Realistically, the only people that I might give access to live within a couple hundred miles of me.

  4. Any other tips or info you care to share would be greatly appreciated.

  5. Feel free to talk me out of it as well.

EDIT:

If anyone comes across this and is interested, this is what I ended up going with. It took an evening to set all this up and was surprisingly easy.

  • domain from namecheap
  • cloudflare to handle DNS
  • Nginx Proxy Manager for reverse proxy (seemed easier than Traefik and I didn't get around to looking at Caddy)
  • Cloudflare-ddns docker container to update my A records in cloudflare
  • authentik for 2 factor authentication on my immich server
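
For the curious, the container side is only a couple of docker run commands. This is a rough sketch from memory; I'm assuming the commonly used jc21/nginx-proxy-manager and oznu/cloudflare-ddns images here, so adjust the image names, paths, and API token to whatever you actually run:

# Nginx Proxy Manager: 80/443 for traffic, 81 for the admin UI
docker run -d --name npm \
  -p 80:80 -p 443:443 -p 81:81 \
  -v /srv/npm/data:/data \
  -v /srv/npm/letsencrypt:/etc/letsencrypt \
  --restart unless-stopped \
  jc21/nginx-proxy-manager:latest

# DDNS updater: keeps the Cloudflare A record pointed at my home IP
docker run -d --name cloudflare-ddns \
  -e API_KEY=<cloudflare-api-token> \
  -e ZONE=example.com \
  -e PROXIED=true \
  --restart unless-stopped \
  oznu/cloudflare-ddns:latest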
 

I've been interested in building a DIY NAS out of an SBC for a while now. Not as my main NAS, but as a backup I can store offsite at a friend or relative's house. I know any old x86 box will probably do better; this project is just for the fun of it.

The Orange Pi 5 looks pretty decent with its RK3588 chip and M.2 PCIe 3.0 x4 connector. I've seen some adapters that can turn that M.2 slot into a few SATA ports or even a full x16 slot which might let me use an HBA.

Anyway, my question is: assuming the CPU isn't a bottleneck, how do I figure out what kind of throughput this setup could theoretically give me?

After a few google searches:

  • PCIe Gen 3 x4 should give me 4 GB/s throughput
  • that M.2 to SATA adapter claims 6 ~~GB/s~~ Gb/s throughput
  • a single 7200rpm hard drive should give about 80-160MB/s throughput

My guess is that, ultimately, I'm limited by that 4 GB/s throughput on the PCIe Gen 3 x4 slot, but since I'm using hard drives, I'd never get close to saturating that bandwidth. Even if I were using 4 hard drives in a RAID 0 config (which I wouldn't do), I still wouldn't come close. Am I understanding that correctly? Is it really that simple?
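
If anyone wants to sanity-check the math: PCIe 3.0 runs at 8 GT/s per lane with 128b/130b encoding, so the back-of-the-envelope comparison (assuming ~160 MB/s per drive) looks like this:

python3 -c "lane = 8e9 * 128/130 / 8; print(f'x4 link: {4*lane/1e9:.2f} GB/s  vs  4 drives: {4*0.16:.2f} GB/s')"

Four spinning drives top out around 0.64 GB/s against roughly 3.9 GB/s of usable link bandwidth, so yes, it really is that simple; the drives are the bottleneck.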

 

PSA

After updating to tvOS 17, my Sonos Beam soundbar started making weird crackling sounds and music sounded tinny. Turns out, I had to change the audio format in the Apple TV settings from Stereo to Dolby Digital 5.1 to fix the issue.

Not sure what I had that setting set to before, but I'm leaning toward the idea that the update reset the audio format back to the default. If you are having sound issues after updating, that might be the cause.

 

I occasionally find myself reinstalling Home Assistant, and every time I do, I get stuck on two steps because I forgot the commands and didn't write them down the last time. I'm writing them below mainly for myself, but also for anyone else who may get stuck. For future reference, I'm using Ubuntu 23.04 with Virt-Manager.

Before you begin the installation of the provided qcow2 image, you might want to resize that image from 32G to whatever size you want, e.g.:

qemu-img resize haos_ova-10.3.qcow2 +68G
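
You can confirm the resize took before installing; qemu-img info just reads the image header:

qemu-img info haos_ova-10.3.qcow2    # "virtual size" should now show 100 GiB (32G + 68G)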

Next, you might want to make a network bridge device. Navigate to your netplan folder and back up the YAML file that's in there (your file may be named differently):

cd /etc/netplan

cp ./01-network-manager-all.yaml ./01-network-manager-all.yaml.old

Edit the yaml config.

nano ./01-network-manager-all.yaml

Change the renderer to networkd and add the bridge device (br0). Your ethernet device may not be named enp12s0; make sure to use your own ethernet device name. If you are on wifi, look up a netplan wifi config and make adjustments as needed.

network:
  renderer: networkd
  ethernets:
    enp12s0:
      dhcp4: true
  version: 2
  bridges:
    br0:
      dhcp4: true
      interfaces:
        - enp12s0
      parameters:
        stp: true

Save the file, then generate and apply the new netplan. WARNING: if you are hosting this on your own network, it's possible the Ubuntu host's IP could change. If you were doing these steps over SSH, you might need to find the new IP and reconnect. Static IPs can be set in the netplan config, but I usually just set one from my router settings afterwards, which is probably why the IP changed.

netplan generate

netplan apply
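
Optional, but I like to confirm the bridge actually came up and grabbed an address before touching the VM (netplan try is also handy over SSH, since it rolls back automatically if you lose the connection and can't confirm):

ip -br addr show br0      # the bridge should have picked up the DHCP address
networkctl status br0     # systemd-networkd's view of the bridge and its member interface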

Now just go through the installation process, and when you select your network device, make sure you select "Bridge Device" and that the device name is "br0".

Edit 12/15/23 - well, I rebuilt my server again. I used regular Ubuntu desktop this time, and for the life of me I couldn't get networking to function properly. I ended up buying an Ethernet card and passing it through to the VM.
