It's not federated, just easy to self host and point custom clients at.
moonpiedumplings
Faster than my edits, I see.
Docker Compose files don't really need to be maintained, though. As long as the app doesn't need new components, old Compose files should work.
EDIT: Oops, it does look like spacebarchat's Docker images were last updated over 2 years ago:
https://hub.docker.com/r/spacebarchat/server
EDIT2: Although the Docker Hub image is outdated, I think their GitHub repo has an action to auto-build Docker images on pushes. Still investigating.
EDIT3: Okay, they don't actually seem to be run.
But using nix to build a docker image is pretty cool.
EDIT4: Oh shit, the Docker image build workflows were added just 2 hours ago. Of course they haven't been run!
Docker support soon, probably.
EDIT5: the workflow ran, but it looks like it's private for now.
https://github.com/spacebarchat/spacebarchat
Literally a reverse-engineered Discord, made open source.
It's not that hard though. There are companies that offer data recovery as a service. If the value of the data on those drives exceeds the cost of those services then it becomes worth it to fish one of the drives out of the dumpster and take it there.
This is not truly foolproof. Data can still be recovered from the spinning metal platter since it can theoretically be removed and put into a recovery device, even in a broken state.
In addition to that, hard drives/SSDs sometimes have small flash memory chips, from which data can sometimes be recovered.
If you want it to actually be unrecoverable, then you have to ensure all parts that store data are truly wiped, which is more than just the core platter. Or just use encryption and throw away the key, since all data going through the tiny OS on these devices will be encrypted. Or just store them forever in a vault.
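The encrypt-and-throw-away-the-key approach can be sketched in a few lines. This is a toy (a SHA-256 counter-mode keystream standing in for the AES that real self-encrypting drives and LUKS use), just to illustrate that destroying the key leaves only useless ciphertext on the media:

```python
import os
import hashlib

def keystream(key: bytes, length: int) -> bytes:
    # Toy keystream: SHA-256 in counter mode. Real drives use AES;
    # this only illustrates the concept.
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def xor(data: bytes, ks: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(data, ks))

key = os.urandom(32)                      # the drive's media encryption key
plaintext = b"secret customer database"
ciphertext = xor(plaintext, keystream(key, len(plaintext)))

# With the key, decryption is trivial...
assert xor(ciphertext, keystream(key, len(ciphertext))) == plaintext

# ..."secure erase" = forget the key. The platter still physically holds
# the ciphertext, but without the 256-bit key it is unrecoverable.
key = None
```

This is also why "instant secure erase" on self-encrypting drives is fast: the firmware regenerates one key instead of overwriting terabytes of platter.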
Unless you are running at really large scales, or at really small scales and trying to fit stuff that doesn't quite fit, memory compression may not be a significant enough optimization to spend a lot of time experimenting with. But I'm bored and currently on an 8 GB device, so here are my thoughts dumped out from my recent testing:
Zram vs Zswap (can be done at the hypervisor or in the guest):
- One or the other is commonly enabled on many modern distros. It is a perfectly reasonable position to simply use the distro's defaults and not push it any further
- Zram has much, much better compression, but suffers from LRU inversion. Essentially, after zram is full, fresh pages (memory) go to the disk swap instead. Since these pages will probably be needed soon, it is slower to get them from disk than from zram.
- Zswap has much, much worse compression, but cold, unused pages are moved to disk swap automatically, freeing up space
- I am investigating ways to get around the above. See my thoughts on this and other differences here: https://github.com/moonpiedumplings/moonpiedumplings.github.io/blob/main/playground/asahi-setup/index.md#memory-optimization
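How much zram/zswap actually saves depends heavily on the compression algorithm and on how compressible your pages are. A quick stdlib sketch (zlib levels standing in for the real lzo-rle/zstd choices, which aren't in Python's stdlib) shows the spread across different page contents:

```python
import os
import zlib

PAGE = 4096  # typical page size

pages = {
    "zeroed": bytes(PAGE),                                     # untouched memory
    "text":   (b"GET /index.html HTTP/1.1\r\n" * 200)[:PAGE],  # repetitive data
    "random": os.urandom(PAGE),                                # encrypted/compressed data
}

for name, page in pages.items():
    fast = len(zlib.compress(page, 1))  # stand-in for a fast algorithm
    best = len(zlib.compress(page, 9))  # stand-in for a stronger algorithm
    print(f"{name:7s} fast={fast:5d}B best={best:5d}B of {PAGE}B")
```

Zeroed pages compress to almost nothing while random pages actually grow slightly, which is also why already-compressed data gains nothing from a second compression layer.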
Kernel same-page merging (KSM) (would be done at the hypervisor level) (ESXi also has an equivalent feature, called Transparent Page Sharing):
- Only really efficient if you have lots of identical virtual machines
- Used to overcommit (promise more RAM than you physically have)
- Dangerous, but a big cost saver. Many cheap VPS providers do this in order to save money. You can run four 8 GB VPSes on 24 GB of RAM and take a semi-safe bet that not all of the memory will be used.
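The overcommit bet can be put into rough numbers. This is a hypothetical sketch (the page counts and the 30% shared fraction are made up) that estimates KSM savings by hashing pages and counting unique ones, which is essentially how KSM decides what to merge:

```python
import hashlib

PAGE = 4096

def make_vm_pages(n_pages: int, vm_id: int, shared_fraction: float) -> list:
    # Hypothetical workload: a fraction of pages (kernel, libc, ...) is
    # byte-identical across VMs; the rest is unique per VM.
    shared = int(n_pages * shared_fraction)
    pages = [b"shared-os-page-%d" % i for i in range(shared)]
    pages += [b"vm%d-private-page-%d" % (vm_id, i) for i in range(n_pages - shared)]
    return [p.ljust(PAGE, b"\0") for p in pages]

vms = [make_vm_pages(n_pages=1000, vm_id=v, shared_fraction=0.3) for v in range(4)]

total = sum(len(v) for v in vms)   # pages promised to the guests
unique = len({hashlib.sha256(p).digest() for v in vms for p in v})  # pages the host must keep
print(f"promised={total} pages, resident after merging={unique} pages "
      f"({unique / total:.0%} of promised)")
```

With four identical 1000-page VMs sharing 30% of their pages, the host only needs about 78% of the promised memory, which is the same kind of arithmetic behind selling four 8 GB VPSes on a 24 GB box.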
In my opinion, the best setup is zram or zswap at the virtual machine level and kernel same-page merging at the hypervisor level, assuming you take into account and accept the marginal security risk and slightly weaker isolation that come with KSM. There isn't any point running zswap at two layers, because the hypervisor would just spend a lot of time trying to compress stuff that's already been compressed. Then KSM deduplicates memory across the guests. Although you may actually see worse savings overall if zram/zswap compression is only semi-deterministic and makes deduplication harder.
I agree with the other commenter about zram being weird with some workloads. I've heard of Blender (I think it was) interacting weirdly with zram, since zram is swap that lives in RAM, leaving less total memory available as normal RAM, whereas zswap compresses memory on its way to disk swap. If you really need to know, you gotta test.
Does the script attempt to run though? If linkedin runs this and other scripts it would explain why the site is so bloated.
Does this work on firefox? Does ublock origin block this?
Is this why LinkedIn eats so damn much RAM? It eats 300 MB for a single tab. I opened 3 LinkedIn tabs and it lagged my entire computer.
Sometimes copyrighted stuff gets dmca'd?
Fermi is just a custom client for discord/spacebar. It's not federated.