Lem453

joined 2 years ago
[–] Lem453@lemmy.ca 1 points 4 weeks ago* (last edited 4 weeks ago) (2 children)

I don't think this is the same thing.

Opencloud.eu seems to have been started so they could offer hosting services to EU clients and essentially compete with MS Teams and others. You can't download and run their version directly. This isn't a fork in the way that owncloud > nextcloud was a change in governance.

OCIS seems to be a great open source product that I'm also hoping to switch over to. I've been trying to get it connected to my authentik SSO (which I already run) and just need to figure out how to get admin users in authentik to show up as admin users in OCIS.

That's the last thing I need to migrate over fully.

I used to be on ownCloud, then switched to Nextcloud at the fork. In all that time, across 3 different servers, Nextcloud has always been the most brittle app I've hosted.

[–] Lem453@lemmy.ca 3 points 1 month ago

This one seems to be the furthest along

https://furilabs.com/

[–] Lem453@lemmy.ca 4 points 1 month ago

It's literally not possible to have a top-tier phone unless the company can pre-order something like 10 million chips directly from TSMC. No small company will ever be able to do this.

If you want a top-tier phone from a non-mega-corp, you will never get a phone. You have to choose some sacrifice for freedom, or stick to the mega corps that will always seek to control you.

[–] Lem453@lemmy.ca 1 points 2 months ago (1 children)

Almost all banking apps are moving toward using the secure Android attestation layer (Play Integrity), which means they will never work on anything that can't fully emulate it. Even on GrapheneOS with Google apps installed in a separate profile, they sometimes don't work. If banking apps are what you are waiting for, that's already very hard today and will only become less likely to work over time.

[–] Lem453@lemmy.ca 2 points 2 months ago* (last edited 2 months ago)

I definitely don't need to, but it also costs nothing, and the retention policy only keeps the 5-minute backups for an hour, then hourly backups for a day, daily backups for a week, etc., up to 2 years.

[–] Lem453@lemmy.ca 13 points 2 months ago

I think I remember people saying they got it working with this

https://github.com/winapps-org/winapps

That being said, software like Fusion 360 changes quite often, and even if it works now, a future update might break compatibility.

FreeCAD has come a long way with the 1.0 release, and the 1.1 release also brings lots of good quality-of-life improvements.

[–] Lem453@lemmy.ca 3 points 2 months ago* (last edited 2 months ago) (3 children)

Exactly this. I have hourly Borg backups, and since my install is entirely on a ZFS array, I also have ZFS auto-snapshots every 5 minutes with a retention policy. It adds almost zero CPU or memory overhead and means I can do just about anything via the command line and revert it with ease.
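For anyone wanting to replicate this, a minimal sketch of the snapshot side using the zfs-auto-snapshot script in /etc/cron.d (the dataset name, labels, and keep counts here are illustrative, not my exact policy):

# Every 5 minutes, keep 12 (one hour's worth); hourly, keep 24; daily, keep 7.
*/5 * * * * root zfs-auto-snapshot --quiet --syslog --label=frequent --keep=12 tank/data
0 * * * *   root zfs-auto-snapshot --quiet --syslog --label=hourly   --keep=24 tank/data
0 0 * * *   root zfs-auto-snapshot --quiet --syslog --label=daily    --keep=7  tank/data

Reverting is then a one-liner with zfs rollback against whichever snapshot you want.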

That being said, I still don't auto-update. Unless I'm having an issue, I just sit down every few months and update everything manually, because if it's already working, why update? And if you want the newest features, how will you even know what they are if you don't at least glance at the release notes?

[–] Lem453@lemmy.ca 1 points 2 months ago

Traefik can reverse proxy just about anything, including SSH.

That being said, I don't. For stuff like SSH I connect with WireGuard first, then SSH. Stuff like Immich I expose directly behind Traefik so I can share images with others. Stuff like Vaultwarden is behind Traefik but internal only, so you need WireGuard first and then you connect to vaultwarden.local.domain.com.
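For the SSH case, a rough sketch of a Traefik TCP passthrough in a dynamic config file (the entrypoint name and backend address are placeholders, and you'd also need a matching 'ssh' entryPoint in the static config):

tcp:
  routers:
    ssh-router:
      entryPoints:
        - ssh
      rule: "HostSNI(`*`)"    # plain TCP with no TLS, so match everything
      service: ssh-service
  services:
    ssh-service:
      loadBalancer:
        servers:
          - address: "192.168.1.10:22"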

[–] Lem453@lemmy.ca 3 points 2 months ago (3 children)

This seems to be the closest

https://furilabs.com/

[–] Lem453@lemmy.ca 4 points 2 months ago (1 children)

FreshRSS, self-hosted. Just navigate to the website in your browser, install it to Android via the browser's 'install app' option, and assign the app to a gesture.

Now I swipe from the left and my RSS opens. Fully self-hosted, with no tracking beyond the websites you visit.

[–] Lem453@lemmy.ca 4 points 2 months ago* (last edited 2 months ago)

I'm on version 1.143.1. I skipped all the beta timeline stages and updated from 1.135, I think.

About 30k photos and 2k videos

The web interface was great; the Android app (Pixel 8) was very slow. Even local assets were slow.

Since the update it's way faster. It feels really good: responsive, low latency. Sync and backups have been no issue at all.

Sync on Android turned itself off after updating, but I turned it back on and selected the same folders to watch; it processed for a few minutes and then everything continued to work with no issues.

On the previous version, sync was pretty good. Sometimes it didn't trigger as a background process and I had to open the app manually, but it worked. The new sync also works well, though I haven't uploaded a large number of things yet.

[–] Lem453@lemmy.ca 1 points 3 months ago (2 children)

The main feature I want is portion scaling, so I can type in the number of servings and everything gets multiplied. Is that possible in Obsidian via a plugin, or with MkDocs?

12
submitted 9 months ago* (last edited 9 months ago) by Lem453@lemmy.ca to c/selfhosted@lemmy.world
 

I'm trying to set up ownCloud (Infinite Scale) with single sign-on using authentik. I have it working for normal users. There is a feature that allows automatic role assignment, so that admin users in authentik become admin users in ownCloud.

This is described here: https://doc.owncloud.com/ocis/next/deployment/services/s-list/proxy.html#automatic-role-assignments.

In this document, they describe having attributes like

- role_name: admin
  claim_value: ocisAdmin

The problem is that I don't know how to input this information into an authentik user. As a result, ownCloud gives me this error:

ERR Error mapping role names to role ids error="no roles in user claims" line=github.com/owncloud/ocis/v2/services/proxy/pkg/userroles/oidcroles.go:84 request-id=5a6d0e69-ad1b-4479-b2d9-30d4b4afb8f2 service=proxy userid=05b283cd-606c-424f-ae67-5d0016f2152c

Any authentik experts out there?

I tried putting this under the attributes section of the user profile in authentik:

role_name: admin
claim_value: ocisAdmin

It doesn't work, and it won't let me format the YAML like the documentation, where claim_value sits under the same list entry as role_name.
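In case it helps anyone hitting the same wall: from the error, the proxy seems to look for a roles claim in the OIDC token rather than a user attribute. In authentik terms, that would be a custom Scope Mapping attached to the OAuth2/OIDC provider. A sketch of what the mapping expression might look like (the group name and the idea of gating on group membership are my assumptions, untested against oCIS):

# authentik Scope Mapping expression (create a scope, e.g. "roles", attach it to
# the oCIS provider, and make sure that scope is actually requested).
# The group name "ocis-admins" is illustrative.
if request.user.ak_groups.filter(name="ocis-admins").exists():
    roles = ["ocisAdmin"]
else:
    roles = ["ocisUser"]
return {"roles": roles}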

 

One of the things I like about the YouTube app is the infinite scroll on the recommendations screen.

One of the issues I find with apps like NewPipe and Grayjay is that the recommendations are very limited compared to the actual YouTube app.

I'm not sure if there's a better forum to ask this, but does anyone know if the Grayjay app allows for unlimited scrolling?

37
submitted 1 year ago* (last edited 1 year ago) by Lem453@lemmy.ca to c/selfhosted@lemmy.world
 

I have a ZFS pool that I made on Proxmox, and I noticed an error today. I think the issue is that the drives got renamed at some point and now it's confused. I have 5 NVMe drives in total: 4 are supposed to be in the ZFS array (the CT1000s) and the 5th, a Samsung drive, is the system/Proxmox install drive, not part of ZFS. It looks like the numbering changed, so the drive that used to be in the array labeled nvme1n1p1 is actually the Samsung drive now, and the drive that is supposed to be in the array is now called nvme0n1.

root@pve:~# zpool status
  pool: zfspool1
 state: DEGRADED
status: One or more devices could not be used because the label is missing or
        invalid.  Sufficient replicas exist for the pool to continue
        functioning in a degraded state.
action: Replace the device using 'zpool replace'.
   see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-4J
  scan: scrub repaired 0B in 00:07:38 with 0 errors on Sun Oct 13 00:31:39 2024
config:

        NAME                     STATE     READ WRITE CKSUM
        zfspool1                 DEGRADED     0     0     0
          raidz1-0               DEGRADED     0     0     0
            7987823070380178441  UNAVAIL      0     0     0  was /dev/nvme1n1p1
            nvme2n1p1            ONLINE       0     0     0
            nvme3n1p1            ONLINE       0     0     0
            nvme4n1p1            ONLINE       0     0     0

errors: No known data errors

Looking at the devices:

 nvme list
Node                  Generic               SN                   Model                                    Namespace Usage                      Format           FW Rev
--------------------- --------------------- -------------------- ---------------------------------------- --------- -------------------------- ---------------- --------
/dev/nvme4n1          /dev/ng4n1            193xx6A         CT1000P1SSD8                             1           1.00  TB /   1.00  TB    512   B +  0 B   P3CR013
/dev/nvme3n1          /dev/ng3n1            1938xxFF         CT1000P1SSD8                             1           1.00  TB /   1.00  TB    512   B +  0 B   P3CR013
/dev/nvme2n1          /dev/ng2n1            192xx10         CT1000P1SSD8                             1           1.00  TB /   1.00  TB    512   B +  0 B   P3CR010
/dev/nvme1n1          /dev/ng1n1            S5xx3L      Samsung SSD 970 EVO Plus 1TB             1         289.03  GB /   1.00  TB    512   B +  0 B   2B2QEXM7
/dev/nvme0n1          /dev/ng0n1            19xxD6         CT1000P1SSD8                             1           1.00  TB /   1.00  TB    512   B +  0 B   P3CR013

Trying to use the zpool replace command gives this error:

root@pve:~# zpool replace zfspool1 7987823070380178441 nvme0n1p1
invalid vdev specification
use '-f' to override the following errors:
/dev/nvme0n1p1 is part of active pool 'zfspool1'

where it thinks nvme0n1 is still part of the array even though the zpool status output shows that it's not.

Can anyone shed some light on what is going on here? I don't want to mess with it too much since it does work right now, and I'd rather not start again from scratch (from backups).

I ran smartctl -a /dev/nvme0n1 (and likewise on all the other drives) and there don't appear to be any SMART errors, so all the drives seem to be healthy.

Any idea on how I can fix the array?
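(One approach that's commonly suggested for this exact symptom, sketched here but untested against this pool: export the pool and re-import it by stable device IDs, so the kernel's nvmeX numbering stops mattering.)

zpool export zfspool1
zpool import -d /dev/disk/by-id zfspool1
zpool status    # vdevs should now show by-id names that survive renumbering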

 

The topic of self-hosted cloud software comes up often, but I haven't seen anyone mention ownCloud Infinite Scale (the rewrite in Go).

I started my cloud experience with ownCloud years ago. Then there was a schism, and almost all the active devs left for the Nextcloud fork.

I used Nextcloud from its inception until last year, but like many others I always found it brittle (easy to break something) and half-baked (features always seemed to be at 75% of what you want).

As a result, I decided to go with Seafile and stick to the Unix philosophy: get an app that does one thing very well, rather than a mega-app that tries to do everything.

Seafile does this very well. Super fast, works with single sign-on, etc. No bloat.

Then just the other day I discovered that ownCloud has a full rewrite. No PHP, no Apache. Check the GitHub repo: multiple active devs with lots of activity over the last year. The project seems stronger than ever and aims to fix the primary issues of Nextcloud/ownCloud PHP. It's also designed for cloud deployment, so it works well with Docker and should be easy to configure via environment variables instead of config files mapped into the container.
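To give a sense of how light the deployment is, a minimal single-container trial looks roughly like this (going from memory of the upstream quickstart, so treat the env vars and paths as assumptions to verify against the docs):

# Generate a config once, then start oCIS; OCIS_INSECURE skips TLS for a local test.
docker run --rm -it -v ocis-config:/etc/ocis -v ocis-data:/var/lib/ocis \
  owncloud/ocis init
docker run -d -p 9200:9200 -v ocis-config:/etc/ocis -v ocis-data:/var/lib/ocis \
  -e OCIS_INSECURE=true -e OCIS_URL=https://localhost:9200 \
  owncloud/ocis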

Anyways, the point of this thread is:

  1. If you've never heard of it (like me until recently), check it out.
  2. If you have used it, please post your experiences compared to Nextcloud, Seafile, etc.
 

Technically this isn't actually a Seafile issue; however, the upload client really should have the ability to run checksums to compare the original file against the file that was synced to the server (or another device).

I run Docker in a VM that is hosted by Proxmox. Proxmox manages a ZFS array which contains the primary storage that the VM uses. Instead of making the VM disk 1 TB+, the VM disk is relatively small (64 GB) since it only holds the OS, and the Docker containers mount a folder on the ZFS array itself, which is several TBs.

This had all been going really well with no issues, until yesterday, when I tried to access some old photos and they would only load halfway: the top part would be there but the bottom half would be grey/missing.

This was seemingly random across numerous photos; some were normal and others had missing sections. Digging deeper, some files were also corrupt and would not open at all (PDFs, etc.).

Badness alert....

All my backups come from the server, so if the server data has been corrupt for a long time, then all the backups would be corrupt as well. All the files on the Seafile server were originally synced from my desktop, so when I open a file locally on the desktop it works fine; only when I try to open the file through Seafile does it fail. Also, not all the files were failing, only some. Some old, some new. Even the file sizes didn't consistently predict whether a file would work or not.

It's now at the point where I can take a photo from my desktop, drag it into a Seafile library via the browser, and it shows a successful upload, but then previewing the file doesn't work, and downloading that very same file back again yields about 44 kB regardless of the original file size.

Google/DDG... can't find anyone who has the same issue... very bad.

Finally I noticed an error in MariaDB: "memory pressure, can't write to disk" (paraphrased).

OK, that's odd. The RAM was fine, which is what I first suspected. Disk space couldn't be the issue either, since the ZFS array is only 25% full and both MariaDB and Seafile only have volumes that are on the ZFS array. There are no other volumes... or are there???

Finally, in Portainer, I checked the volumes that exist. Seafile only had the two expected ones, data and database. Then I saw hundreds of unused volumes.

A quick Google revealed docker volume prune, which deleted many GBs' worth of old, unused volumes.
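For reference, the commands look like this (it's prune, not "purge"; note that on newer Docker releases a plain prune only removes anonymous volumes unless you add --all):

# List dangling (unused) volumes first, then remove them.
docker volume ls -f dangling=true
docker volume prune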

By this point I had already created and recreated the Seafile containers a hundred times with test data and simplified the docker compose as much as possible, but now it started working right away. MariaDB starts cleanly, and I can copy a file in from the web interface or the client and it works correctly.

From there I went through the process of setting up my original docker compose with all the extras I had, remade my user account (luckily it's just me right now), set up the sync client, and started copying the data from my desktop back to the server.

I've got to say, this was scary as shit. My setup uploads files from desktop, laptop, phone, etc. to the server via Seafile; from there, Borg takes incremental backups of the data and sends them offsite. The second I realized that the local data on my computer was fine but the server data was unreliable, I immediately knew that even my backups were unreliable.

IMHO this is a massive problem. Seafile will happily 'upload' a file and report success, but then trying to redownload the file results in an error because it doesn't actually exist.

Things that really should be present to avoid this:

  1. The client should have the option to run a quick checksum on each file after upload and compare the original to the uploaded copy to ensure data consistency. There should also be an option to run this check after the fact and output a list of inconsistent files.
  2. The default docker compose should run MariaDB with a healthcheck, so that when it starts throwing errors while the interface still runs, someone can be alerted (see the sketch below).
  3. There needs to be some kind of reminder to check in on unused Docker volumes.
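On point 2, a sketch of what that healthcheck could look like in a compose file (service name, image tag, and password are placeholders; healthcheck.sh ships in the official MariaDB images):

services:
  mariadb:
    image: mariadb:10.11
    environment:
      MARIADB_ROOT_PASSWORD: changeme   # placeholder
    healthcheck:
      test: ["CMD", "healthcheck.sh", "--connect", "--innodb_initialized"]
      interval: 30s
      timeout: 5s
      retries: 3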
 

Looking for a self-hosted YouTube front end with an automatic downloader, so you could subscribe to a channel, for example, and it would automatically download all its videos and any new uploads.

Jellyfin might be able to handle the front-end part, but I'm not sure about automatic downloads and proper file naming and metadata.
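Even without a dedicated app, the downloader half can be approximated with yt-dlp on a cron job; a sketch, with the paths, channel URL, and output template purely illustrative:

# Download anything not already in the archive file, with media-server-friendly
# names and embedded metadata/thumbnails.
yt-dlp --download-archive /media/yt/archive.txt \
  -o "/media/yt/%(channel)s/%(title)s [%(id)s].%(ext)s" \
  --embed-metadata --embed-thumbnail \
  "https://www.youtube.com/@SomeChannel/videos"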

 

I'm wondering if I can get a device that bridges Z-Wave over Ethernet/WiFi and connect it to my Home Assistant setup.

Basically, I have a Home Assistant setup in my house. I want to add a few simple things at my parents' place, but I want it all to be on the same HA instance.

On the router at my parents' place, I can install WireGuard to connect it to my LAN, so my parents' network effectively becomes part of my LAN.

I'm looking for a device that can talk to Z-Wave devices and then send that info over the LAN to my Home Assistant. Does such a thing exist? Thanks.
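(A sketch of one pattern that should fit, assuming a small always-on box like a Pi at the remote site with a Z-Wave USB stick; the device path and addresses are placeholders: run Z-Wave JS UI there and point Home Assistant's Z-Wave integration at its WebSocket server over the WireGuard link.)

# On the remote box: Z-Wave JS UI, WebSocket server on 3000, web UI on 8091.
docker run -d --name zwave-js-ui --restart unless-stopped \
  --device=/dev/serial/by-id/usb-YOUR-ZWAVE-STICK:/dev/zwave \
  -p 3000:3000 -p 8091:8091 \
  -v zwave-config:/usr/src/app/store \
  zwavejs/zwave-js-ui:latest
# In Home Assistant: add the Z-Wave integration pointing at ws://<remote-lan-ip>:3000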

 

By local control, I mean: if the Z-Wave hub is down, will the switch still work as a dumb switch and turn the lights on/off?

This is the product I would like to get, but I can't find out whether it allows 'dumb switch' operation. Does anyone have experience with these? https://byjasco.com/ultrapro-z-wave-in-wall-smart-switch-with-quickfit-and-simplewire-white

Thanks!
