KayOhtie
@forestbeasts There's also the Legacy Runtime, that's still native to Linux. I have to do that for some things instead of the 'Scout' or normal one.
@ohlaph Home Assistant OS on a Dell Micro with an i5-6500T in it and 16 GB of RAM.
Runs extremely well, just slow for ESPHome builds, so I don't use that add-on anymore. And while TTS is plenty fast, I couldn't use anything larger than tiny-int8 or base-int8 for faster-whisper. I offloaded that to my server with my old RTX 2070 in it, which can run the turbo model for speech-to-text.
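If anyone wants to do the same offload, here's roughly the shape of it — a minimal sketch assuming wyoming-faster-whisper is pip-installed on the GPU box with a working CUDA setup. The flag names and port are my assumptions from memory, not gospel; check `--help` before trusting them.

```shell
# Hypothetical sketch: serve faster-whisper over the Wyoming protocol so
# Home Assistant's Wyoming integration can reach this box over the LAN.
# Assumes `pip install wyoming-faster-whisper` and CUDA already working.
# 10300 is the conventional Wyoming STT port; flags are assumptions.
python3 -m wyoming_faster_whisper \
  --uri tcp://0.0.0.0:10300 \
  --model turbo \
  --device cuda \
  --language en
```

Then on the Home Assistant side you just add the Wyoming Protocol integration pointed at the server's IP and port 10300, and pick it as the speech-to-text engine in your voice pipeline.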
But no Ollama or similar, fuck using those. I've only ever gotten uselessness out of them and I ain't paying someone else to use theirs to do the same thing just with slightly fewer incidents of "I didn't find a device called <the thing you said but slightly out of order and now the exact same as it's actually called>".