Selfhosted

Any experiences with a self-hosted assistant like the modern Google Assistant? Looking for something LLM-powered that is smarter than older assistants that would just try to call 3rd party tools directly and miss or misunderstand requests half of the time.

I'd like integration with a mobile app to use it from the phone and while driving. I see Home Assistant has an Android Auto integration. Has anyone used this, or another similar option? Any blatant limitations?

top 30 comments
[–] avidamoeba@lemmy.ca 3 points 3 hours ago

HA with a local LLM on Ollama. You can integrate the Android app as the default phone assistant. I don't think it can use a wake word on the phone though; I invoke it by holding the power button, like a walkie-talkie.

[–] grue@lemmy.world 11 points 15 hours ago (1 children)

I don't like the guy's breathless over-enthusiasm, but NetworkChuck has a video on how to integrate LLM-based voice assistants with Home Assistant using Whisper and Ollama.

[–] eager_eagle@lemmy.world 8 points 14 hours ago (1 children)

ah yes, I stopped watching the guy because of that and the clickbait, but he does make some interesting content sometimes.

[–] Mubelotix@jlai.lu 1 points 2 hours ago

He covers interesting subjects, but he believes his audience is dumb and unknowledgeable, which leads to this. He thinks he has to play the usual YouTube game to retain viewers, but he just ends up boring everyone.

[–] wildbus8979@sh.itjust.works 21 points 22 hours ago (1 children)

Home Assistant can absolutely do that. If you are OK with simple intent-based phrasing, it'll do it out of the box. If you want complex understanding and reasoning, you'll have to run a local LLM, like Llama, on top of it.
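For a sense of what "running a local LLM on top of it" means in practice, here's a minimal sketch of querying a local Ollama server directly; the port is Ollama's default, and the model name is an assumption, substitute whatever you've pulled:

```python
import requests

# Ask a local Ollama server (default port 11434) to interpret a request.
# The model name "llama3" is an assumption; use whatever you've pulled.
resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "llama3",
        "prompt": "Turn on the living room lights and dim them to 40%.",
        "stream": False,  # return one JSON object instead of a stream
    },
    timeout=120,
)
resp.raise_for_status()
print(resp.json()["response"])
```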

[–] eager_eagle@lemmy.world 4 points 21 hours ago (1 children)

yeah, that's what I'm looking for. Do you know of a way to integrate Ollama with HA?

[–] lyralycan@sh.itjust.works 3 points 21 hours ago* (last edited 20 hours ago) (1 children)

I don't think there's a straightforward way like a HACS integration yet, but you can access Ollama from the web with open-webui and save the page to your homepage.

Just be warned, you'll need a lot of resources depending on which model you choose and its parameter count (4B, 7B, etc.). Gemma3 4B uses around 3GB of storage, 0.5GB of RAM, and 4GB of VRAM to respond; it's a compromise, as I can't get replacement RAM, and it tends to be wildly inaccurate with large responses. The one I'd rather use, Dolphin-Mixtral 22B, takes 80GB of storage and a minimum of 17GB of RAM, the latter of which I can't afford to take from my other services.
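If you want to verify those memory numbers on your own hardware, Ollama's /api/ps endpoint reports how much of each loaded model sits in VRAM versus system RAM; a quick sketch, assuming a local server on the default port:

```python
import requests

# List the models Ollama currently has loaded and where they live:
# size is the total footprint, size_vram the portion on the GPU.
resp = requests.get("http://localhost:11434/api/ps", timeout=10)
resp.raise_for_status()
for m in resp.json().get("models", []):
    total = m["size"]
    vram = m.get("size_vram", 0)
    print(f"{m['name']}: {total / 1e9:.1f} GB total, "
          f"{vram / 1e9:.1f} GB in VRAM")
```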

[–] excursion22@piefed.ca 13 points 18 hours ago* (last edited 18 hours ago) (2 children)

There's an Ollama integration that adds it as a conversation agent.
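Once it's set as the conversation agent, you can sanity-check the whole pipeline from outside the UI through Home Assistant's REST API. A rough sketch; the URL and token are placeholders for your own instance:

```python
import requests

HA_URL = "http://homeassistant.local:8123"  # placeholder, use your instance
TOKEN = "YOUR_LONG_LIVED_ACCESS_TOKEN"      # created under your HA profile

# Send a natural-language request through HA's conversation pipeline.
resp = requests.post(
    f"{HA_URL}/api/conversation/process",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json={"text": "Turn off the kitchen lights", "language": "en"},
    timeout=60,
)
resp.raise_for_status()
print(resp.json())  # includes the agent's reply and the intent result
```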

[–] eager_eagle@lemmy.world 4 points 14 hours ago

ah, this puts it together and it's exactly what I was looking for, thanks

[–] hendrik@palaver.p3x.de 4 points 17 hours ago* (last edited 17 hours ago) (1 children)

And there's another custom component that integrates any server exposing an OpenAI-compatible API endpoint: https://github.com/jekalmin/extended_openai_conversation
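For what it's worth, "OpenAI-compatible" just means the server speaks the standard chat-completions API, so the stock openai Python client works if you repoint its base URL. A minimal sketch, assuming a local Ollama server (Ollama serves this API under /v1; the model name is an assumption):

```python
from openai import OpenAI

# Point the standard OpenAI client at any OpenAI-compatible local server.
# Ollama exposes this API under /v1; the api_key is required but unused.
client = OpenAI(base_url="http://localhost:11434/v1", api_key="ollama")

completion = client.chat.completions.create(
    model="llama3",  # assumption: whichever model you've pulled
    messages=[{"role": "user", "content": "Is the garage door open?"}],
)
print(completion.choices[0].message.content)
```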

[–] cymor@midwest.social 5 points 17 hours ago

Try ollama.com; you can download and try whatever you want. Quality mostly comes down to how much VRAM your video card has.
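If you want to check what's already on disk before pulling something new, Ollama's /api/tags endpoint lists downloaded models and their sizes; a quick sketch, assuming the default local port:

```python
import requests

# List locally downloaded models and their on-disk sizes.
resp = requests.get("http://localhost:11434/api/tags", timeout=10)
resp.raise_for_status()
for m in resp.json().get("models", []):
    print(f"{m['name']}: {m['size'] / 1e9:.1f} GB on disk")
```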

[–] hendrik@palaver.p3x.de 4 points 17 hours ago* (last edited 17 hours ago)

LiveKit can be used to build voice assistants, but it's more a framework for building an agent yourself than a ready-made solution.

[–] irotsoma@piefed.blahaj.zone 4 points 20 hours ago

You have to run an LLM of your own and link it if you want quality even close to Google's, but Home Assistant with Nabu Casa's "Home Assistant Voice Preview Edition" speakers is working well enough for me. I don't use it for much beyond controlling my home automation components, though. It's still very early tech, and it doesn't understand all that much unless you add a lot of your own configurations. I eventually plan to add an LLM, but even just running on the Home Assistant Yellow hardware with a Raspberry Pi Compute Module 5, it works OK for the basics, though there is a slight delay.

I haven't tried it, but Nabu Casa also offers a subscription service for the voice processing if you want something more robust and can't host your own LLM. That means sending your data out, though, which I'm not interested in even if they have good privacy policies: while I somewhat trust Nabu Casa's current business model and policies, being hosted in the US means it's susceptible to the current regime's police-state policies. Personally, I'm waiting for hardware costs to recover from the AI bubble before self-hosting an LLM.

Home Assistant can do that; the quality will really depend on what hardware you have to run the LLM. If you only have a CPU, you'll be waiting 20 seconds for a response, and the response could also be pretty poor if you have to run a small quantized model.
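To put an actual number on that wait, you can time a single generation. A small sketch using the ollama Python package (pip install ollama); the model name is an assumption, substitute whatever you run:

```python
import time
import ollama

# Time a single response to compare CPU-only vs. GPU-backed setups.
start = time.perf_counter()
reply = ollama.chat(
    model="llama3",  # assumption: substitute your own model
    messages=[{"role": "user", "content": "Turn on the porch light."}],
)
elapsed = time.perf_counter() - start
print(f"{elapsed:.1f}s: {reply['message']['content']}")
```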

[–] Kirk@startrek.website 2 points 21 hours ago (1 children)

Maybe things have improved, but the last time I tried the Home Assistant, er, assistant, it was garbage at anything other than the most basic commands spoken perfectly.

[–] avidamoeba@lemmy.ca 1 points 3 hours ago (1 children)

You gotta hook it up to a local LLM. Then it's boss.

[–] Kirk@startrek.website 2 points 1 hour ago (1 children)

Any pointers where to begin?

[–] avidamoeba@lemmy.ca 2 points 21 minutes ago* (last edited 20 minutes ago)

1. Install Ollama on a machine with a fast CPU or GPU and enough RAM. I currently use Qwen3, which takes 8GB of RAM and runs on an NVIDIA GPU; running it on CPU is also fast enough, and there's a 4GB version that's also decent for device control.
2. Add the Ollama integration in Home Assistant and connect it to the Ollama instance on the other machine.
3. Add Ollama as the conversation agent in Home Assistant's voice assistant.
4. Expose the HA devices you want to be controllable.

That's about it at a high level.