I don't get why a group of users who are willing to run their own LLMs locally, and who don't want to rely on centralized corporations like OpenAI or Google, prefer to discuss it on a centralized site like Reddit.
Compile llama.cpp, download a small GGML LLM model, and you'll have a fairly intelligent assistant running on your phone.
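For anyone who hasn't tried it, the steps look roughly like this (the model filename is just a placeholder; grab whichever small GGML-format model you like):

```shell
# Clone and build llama.cpp (on Android this would run inside Termux)
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
make

# Put a small GGML-format model file in models/, then run a prompt.
# "model.ggml" below is a placeholder for whatever model you downloaded.
./main -m models/model.ggml -p "Hello, who are you?"
```

This is just a sketch of the build-and-run flow, not exact current commands; check the repo's README for the up-to-date build instructions for your platform.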