RandomlyRight

joined 2 years ago
[–] [email protected] 2 points 15 hours ago

Super cool! I'd be interested in how to fit this to my head shape too. It's now on my list of contenders for the concert

[–] [email protected] 1 points 1 day ago

Did anyone get this to run?

[–] [email protected] 3 points 4 days ago (2 children)

Oof I’m sorry, sounds super bad. It’s interesting because I think the frontal lobe is exactly what would make someone overthink stuff or worry too much. So, I’m still considering it ;)

[–] [email protected] 10 points 5 days ago (4 children)

Amazing, can you share where exactly I need to bonk my head for this?

[–] [email protected] 2 points 1 week ago

I've wanted to set this up for a while now. Guess it's time

 
[–] [email protected] 1 points 1 month ago (1 children)

I’ve read about this method in the GitHub issues, but to me it seemed impractical to have to maintain different models just to change the context size. That was the point where I started looking for alternatives
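
If I understood the workaround from the issues correctly, it boils down to registering a duplicate model whose only difference is a bigger context window. A minimal sketch of what that looks like, assuming the ollama CLI is installed; the base model name and context size are placeholders:

```python
# Sketch of the "separate model per context size" workaround (assumptions:
# the `ollama` CLI is installed and a base model, here "llama3", is pulled).
import pathlib
import subprocess
import tempfile

MODELFILE = """\
FROM llama3
PARAMETER num_ctx 16384
"""

with tempfile.TemporaryDirectory() as tmp:
    modelfile_path = pathlib.Path(tmp) / "Modelfile"
    modelfile_path.write_text(MODELFILE)
    # Registers a new model name that is just the base model with a larger context
    subprocess.run(
        ["ollama", "create", "llama3-16k", "-f", str(modelfile_path)],
        check=True,
    )
```

You end up with one extra model entry per context size you want, which is exactly the part that felt impractical to me.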

 

I’ve been scouring the web to find a very specific config for a mechanical keyboard. It should be full size, have Hall effect (HE) switches, and have a silver aluminum case. However, the only one I’ve found is the GMMK 3 Pro when you custom order it, but at 470€ without any switches or keycaps it’s very expensive.

Building one myself would definitely be an option, but I’m not sure there even are any HE 100% PCBs, and it seems the case would have to be custom CNC’d because those don’t exist either.

Any pointers would be appreciated!

[–] [email protected] 1 points 1 month ago (3 children)

It was multiple models, mainly in the 32B to 70B range

[–] [email protected] 4 points 1 month ago

There are many projects out there that optimize the speed significantly. Ollama is unbeaten in convenience though

[–] [email protected] 3 points 1 month ago (5 children)

Yeah, but there are many open issues on GitHub about these settings not working right. I’m using the API and just couldn’t get it to work. I sent a request to generate a JSON file, and it never produced one longer than about 500 lines. With the same model on vLLM, it worked instantly and generated about 2000 lines
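
For reference, the kind of request I was sending looked roughly like this (the model name and sizes are placeholders); per-request options like num_ctx and num_predict are exactly what I couldn’t get Ollama to respect:

```python
# Rough sketch of the request against Ollama's /api/generate endpoint.
# Model name, prompt, and sizes are placeholders.
import requests

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "qwen2.5:32b",        # placeholder; it was one of the 32B-70B models
        "prompt": "Generate the JSON config described above ...",
        "format": "json",              # ask Ollama to constrain output to JSON
        "stream": False,
        "options": {
            "num_ctx": 16384,          # per-request context window
            "num_predict": 8192,       # per-request max tokens to generate
        },
    },
    timeout=600,
)
print(resp.json()["response"])
```

With vLLM I sent an equivalent request against its OpenAI-compatible endpoint and got the full output.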

 

I'm currently shopping around for something a bit faster than Ollama, and also because I could not get it to use a different context and output length, which seems to be a known and long-ignored issue. Somehow everything I’ve tried so far misses one or more critical features, like:

  • "Hot" model replacement, so loading and unloading models on demand
  • Function calling
  • Support of most models
  • OpenAI API compatibility (to work well with Open WebUI; see the quick check sketched below)

I'd be happy about any recommendations!
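
For the OpenAI API compatibility point, this is roughly the sanity check I’d run against a candidate server before wiring it into Open WebUI. The base URL and model name below are placeholders; use whatever the server actually exposes:

```python
# Minimal check of an OpenAI-compatible endpoint on a local server
# (base_url and model are assumptions, not any specific project's defaults).
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed")

chat = client.chat.completions.create(
    model="local-model",
    messages=[{"role": "user", "content": "Reply with a single word."}],
)
print(chat.choices[0].message.content)
```

If that round-trips, Open WebUI can usually talk to the same endpoint as an OpenAI-compatible connection.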

[–] [email protected] 10 points 2 months ago

Nothing beats a salami sandwich, honestly

[–] [email protected] 85 points 2 months ago (8 children)

Yo, I think we Path of Exile gamers made it pretty clear he’s not one of us

[–] [email protected] 2 points 2 months ago (1 children)

Take a look at NVIDIA Project Digits. It’s supposed to release in May for $3,000 and will be kind of the only sensible way to host LLMs then:

https://www.nvidia.com/en-us/project-digits/

139
submitted 6 months ago* (last edited 6 months ago) by [email protected] to c/[email protected]
 

Finally, the ultimate weapon against boredom while waiting

10
Placebo smile (sh.itjust.works)
submitted 7 months ago* (last edited 7 months ago) by [email protected] to c/[email protected]
 

I don’t know why, but somehow these two words summed up 50% of my life

 
 
 