tpWinthropeIII

joined 2 years ago
[–] tpWinthropeIII@lemmy.world 2 points 1 week ago

I like the sky graphic.

[–] tpWinthropeIII@lemmy.world 2 points 2 months ago

Pocketpal is what I run. It works well on Android at least.

https://play.google.com/store/apps/details?id=com.pocketpalai

[–] tpWinthropeIII@lemmy.world 3 points 2 months ago

Not exactly. Digits still uses a Blackwell GPU; it just uses unified RAM as virtual VRAM instead of dedicated VRAM. The GPU is probably a down-clocked Blackwell. Speculation I've seen is that these are defective Blackwells being repurposed, which is good for us. By defective I mean they can't run at full speed, are projected to have the cracking-die problem, etc.

[–] tpWinthropeIII@lemmy.world 4 points 2 months ago (3 children)

The new $3,000 NVidia Digits has 128 GB of fast RAM in an Apple-M4-like unified-memory configuration, reportedly. NVidia claims it is at least twice as fast as an Apple stack at inference. Four of them stacked can run a 405B model, again according to NVidia.

In my case I want the graphics power of a GPU, and its VRAM, for other purposes as well, so I'd rather buy a graphics card. But regarding a 90B model, I do wonder if it's possible with two A6000s (48 GB each) and a 3-bit quant.
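Quick back-of-the-envelope in Python; the ~10% overhead figure for KV cache and buffers is my own assumption, not a measured number:

```python
def quant_size_gb(params_billion: float, bits_per_weight: float) -> float:
    """Approximate weights-only size in GB: parameters times bits per weight."""
    return params_billion * 1e9 * bits_per_weight / 8 / 1e9

weights = quant_size_gb(90, 3.0)  # ~33.8 GB for a 90B model at 3 bpw
total = weights * 1.1             # ~37 GB with ~10% overhead (assumed)
print(f"~{total:.0f} GB needed vs. 96 GB across two A6000s")
```

So on paper it fits with plenty of headroom; the real question is whether the model still holds up at 3 bits.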

1
submitted 6 months ago* (last edited 6 months ago) by tpWinthropeIII@lemmy.world to c/usenet@lemmy.world

I've been reading recently that bulknews' SSL certificate expired and people have not been able to connect securely. They could connect only if they disabled secure connections.

That was a little more than a month ago and, crickets, I've seen no resolution.

If you are on bulknews, can you tell me if it is up and running for you and on a secure connection?
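For anyone who wants to check it themselves, here's a minimal Python sketch; the hostname is a placeholder, since I don't know their exact server address:

```python
import socket
import ssl
from datetime import datetime

HOST = "news.example.com"  # placeholder: substitute the provider's NNTP hostname
PORT = 563                 # standard NNTPS (NNTP over TLS) port

ctx = ssl.create_default_context()
try:
    with socket.create_connection((HOST, PORT), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=HOST) as tls:
            cert = tls.getpeercert()
            not_after = datetime.strptime(cert["notAfter"], "%b %d %H:%M:%S %Y %Z")
            print(f"Certificate OK, expires {not_after}")
except ssl.SSLCertVerificationError as e:
    # An expired (or otherwise invalid) certificate fails the handshake here.
    print(f"Certificate verification failed: {e.verify_message}")
```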

[–] tpWinthropeIII@lemmy.world 2 points 6 months ago (1 children)

I tried Mistral Nemo 12B Instruct this morning. It's actually quite good. I'd say it's close to Dolphin Mixtral 8x7B, which is a monster in size (about 45 or 50 GB) and very smart. So I'd say Arli is a good deal: Mistral Nemo 12B for $4 or $5 per month, and privacy, so they claim.

If you don't mind logging for some questions, you can get access to very good, if not the best, models at lmsys.org without monetary cost. Just go to the "Arena", where you contribute a blind evaluation by voting which of two answers is better. I often get models like GPT-4o, Claude 3.5 Sonnet by Anthropic, Google's best, etc., and at other times many good 70B models. You see two answers at once and vote for your favorite; in return, you get "free" access.

Be careful with AMD GPUs, as they are not as well supported for local AI, though support is gaining ground. Some people are doing it, but it takes effort and hassle, from what I've read.

[–] tpWinthropeIII@lemmy.world 2 points 6 months ago (3 children)

I know that people are using P40 and P100 GPUs. These are outdated but still work with some software stacks/applications. The P40, once very cheap for the amount of VRAM, is no longer as cheap as it was, probably because folks have been picking them up for inference.

I'm getting a lot done with an NVidia GTX 1080, which has only 8 GB of VRAM. I can run a quant of Dolphin Mixtral 8x7B and it works well enough. It takes minutes to load, almost too long for me, but after that I get 3-5 TPS with an acceptable delay between questions.
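That works via partial offload: put as many layers as fit on the GPU and keep the rest in system RAM. A minimal sketch with llama-cpp-python; the GGUF filename and layer count are placeholders you'd tune to your own card:

```python
from llama_cpp import Llama

# Partial offload: some layers on the 8 GB GPU, the rest in system RAM.
llm = Llama(
    model_path="dolphin-mixtral-8x7b.Q3_K_M.gguf",  # placeholder filename
    n_gpu_layers=12,  # tune upward until you run out of VRAM
    n_ctx=2048,       # context window; larger costs more memory
)

out = llm("Q: Why does partial offload help on small GPUs?\nA:", max_tokens=128)
print(out["choices"][0]["text"])
```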

I can even run Miqu quants at 2 or 3 bits. It's super smart even at these low quant levels.

Llama 3.1 8B runs great on this 8 GB 1080 at Q4_K_M, and also at Q5_K_M or Q6_K_M. I believe I could even run Gemma 9B at 8 bpw (Q8_0); f16 would be 16 bpw and too big for the card.
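Rough sizes, for anyone wondering what fits; the bpw values are approximate (Q4_K_M averages roughly 4.8 bits per weight):

```python
def model_gb(params_billion: float, bits_per_weight: float) -> float:
    """Weights-only size in GB; KV cache and buffers add a bit more."""
    return params_billion * bits_per_weight / 8

print(model_gb(8, 4.8))   # Llama 3.1 8B @ Q4_K_M ≈ 4.8 GB -> fits in 8 GB
print(model_gb(9, 8.0))   # Gemma 9B @ Q8_0 ≈ 9 GB -> needs partial offload
print(model_gb(9, 16.0))  # Gemma 9B @ f16 ≈ 18 GB -> too big for this card
```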


For NSFW images, in card mode where images are shown in full size, the default level of blur allows others to see the essence of the image. It would be nice to be able to increase the level of blur further. Perhaps three levels would be good.