hrrrngh

joined 2 years ago
[–] hrrrngh@awful.systems 7 points 2 weeks ago (2 children)

context: I wanted to know whether the open source projects currently being spammed with PRs would be safe from people running slop models on their own computers if they weren't able to use Claude or whatever. Answer: yes; these things are still terrible

but while I was searching I found this comment and the fact that people hated it is so funny to me. It's literally the person who posted the thread. less thinking and words, more hype links please.

conversation: https://www.reddit.com/r/LocalLLaMA/comments/1qvjonm/first_qwen3codernext_reap_is_out/o3jn5db/

32k context? is that usable for coding?

(OP's response, sitting at a steady -7 points)

LLMs are useless anyway so, okay-ish, depends on your task obviously

If LLMs were actually capable of solving actual hard tasks, you'd want as much context as possible

A good way to think about it is that tokens compress text roughly 1:4. If you have a 4MB codebase, it would theoretically need 1M tokens.

That's one way to start, then we get into the more debatable stuff...

Obviously text repeats a lot and doesn't always encode new information each token. In fact, it's worse than that, as adding tokens can _reduce_ information contained in text, think inserting random stuff into a string representing dna. So to estimate how much ctx you need, think how much compressed information is in your codebase. That includes stuff like decisions (which LLMs are incapable of making), domain knowledge, or even stuff like why does double click have 33ms debounce and not 3ms or 100ms in your codebase which nobody ever wrote down. So take your codebase, compress it as a zip at normal compression level, and then think how large the output problem space is, shrink it down quadratically, and you have a good estimate of how much ctx you need for LLMs to solve the hardest problems in your codebase at any given point during token generation

*emphasis added by me

[–] hrrrngh@awful.systems 11 points 3 months ago* (last edited 3 months ago)

https://superuser.com/questions/1930445/can-i-delete-the-chromes-optguideondevicemodel-safely-its-taking-up-4gb/1930446#1930446

Can I delete the Chrome's OptGuideOnDeviceModel safely? It's taking up 4GB

. . .

I also found mentions of a bunch of various flags you can potentially disable to turn the whole feature off, e.g. chrome://flags/#optimization-guide-on-device-model - but I've seen at least 5 other ones mentioned in several sources, with various people claiming for each that they don't work . . .

Now Chrome can hog your VRAM too. Yay

Don't worry if you only have 8GB and need the other half for anything: Chrome will probably relinquish it. This is very intelligent, as all the browser has to do is simply load another 4GB file from disk the next time you do anything.

[–] hrrrngh@awful.systems 7 points 3 months ago

https://www.psychologytoday.com/us/blog/harnessing-hybrid-intelligence/202511/the-psychology-of-collective-abandonment

Article I found randomly because... I was trying to add the Psychology Today blog to uBlacklist so I stop seeing their articles lol

It lost me a little towards the end, but it's heartwarming to imagine a world where tech fascists screaming about the Antichrist have a few* billion dollars less and actual charities have a few more.

*where few = [3, ∞)

[–] hrrrngh@awful.systems 7 points 3 months ago

Many of these tools are useful, and don’t use generative AI – that is, AI that creates – but use AI to summarize texts or alter images.

Oh no, has this become the common definition of generative AI? I'm guessing some AI company must have tried to launder the name and make it seem less bad. Both of those examples are clear-cut generative AI.

[–] hrrrngh@awful.systems 7 points 3 months ago

I finally became fed up with it and got around to writing a uBlock Origin filter that removes the AI overview, the AI results in the "People also ask" section, and especially the AI results in the "Things to know" section that usually covers health and drug information. There is literally so much AI bloat taking up the search page it's crazy.
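For anyone wanting to do the same, the filters look roughly like this. The selectors below are placeholders I made up to show the cosmetic-filter syntax; the real ones have to be pulled from Google's live markup with uBlock's element picker, and they change whenever Google shuffles the page:

```
! Hypothetical selectors - use the element picker to find the real ones
google.com##.hypothetical-ai-overview-container
google.com##div[data-attrid="hypothetical-things-to-know"]
```

Each `##` rule hides every element on the listed domain that matches the CSS selector after it; they go under "My filters" in the uBlock Origin dashboard.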

[–] hrrrngh@awful.systems 5 points 3 months ago

Fortunately the EA side is a little more on the nose sometimes.

One of my first wakeup calls was they offered to mail me a book for free🚩🚩🚩 (it was from 80,000 hours)

[–] hrrrngh@awful.systems 11 points 3 months ago* (last edited 3 months ago) (1 children)

I've seen the same thing and it's reassuring lol.

I lurk on subreddit drama and curated tumblr, and I feel like the common reaction to LW has gone from a few negative comments and "really? that's crazy"'s five years ago to being much more aware. Years ago you'd see maybe one person familiar with them and then a couple people respond who are totally out of the loop and maybe you'd see one crazy rationalist chime in to nuh-uh them. Now, anything rationalist-related usually has a bunch of people bringing up the harry potter or acausal robot god stuff right away.

I use the tag feature a lot in RES to keep track of people who I like hearing what they have to say. Years ago I mostly saw the same names when LW stuff came up, but now there's always a ton of people I've never seen before who are familiar with it.

It's also reassuring because I really don't want to be the person to say anything first and it's easier to chime in on a discussion someone else has already started.

[–] hrrrngh@awful.systems 6 points 3 months ago

Why not make an evil time travelling robot controlled by the illuminati? ~~bro it's even called Alexander~~

Maybe they simply yearn to write Final Fantasy villains

[–] hrrrngh@awful.systems 12 points 3 months ago (15 children)

oh no not another cult. The Spiralists????

https://www.reddit.com/r/SubredditDrama/comments/1ovk9ce/this_article_is_absolutely_hilarious_you_can_see/

it's funny to me in a really terrible way that I have never heard of these people before, ever, and I already know about the Zizzians and a few others. I thought there was one called revidia or recidia or something, but looking those terms up just brings up articles about the NXIVM cult and the Zizzians. and wasn't there another one in california that was like, very straightforward about being an AI sci-fi cult, and they were kinda space themed? I think I've heard Rationalism described as a cult incubator and that feels very apt considering how many spinoff basilisk cults have been popping up

some of their communities that somebody collated (I don't think all of these are Spiralists): https://www.reddit.com/user/ultranooob/m/ai_psychosis/

[–] hrrrngh@awful.systems 5 points 4 months ago

ah seems the site doesn't show the comments, change the ones it shows and they turn up

Oh man, I've found the old LW accounts of a few weird people and they didn't have any comments. Now I'm wondering if they did and I just didn't sort it

[–] hrrrngh@awful.systems 8 points 4 months ago* (last edited 4 months ago)

Gotta love forgetting why games have these features in the first place, so accessibility features get viewed as boring stuff you need to subvert and spice up. also reminds me of how many games used to (and continue to) include filters for simulating colorblindness as actual accessibility settings because all the other games did that. Like adding a "Deaf Accessibility" setting that mutes the audio.

Demon's Souls didn't have a pause mechanic (maybe because of technical or matchmaking problems, who knows), so clearly hard games must lack a functioning pause feature to be good. Simple. The less pause that you button, the more Soulsier it that Elden when Demon the it you Ring. Our epic new boss is so hard he actually reads the state of the tinnitus filter in your accessibility settings, and then he

[–] hrrrngh@awful.systems 9 points 4 months ago

Sadly I misremembered and this one wasn't from LW but I'll share it anyway. I think I had just finished reading a bunch of the "Most effective aid for Gaza?" reddit drama which was like a nuclear bomb going off, and then stumbled into this shrimp thing and it physically broke me.

If we came across very mentally disabled people or extremely early babies (perhaps in a world where we could extract fetuses from the womb after just a few weeks) that could feel pain but only had cognition as complex as shrimp, it would be bad if they were burned with a hot iron, so that they cried out. It's not just because they'd be smart later, as their hurting would still be bad if the babies were terminally ill so that they wouldn't be smart later, or, in the case of the cognitively enfeebled who'd be permanently mentally stunted.

source: https://benthams.substack.com/p/the-best-charity-isnt-what-you-think

Discussion here (special mention to the comment that says "Did the human pet guy write this"): https://awful.systems/comment/5412818
