nightsky

joined 2 years ago
[–] nightsky@awful.systems 2 points 1 week ago

I used to think that I could avoid using open source projects which embrace the slop machines, but now it keeps getting more and more adoption, including in good and beloved projects... at this point I think I'll just have to accept and ignore it, or else I'd have to play endless whack-a-mole with stuff all over my operating systems :(

[–] nightsky@awful.systems 14 points 1 week ago

If you’re still on the fence about AI, you have to take it seriously now.

But... why?

Always remember that Nobel disease is a thing.

The one I often think about is the person who invented PCR and then later claimed to have had an encounter with a fluorescent talking raccoon of possibly extraterrestrial origin.

[–] nightsky@awful.systems 5 points 1 week ago

Oooh I wish this project a lot of success!

Zed is interesting but the project’s very pro-AI stance keeps me away from it. So a fork without that stuff is great, hope that works out longer-term.

[–] nightsky@awful.systems 13 points 2 weeks ago

ranked according to how much AI is in your code

Truly the greatest idea since "rank developers by lines of code written".

[–] nightsky@awful.systems 30 points 2 weeks ago (7 children)

404 Media: Meta Director of AI Safety Allows AI Agent to Accidentally Delete Her Inbox

Yue also shared screenshots of her WhatsApp chat with the OpenClaw agent, where she implores it to “not do that,” “stop, don’t do anything,” and “STOP OPENCLAW.”

This is very serious computing and we must all take it very seriously.

[–] nightsky@awful.systems 16 points 3 weeks ago (4 children)

Altman:

“People talk about how much energy it takes to train an AI model. But it also takes a lot of energy to train a human. It takes about 20 years of life — and all the food you consume during that time — before you become smart," the OpenAI CEO told The Indian Express this week.

I would have liked to ask in return: how much more food does he need, then? Gosh, someone offer him an energy bar!

[–] nightsky@awful.systems 5 points 4 weeks ago (3 children)

What kind of tasks are on the agenda?

[–] nightsky@awful.systems 12 points 1 month ago (2 children)

Ugh, I'm so fucking tired of this shit.

I can imagine that an LLM can find bugs. Bugs often follow common patterns, and if anything, an LLM is a pattern matcher, so if you let it run on the whole world of open source code out there, I'm sure it'll find some stuff, and some of it might be legit issues.

But static code analysis tools have been finding bugs for decades, too. And now that an AI slop machine does it, it's supposed to bring about dystopian sci-fi alien wars?

Why are people hyped about that?

(Also, this poster makes wrong claims about every exploit being worth millions and such, but the rest of it is so much more ridiculous that it drowns out the wrongness of those claims.)

[–] nightsky@awful.systems 3 points 1 month ago

Yesss, and it's still worth playing today!

[–] nightsky@awful.systems 16 points 1 month ago

Even if you've never heard of him before and know nothing else about him... this short tweet alone tells so much about what kind of person he is.

[–] nightsky@awful.systems 11 points 1 month ago (3 children)

Very impressed with this comment from the creator of the Zig programming language, regarding dealing with AI slop submissions, and generally about LLMs for coding.

I should look into Zig again! Technically, I've always leaned more towards Rust, because I like its more uncompromising approach to safety, while Zig has always seemed to me a bit more middle-of-the-road on that. But I've been disappointed by how widespread LLM usage has become in Rust circles, and I fear that its culture might tip over in favor of slop. (It's not there yet, though, and I hope it won't happen!)

Anyway, I'm ordering the "Introduction to Zig" book...
