V0ldek

joined 2 years ago
[–] V0ldek@awful.systems 9 points 5 days ago (1 children)

> can we cancel Mozilla yet

Sure! Just build a useful browser not based on Chromium first and we'll all switch!

[–] V0ldek@awful.systems 9 points 5 days ago

> Guess I’ll be expensing a nice set of rainbow whiteboard markers for my personal use, and making it up as I go along.

Congratulations, you figured it out! Read Clean Architecture and then ignore the parts you don't like and you'll make it

[–] V0ldek@awful.systems 9 points 5 days ago

Here's a little lesson in trickery
This is going down in history
If you wanna be a sneerer number one
You have to chase a lesswronger on the run!

[–] V0ldek@awful.systems 8 points 5 days ago (1 children)

> Guess who's #1

Taylor Swift

[–] V0ldek@awful.systems 11 points 5 days ago

I mean, if you've ever toyed around with neural networks or similar ML models, you know it's basically impossible to divine what the hell is going on inside just by looking at the weights, even if you try to plot them or visualise them in other ways.

There's a whole branch of ML about explainable or white-box models, because it turns out you need to put in extra care and design the system around explainability in the first place to be able to reason about its internals. There's no evidence OpenAI put any effort towards this; instead they focus on cool-looking outputs they can shove into a presser.
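To make the contrast concrete, here's a minimal toy sketch (my own illustration with scikit-learn, nothing to do with how OpenAI's models actually work) of the difference between staring at a black-box model's weights and asking a white-box model to explain itself:

```python
from sklearn.datasets import load_iris
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_iris(return_X_y=True)
feature_names = load_iris().feature_names

# Black-box: the "knowledge" is a pile of float matrices. Printing or plotting
# them tells you nothing about *why* a given flower gets classified as it does.
mlp = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0).fit(X, y)
print(mlp.coefs_[0])  # a 4x8 blob of numbers, good luck divining anything from it

# White-box: a model designed to be inspectable can dump its decision rules
# as plain text you can actually reason about.
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
print(export_text(tree, feature_names=feature_names))
```

The point isn't that decision trees are a replacement for transformers; it's that explainability has to be a design goal from the start, you don't get it for free by poking at the weights afterwards.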

In other words, "engineers don't know how it works" can have two meanings. One is that they're hitting computers with wrenches and hoping for the best, with no rhyme or reason. The other is that they don't have a good model of what makes the chatbot produce certain outputs, i.e. just by looking at an output it's not really possible to figure out what specific training data it comes from, or how to stop it from being produced on a fundamental level.

The former is demonstrably false and almost a strawman; I don't know who actually believes that. A lot of the people who work at OpenAI are misguided but otherwise incredibly clever programmers and ML researchers, and the sheer fact that this thing hasn't collapsed under its own weight is a great engineering feat, even if the externalities it produces are horrifying. The latter is, as far as I'm aware, largely true, or at least I haven't seen any hints that would falsify it. If OpenAI had satisfyingly solved the explainability problem it'd be a major achievement everyone would be talking about.

[–] V0ldek@awful.systems 5 points 1 week ago (1 children)

Thank you for your service o7

[–] V0ldek@awful.systems 17 points 1 week ago (3 children)
[–] V0ldek@awful.systems 15 points 4 weeks ago (1 children)

Is it a single person or a worker co-op? Their copyright is sacred.

Is it a corporation? Lol, lmao, and also yarrr

[–] V0ldek@awful.systems 3 points 1 month ago

I saw like a couple of articles and a talk about Bell's theorem 5 years ago, and I immediately clocked this as a vast, vast oversimplification.

[–] V0ldek@awful.systems 3 points 1 month ago (2 children)

They already had the Essential thing in the Nothing 3, but funnily enough, when I was shopping for a phone, it looked like the least obtrusive and annoying "AI feature" across the board, because every single fucking phone is now "AI powered" or whatever the shit.

But if they turn their OS "AI native" and it actually sucks ass, then great, I don't think there's literally any non-shitty tech left, what with Framework turning fash.

[–] V0ldek@awful.systems 4 points 1 month ago

I still refuse to learn what an ezra is; they will have to drag my ass to Room 101 to force that into my brain.

[–] V0ldek@awful.systems 7 points 1 month ago (1 children)

Happy that we graduated from making military decisions based on what the Oracle of Delphi hallucinated to making military decisions based on what Oracle® DelPhi® Enterprise hallucinated

 

This is a nice post, but it has such an annoying sentence right in the intro:

> At the time I saw the press coverage, I didn’t bother to click on the actual preprint and read the work. The results seemed unsurprising: when researchers were given access to AI tools, they became more productive. That sounds reasonable and expected.

What? What about it sounds reasonable? What about it sounds expected given all we know about AI??

I see this all the time. Why do otherwise skeptical voices always feel the need to put in a weakening statement like this? "For sure, there are some legitimate uses of AI" or "Of course, I'm not claiming AI is useless". Like, why are you not claiming that? You probably should be claiming that. All of this garbage is useless until proven otherwise! "AI does not increase productivity" is the null hypothesis! It's the only correct skeptical position! Why do you feel the need to extend the benefit of the doubt here? Seriously, I cannot explain this in any way.

1
submitted 10 months ago* (last edited 10 months ago) by V0ldek@awful.systems to c/freeasm@awful.systems
 

I'm looking for recommendations of good blogs for programmers. Younger folks have asked me a few times these past few months what I would recommend, and I realised I don't really have a good list that I could just share with them.

What I'm interested in are blogs that don't focus on any particular tech, but are more like Coding Horror, just general-purpose stuff for devs. They don't have to be for beginners. It'd also be interesting to see which of those are most popular in our little circle, so please upvote comments that contain recommendations you agree with.

I'm implicitly assuming stuff shared by folks here is going to be sensible, well-written blogs, and not some AI shill nonsense or other tech grift.

Note that I'm specifically interested in the text medium, podcasts or YT not so much.

 

Turns out software engineering cannot be easily solved with a ~~small shell script~~ large language model.

The author of the article appears to be a genuine ML engineer, although some of his takes aged like fine milk. He seems to be shilling Google a bit too much for my taste. However, the sneer content is good nonetheless.

First off, the "Devin solves a task on Upwork" demo is 1. cherry picked, 2. not even correctly solved.

Second, and this is the absolutely fantastic golden nugget here, to show off its "bug solving capability" it creates its own nonsensical bugs and then reverses them. It's the ideal corporate worker, able to appear busy by creating useless work for itself out of thin air.

It also takes over 6 hours to perform this task, which would be reasonable for an experienced software engineer, but an experienced software engineer's workflow doesn't include burning a small nuclear explosion's worth of energy while coding and then not actually solving the task. We don't drink that much coffee.

The next demo is a bait-and-switch again. In this case I think the author of the article fails to sneer quite as hard as the material deserves: the task the AI solves is writing test cases for finding the Least Common Multiple modulo a number. Come on, that task is fucking trivial, all those tests are one-liners! It's famously much easier to verify modular arithmetic than it is to actually compute it. And it takes the AI an hour to do it!
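For a sense of how trivial that is, here's roughly what such one-liner tests might look like (the `lcm_mod` helper and the specific cases are my own illustration, not the actual demo code):

```python
import math  # math.lcm needs Python 3.9+

def lcm_mod(a: int, b: int, m: int) -> int:
    # The thing under test: least common multiple, reduced modulo m.
    return math.lcm(a, b) % m

# Each test is a one-liner: checking modular arithmetic against known values
# is far easier than computing it in the general case.
def test_basic():      assert lcm_mod(4, 6, 7) == 12 % 7   # lcm(4, 6) = 12
def test_coprime():    assert lcm_mod(3, 5, 100) == 15     # coprime -> lcm is the product
def test_same_value(): assert lcm_mod(9, 9, 4) == 9 % 4    # lcm(n, n) = n
def test_multiple():   assert lcm_mod(2, 8, 1000) == 8     # lcm(2, 8) = 8
```

An hour of GPU time for a handful of asserts like these is not the flex they think it is.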

It is a bit refreshing though that it didn't turn out DEVIN is just Dinesh, Eesha, Vikram, Ishani, and Niranjan working for $2/h from a slum in India.

 

I'm not sure if this fully fits the TechTakes mission statement, but "CEO thinks it's a-okay to abuse certificate trust to sell data to advertisers" is, in my opinion, a great snapshot of the brain worms living inside those people's heads.

In short, Facebook wiretapped Snapchat by routing users' traffic through their VPN company, Onavo. Installing it on your machine would add Onavo's root certificate to your trusted store. Onavo would then intercept all communication to Snapchat and pretend the connection was TLS-secure by forging a Snapchat certificate and signing it with its own CA.
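The whole trick hinges on the client trusting the forged certificate. A minimal sketch of what you'd look at to spot it (the hostname is just for illustration): connect normally and check who actually issued the certificate you were handed; behind an Onavo-style intercepting proxy, the issuer is the proxy's own CA rather than a public one.

```python
import socket
import ssl

HOST = "app.snapchat.com"  # hypothetical endpoint, purely for illustration

# Open a TLS connection and inspect the certificate the server presented.
# Behind an intercepting proxy, the issuer would be the proxy's own root CA
# (the one it slipped into your trust store), not a public certificate authority.
ctx = ssl.create_default_context()
with socket.create_connection((HOST, 443)) as sock:
    with ctx.wrap_socket(sock, server_hostname=HOST) as tls:
        cert = tls.getpeercert()
        issuer = dict(pair[0] for pair in cert["issuer"])
        subject = dict(pair[0] for pair in cert["subject"])
        print("subject:", subject.get("commonName"))
        print("issued by:", issuer.get("organizationName"), "/", issuer.get("commonName"))
```

Of course, the scam is precisely that once Onavo's CA is in your trust store, the connection still validates; the only giveaway left is the issuer name.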

"Whenever someone asks a question about Snapchat, the answer is usually that because their traffic is encrypted, we have no analytics about them," Facebook CEO Mark Zuckerberg wrote in a 2016 email to Javier Olivan.

"Given how quickly they're growing, it seems important to figure out a new way to get reliable analytics about them," Zuckerberg continued. "Perhaps we need to do panels or write custom software. You should figure out how to do this."

Zuckerberg ordered his engineers to "think outside the box" to break TLS encryption in a way that would allow them to quietly sell data to advertisers.

I'm sure the brave programmers who came up with and implemented this nonsense were very proud of their service. Jesus fucking cinnamon crunch Christ.
