Guess I’ll be expensing a nice set of rainbow whiteboard markers for my personal use, and making it up as I go along.
Congratulations, you figured it out! Read Clean Architecture and then ignore the parts you don't like and you'll make it
Here's a little lesson in trickery
This is going down in history
If you wanna be a sneerer number one
You have to chase a lesswronger on the run!
Guess who's #1
Taylor Swift
I mean if you ever toyed around with neural networks or similar ML models, you know it's basically impossible to divine what the hell is going on inside just by looking at the weights, even if you try to plot or visualise them in other ways.
There's a whole branch of ML about explainable or white-box models, because it turns out you need to take extra care and design the system around explainability in the first place to be able to reason about its internals. There's no evidence OpenAI put any effort towards this, instead focusing on cool-looking outputs they can shove into a presser.
In other words, "engineers don't know how it works" can have two meanings. One: they're hitting computers with wrenches hoping for the best, with no rhyme or reason. Two: they don't have a good model of what makes the chatbot produce certain outputs, i.e. just by looking at an output it's not really possible to figure out which specific training data it comes from, or how to stop the model from producing that output on a fundamental level.

The former is demonstrably false and almost a strawman; I don't know who actually believes that. A lot of the people who work at OpenAI are misguided but otherwise incredibly clever programmers and ML researchers, and the sheer fact that this thing hasn't collapsed under its own weight is a great engineering feat, even if the externalities it produces are horrifying. The latter is, as far as I'm aware, largely true, or at least I haven't seen anything that would falsify it. If OpenAI had satisfyingly solved the explainability problem, it'd be a major achievement everyone would be talking about.
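The weights-are-opaque point holds even at toy scale. Here's a minimal sketch: a hand-built two-neuron network that computes XOR (the weight values below are hypothetical, just the kind of numbers training tends to land on). The function is about as simple as it gets, yet nothing about the raw numbers tells you "this is XOR" — now imagine hundreds of billions of them.

```python
import numpy as np

# Hypothetical weights for a tiny 2-2-1 network that happens to compute XOR.
# To a human reader they're just opaque floating-point numbers.
W1 = np.array([[5.6, -5.2],
               [5.5, -5.3]])   # input -> hidden
b1 = np.array([-2.6, 7.9])
W2 = np.array([7.1, 7.2])      # hidden -> output
b2 = -10.9

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(x):
    h = sigmoid(x @ W1 + b1)        # hidden activations
    return float(sigmoid(h @ W2 + b2))  # scalar output in (0, 1)

for x in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    # outputs near 0 when inputs match, near 1 when they differ
    print(x, round(forward(np.array(x)), 2))
```

Staring at `W1` and `b1` won't reveal any of that; you only learn what the network does by running it on inputs, which is exactly the "we don't know how it works" situation scaled down.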
Thank you for your service o7
Is it a single person or a worker co-op? Their copyright is sacred.
Is it a corporation? Lol, lmao, and also yarrr
I saw like a couple of articles and a talk about Bell's theorem 5 years ago, and I immediately clocked this as a vast, vast oversimplification.
They already had the Essential thing in the Nothing 3, but funnily enough, when I was shopping for a phone, it looked like the least obtrusive and annoying "AI feature" across the board, because every single fucking phone is now "AI powered" or whatever the shit.
But if they turn their OS into "AI native" and it actually sucks ass then great, I don't think there's literally any non-shitty tech left with Framework turning fash.
I still refuse to learn what an ezra is, they will have to drag my ass to room 101 to force that into my brain
Happy that we graduated from making military decisions based on what the Oracle of Delphi hallucinated to making military decisions based on what Oracle® DelPhi® Enterprise hallucinated
Sure! Just build a useful browser not based on chromium first and we'll all switch!