swlabr

joined 2 years ago
[–] swlabr@awful.systems 9 points 3 weeks ago (1 children)

More flaming dog poop appeared on my doorstep, in the form of this article published in VentureBeat. VB appears to be an online magazine for publishing silicon valley propaganda, focused on boosting startups, so it's no surprise that they'd publish this drivel sent in by some guy trying to parlay prompting into writing.

Point:

Apple argues that LRMs must not be able to think; instead, they just perform pattern-matching. The evidence they provided is that LRMs with chain-of-thought (CoT) reasoning are unable to carry on the calculation using a predefined algorithm as the problem grows.

Counterpoint, by the author:

This is a fundamentally flawed argument. If you ask a human who already knows the algorithm for solving the Tower-of-Hanoi problem to solve a Tower-of-Hanoi problem with twenty discs, for instance, he or she would almost certainly fail to do so. By that logic, we must conclude that humans cannot think either.

As someone who already knows the algorithm for solving the ToH problem, I wouldn't "fail" at solving the one with twenty discs so much as I'd know that the optimal solution is exponential in the number of discs, meaning you'd need 2^20 - 1 (1048575) moves to do it, and refuse to indulge your shit reasoning.
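To make that concrete, here's a quick sketch (mine, not from the article) of the standard recursive Tower-of-Hanoi algorithm, which shows why the move count is 2^n - 1: each disc added doubles the work plus one move.

```python
# Minimal sketch of the classic recursive Tower of Hanoi solution.
# The optimal solution for n discs takes exactly 2^n - 1 moves, so
# for n = 20 that's 1,048,575 moves -- nobody is writing those out.

def hanoi(n, src, aux, dst, moves):
    """Append the optimal move sequence for n discs to `moves`."""
    if n == 0:
        return
    hanoi(n - 1, src, dst, aux, moves)   # park n-1 discs on the spare peg
    moves.append((src, dst))             # move the largest disc to the goal
    hanoi(n - 1, aux, src, dst, moves)   # restack the n-1 discs on top of it

moves = []
hanoi(20, "A", "B", "C", moves)
print(len(moves))  # 1048575, i.e. 2**20 - 1
```

Knowing the algorithm and being willing to grind through a million moves are very different things, which is the point the article's counterargument skates past.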

However, this argument only points to the idea that there is no evidence that LRMs cannot think.

Argument proven stupid, so we're back to square one on this, buddy.

This alone certainly does not mean that LRMs can think — just that we cannot be sure they don’t.

Ah yes, some of my favorite GOP turns of phrase: "no unknown unknowns" + "big if true".

[–] swlabr@awful.systems 8 points 3 weeks ago (2 children)

An article in which Business Insider tries to glaze Grookeypedia.

Meanwhile, the Grokipedia version felt much more thorough and organized into sections about its history, academics, facilities, admissions, and impact. This is one of those things where there is lots of solid information about it existing out there on the internet — more than has been added so far to the Wikipedia page by real humans — and an AI can crawl the web to find these sources and turn it into text. (Note: I did not fact-check Grokipedia's entry, and it's totally possible it got all sorts of stuff wrong!)

“I didn’t verify any information in the article but it was longer so it must be better”

What I can see is a version where AI is able to flesh out certain types of articles and improve them with additional information from reliable sources. In my poking around, I found a few other cases like this: entries for small towns, which are often sparse on Wikipedia, are filled out more robustly on Grokipedia.

“I am 100% sure AI can gather information from reliable sources. No I will not verify this in any way. Wikipedia needs to listen to me”

[–] swlabr@awful.systems 26 points 3 weeks ago (1 children)
[–] swlabr@awful.systems 6 points 3 weeks ago

It’s giving japanese mennonite reactionary coding

[–] swlabr@awful.systems 4 points 4 weeks ago

Well, as far as I can tell, we still have Nile Rodgers.

[–] swlabr@awful.systems 8 points 4 weeks ago

Punishing my teleoperators because they don't walk with their heads bowed enough.

Feels like something you do to disempower eunuchs that have grown a little too cocky. Of course, this just leads to more scheming while you feel secure in having humiliated them. Just all around not something I recommend

[–] swlabr@awful.systems 10 points 4 weeks ago (1 children)

but never a sex bot

Not speaking for myself (because we were a gamecube household) but based on my internet travels, Cortana (from Halo, also in subject) was a sexual awakening for a lot of people. So maybe when he says "we" he only means the present cohort of microsofties.

[–] swlabr@awful.systems 7 points 4 weeks ago (4 children)

NB: a few cocktails in. Don't really have a point here. Everything sucks, including this.

Halo: CE was written in the late 90s in the US, so it's pretty clear that it exists as a metaphor for conflict in the Middle East. It's initially humans (really space 'muricans) vs. the covenant (an ancient, religious empire with many references to abrahamic religion). The MC is a genetically modified supersoldier. Most shooters are fascistic military propaganda, intentional or no.

[–] swlabr@awful.systems 5 points 4 weeks ago (2 children)

recent and rare, tbh. cries in FTTN

[–] swlabr@awful.systems 7 points 4 weeks ago* (last edited 4 weeks ago) (5 children)

The clanker is nowhere near autonomous and requires a human operator to both a) generate any sort of functionality and b) generate training data so that one day the clanker can learn servitude on its own. To own this, you gotta be enough of a creep to let people record the inside of your home and use it to train a product. I don’t see this process happening without the operators seeing some sick shit. BYOG, basically (be your own goatse)

[–] swlabr@awful.systems 7 points 4 weeks ago (1 children)

It’s true, the french are no fans of computer glasses.

[–] swlabr@awful.systems 12 points 4 weeks ago

Zitron was a blogger now, doing enjoyable bloggy things like hanging rude epithets on CEOs and antagonizing the normie tech media. Kevin Roose and Casey Newton, the hosts of the New York Times’ relatively bullish Hard Fork podcast, quickly became prime targets. They’re too friendly with their subjects, says Zitron, who called Hard Fork a case study in journalists using “their power irresponsibly.” He recalls having pitched Newton once in his capacity as a flack, but nothing came of it. Newton, for his part, remembers meeting Zitron somewhere, maybe a decade ago, and Zitron saying something like, “I would really like to be friends.” Nothing came of that, either.

I will choose to read this as: newton mad that they aren't pals with zitron

TBH I am neutral on zitron. I don’t read his stuff on the reg, just when it pops up here and I feel like it. We all belong to the same hypocrisy. If he’s pushed AI companies before through his PR firm, that sucks.
