swlabr

joined 2 years ago
[–] swlabr@awful.systems 11 points 2 weeks ago

Jack Dorsey seems to like throwing money at it:

Jack Dorsey, the co-founder of Twitter, has endorsed and financially supported the development of Nostr, donating approximately $250,000 worth of Bitcoin to the developers of the project in 2023 and making a $10 million cash donation to a Nostr development collective in 2025.

(source: wiki)

[–] swlabr@awful.systems 5 points 2 weeks ago

Anything’s a cock ring if you’re brave enough

[–] swlabr@awful.systems 6 points 2 weeks ago

I half read, half skimmed the article. Man, what a strange, specific, and dedicated way to build buzz. This is the exact kind of weird conspiracy shit you’d expect nazi weirdos to be up to. If Indiana Jones did actual archaeology but only on the internet, this analysis would be the output. Good read.

[–] swlabr@awful.systems 8 points 2 weeks ago* (last edited 2 weeks ago) (3 children)

This is a joke, right?

E: my enshittified brain thought that this was some kind of AI-enabled smart ring that also told the time. This is kinda fun actually, tho I would never get one

[–] swlabr@awful.systems 9 points 2 weeks ago (1 children)

More flaming dog poop appeared on my doorstep, in the form of this article published in VentureBeat. VB appears to be an online magazine for publishing Silicon Valley propaganda, focused on boosting startups, so it's no surprise that they'd publish this drivel sent in by some guy trying to parlay prompting into writing.

Point:

Apple argues that LRMs must not be able to think; instead, they just perform pattern-matching. The evidence they provided is that LRMs with chain-of-thought (CoT) reasoning are unable to carry on the calculation using a predefined algorithm as the problem grows.

Counterpoint, by the author:

This is a fundamentally flawed argument. If you ask a human who already knows the algorithm for solving the Tower-of-Hanoi problem to solve a Tower-of-Hanoi problem with twenty discs, for instance, he or she would almost certainly fail to do so. By that logic, we must conclude that humans cannot think either.

As someone who already knows the algorithm for solving the ToH problem, I wouldn't "fail" at solving the one with twenty discs so much as I'd know that the minimal solution takes exponentially many moves in the number of discs (2^20 - 1 = 1,048,575 moves for twenty), and refuse to indulge your shit reasoning.
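(If you want to sanity-check that count: solving n discs means solving n-1 discs, moving the largest disc, then solving n-1 discs again, so T(n) = 2*T(n-1) + 1, which closes to 2^n - 1. A throwaway Python sketch, function name mine:)

```python
def min_moves(discs: int) -> int:
    # Tower of Hanoi: solving n discs means solving n-1 discs,
    # moving the largest disc, then solving n-1 discs again:
    # T(n) = 2*T(n-1) + 1 with T(0) = 0, which closes to 2**n - 1.
    return 2 ** discs - 1

print(min_moves(20))  # 1048575
```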

However, this argument only points to the idea that there is no evidence that LRMs cannot think.

Argument proven stupid, so we're back to square one on this, buddy.

This alone certainly does not mean that LRMs can think — just that we cannot be sure they don’t.

Ah yes, some of my favorite GOP turns of phrase, "no unknown unknowns" + "big if true".

[–] swlabr@awful.systems 8 points 3 weeks ago (2 children)

An article in which Business Insider tries to glaze Grookeypedia.

Meanwhile, the Grokipedia version felt much more thorough and organized into sections about its history, academics, facilities, admissions, and impact. This is one of those things where there is lots of solid information about it existing out there on the internet — more than has been added so far to the Wikipedia page by real humans — and an AI can crawl the web to find these sources and turn it into text. (Note: I did not fact-check Grokipedia's entry, and it's totally possible it got all sorts of stuff wrong!)

“I didn’t verify any information in the article but it was longer so it must be better”

What I can see is a version where AI is able to flesh out certain types of articles and improve them with additional information from reliable sources. In my poking around, I found a few other cases like this: entries for small towns, which are often sparse on Wikipedia, are filled out more robustly on Grokipedia.

“I am 100% sure AI can gather information from reliable sources. No I will not verify this in any way. Wikipedia needs to listen to me”

[–] swlabr@awful.systems 26 points 3 weeks ago (1 children)
[–] swlabr@awful.systems 6 points 3 weeks ago

It’s giving Japanese Mennonite reactionary coding

[–] swlabr@awful.systems 4 points 3 weeks ago

Well, as far as I can tell, we still have Nile Rodgers.

[–] swlabr@awful.systems 8 points 3 weeks ago

Punishing my teleoperators because they don't walk with their heads bowed enough.

Feels like something you'd do to disempower eunuchs who have grown a little too cocky. Of course, this just leads to more scheming while you feel secure in having humiliated them. Just all around not something I recommend.

[–] swlabr@awful.systems 10 points 3 weeks ago (1 children)

but never a sex bot

Not speaking for myself (because we were a GameCube household) but based on my internet travels, Cortana (from Halo, also in the subject) was a sexual awakening for a lot of people. So maybe when he says "we" he only means the present cohort of microsofties.

[–] swlabr@awful.systems 7 points 3 weeks ago (4 children)

NB: a few cocktails in. Don't really have a point here. Everything sucks, including this.

Halo: CE was written in the late 90s in the US, so it's pretty clear that it exists as a metaphor for conflict in the Middle East. It's initially humans (really space 'muricans) vs. the Covenant (an ancient, religious empire with many references to Abrahamic religion). The MC is a genetically modified supersoldier. Most shooters are fascistic military propaganda, intentional or no.
