swlabr

joined 2 years ago
[–] swlabr@awful.systems 10 points 1 month ago (4 children)

In the current chapter of “I go looking on linkedin for sneer-bait and not jobs, oh hey literally the first thing I see is a pile of shit”

[text in image] Can ChatGPT pick every 3rd letter in "umbrella"?

You'd expect "b" and "l". Easy, right?

Nope. It will get it wrong.

Why? Because it doesn't see letters the way we do.

We see:

u-m-b-r-e-l-l-a

ChatGPT sees something like:

"umb" | "rell" | "a"

These are tokens — chunks of text that aren't always full words or letters.

So when you ask for "every 3rd letter," it has to decode the prompt, map it to tokens, simulate how you might count, and then guess what you really meant.

Spoiler: if it's not given a chance to decode tokens into individual letters as a separate step, it will stumble.

Why does this matter?

Because the better we understand how LLMs think, the better results we'll get.
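The gap the post is describing can be sketched in a few lines of Python: "every 3rd letter" is a trivial character-level operation, but the model sees token chunks instead. The token split below is the illustrative one from the post, not actual tokenizer output.

```python
# Character-level view: "every 3rd letter" is a simple slice.
word = "umbrella"
every_third = word[2::3]  # characters at positions 3, 6, ...
print(every_third)        # "bl"

# Token-level view (illustrative split from the post, not a real tokenizer):
# the letters "b" and "l" are buried inside chunks, so counting letters
# requires decomposing the tokens first.
tokens = ["umb", "rell", "a"]
print("".join(tokens) == word)  # True: same text, different units
```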

[–] swlabr@awful.systems 24 points 1 month ago (1 children)

MFs are boiling the oceans to reinvent cold reading

[–] swlabr@awful.systems 7 points 1 month ago* (last edited 1 month ago)

A real modest ~~brunch~~ bunch

[–] swlabr@awful.systems 15 points 1 month ago (4 children)

Just thinking about how I watched “Soylent Green” in high school and thought the idea of a future where technology just doesn’t work anymore was impossible. Then LLMs come and the first thing people want to do with them is to turn working code into garbage, and then the immediate next thing is to kill living knowledge by normalising people relying on LLMs for operational knowledge. Soon, the oceans will boil, agricultural industries will collapse and we’ll be forced to eat recycled human. How the fuck did they get it so right?

[–] swlabr@awful.systems 5 points 1 month ago* (last edited 1 month ago)

If I had my druthers I’d make my own hosting and call it “UnaGit”, and pretend it’s unagi/eel themed, when it is actually teddy K themed

[–] swlabr@awful.systems 9 points 1 month ago

NASB: A question I asked myself in the shower: “Is there some kind of evolving, sourced document containing all the reasons why LLMs should be turned off?” Then I remembered wikis exist. Wikipedia doesn’t have a dedicated “criticisms of LLMs” page afaict, or even a “Criticisms” section on the LLM page. RationalWiki has a page on LLMs that is almost exclusively criticisms, which is great, but the tone is a few notches too casual and sneery for universal use.

[–] swlabr@awful.systems 8 points 1 month ago

Someone should write a script that estimates how much time has been spent re-fondling LLMPRs on Github.

[–] swlabr@awful.systems 24 points 1 month ago (2 children)

you all joke, but my mind is so expanded by stimulants that I, and only I, can see how this dogshit code will one day purchase all the car manufacturers and build murderbots

[–] swlabr@awful.systems 3 points 1 month ago (1 children)

The title + thumbnail makes it look like you went to the olympic village while it was at its maximum output for bangin’. Which is funny. Will read later

[–] swlabr@awful.systems 13 points 1 month ago

“This thing we don’t understand yet is probably very simple and easy to replicate and I say this as someone who does not understand the thing yet because once again, nobody does!” - All “futurist” “genius” “thought leaders”

[–] swlabr@awful.systems 26 points 1 month ago

An AGI could microwave a burrito so hot that not even the AGI, in its omnipotence, could eat it

[–] swlabr@awful.systems 10 points 1 month ago (9 children)

Saw a six day old post on linkedin that I’ll spare you all the exact text of. Basically it goes like this:

“Claude’s base system prompt got leaked! If you’re a prompt fondler, you should read it and get better at prompt fondling!”

The prompt clocks in at just over 16k words (as counted by the first tool that popped up when I searched “word count url”). Imagine reading 16k words of verbose guidelines for a machine to make your autoplag slightly more claude shaped than, idk, chatgpt shaped.
