fullsquare

joined 3 months ago
[–] fullsquare@awful.systems 4 points 3 weeks ago

oh no what will we do, the open source leaded gasoline was released. the genie is out of the bottle, even if you ban it you'll still have people using it locally

[–] fullsquare@awful.systems 4 points 3 weeks ago* (last edited 3 weeks ago)

You don’t read books for that though. Does this person think books are just sequences of facts you’re supposed to memorise?

I think I have something shaped like a counterexample. Large literature reviews and compilations of data tables can work like this, and grepping them will give you a feel for what is possible, plus a single practical example of each; but even then you're supposed to read them in order to learn not only what is possible, but also what is not (or at least what wasn't tested), and what fails, how, and why. Actually reading through also gives you the bigger picture and allows for drawing your own conclusions, ofc, like you notice

Don’t you ever read something and go “oh, I never even thought about this”, “I didn’t know this was a problem”, “I wouldn’t have thought of this myself”. If not then what the fuck are you reading??

even then, feeding them to a chatbot is valleybrain nonsense, because grep will be more than enough and much faster, and you only naturally know what's inside after reading it
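a minimal sketch of that grep workflow, with a throwaway toy corpus standing in for plain-text dumps of the reviews (the filenames and search terms are made up for illustration):

```shell
# toy corpus standing in for plain-text dumps of literature reviews
mkdir -p reviews
printf 'Pd/C, H2: 95%% yield\nNi catalyst: no conversion\n' > reviews/review1.txt
printf 'Raney Ni worked poorly here too\n' > reviews/review2.txt

# which files mention the thing at all (-r recursive, -i case-insensitive, -l names only)
grep -ril 'pd/c' reviews/

# every hit with filename and line number -- the failures matter as much as the successes
grep -rin 'ni ' reviews/
```

the point being that the second query surfaces the "what didn't work" lines too, which is exactly the part you'd only otherwise get by reading the whole thing.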

even then, just having the right snippet is not enough, because presumably the result would only be apparent after testing irl, or perhaps after building a model or simulation or what have you. and even then, getting to the point where you need to do any of that requires a degree of curiosity, and an ability to put together information from different sources, that would exclude promptfondlers. it's like these people try on purpose to think as little as possible

[–] fullsquare@awful.systems 5 points 3 weeks ago

solzhenitsyn is pretty sus too, what with him being an orthodox fundamentalist, a fan of the tsar, a panslavic antisemite, a 2000s putin fan (he died three days into the russian invasion of georgia), a proponent of enlarging russia to include "sufficiently russified" parts of belarus, ukraine and kazakhstan, and therefore an opponent of ukrainian independence; also:

Solzhenitsyn made a speaking tour after Francisco Franco's death, and "told liberals not to push too hard for changes because Spain had more freedoms now than the Soviet Union had ever known."

In 1983 he met Margaret Thatcher and told her "the German army could have liberated the Soviet Union from Communism but Hitler was stupid and did not use this weapon"

Regarding Ukraine he wrote “All the talk of a separate Ukrainian people existing since something like the ninth century and possessing its own non-Russian language is recently invented falsehood” and "we all sprang from precious Kiev".

Solzhenitsyn was a supporter of the Vietnam War and referred to the Paris Peace Accords as 'shortsighted' and a 'hasty capitulation'.

Solzhenitsyn was critical of NATO's eastward expansion towards Russia's borders and described the NATO bombing of Yugoslavia as "cruel" [...] Solzhenitsyn accused NATO of trying to bring Russia under its control; he stated that this was visible because of its "ideological support for the 'colour revolutions' and the paradoxical forcing of North Atlantic interests on Central Asia"

(all from wikipedia entry on him)

it's little wonder that the american altright embraced his writings

[–] fullsquare@awful.systems 24 points 3 weeks ago

chatbots really are leaded gasoline for zoomers

[–] fullsquare@awful.systems 3 points 3 weeks ago

it is some global anomaly that a couple of the biggest companies are essentially running on ad revenue (especially facebook and google)

wouldn't it make more sense if that title went to a company that is, idk, in the food or construction or energy or mining business

[–] fullsquare@awful.systems 6 points 3 weeks ago* (last edited 3 weeks ago) (1 children)

though he’s correct in saying the ethics of language models aren’t a self-solving issue, even though he expresses it in critihype-laden terms.

the subtext is always that he also claims to know how to solve it, so throw money at cfar pleaseeee or the basilisk will torture your vending machine business for seven quintillion years

[–] fullsquare@awful.systems 1 points 4 weeks ago

i think you've got it backwards. the very same people (and their money) who were deep into crypto moved on to the next buzzword, which turns out to be AI now. this includes altman and zucc for starters, but there's more

[–] fullsquare@awful.systems 19 points 4 weeks ago

is the evil funding man going to eat the gimp pepper

[–] fullsquare@awful.systems 9 points 4 weeks ago* (last edited 4 weeks ago) (1 children)

it's maybe because chatbots incorporate, accidentally or not, elements of what makes gambling addiction work on humans https://pivot-to-ai.com/2025/06/05/generative-ai-runs-on-gambling-addiction-just-one-more-prompt-bro/

the gist:

There’s a book on this — Hooked: How to Build Habit-Forming Products by Nir Eyal, from 2014. This is the how-to on getting people addicted to your mobile app. [Amazon UK, Amazon US]

Here’s Eyal’s “Hook Model”:

First, the trigger is what gets you in. e.g., you see a chatbot prompt and it suggests you type in a question.

Second is the action — e.g., you do ask the bot a question.

Third is the reward — and it's got to be a variable reward. Sometimes the chatbot comes up with a mediocre answer — but sometimes you love the answer! Eyal says: "Feedback loops are all around us, but predictable ones don't create desire." Intermittent rewards are the key tool to create an addiction.

Fourth is the investment — the user puts time, effort, or money into the process to get a better result next time. Skin in the game gives the user a sunk cost they've put in.

Then the user loops back to the beginning. The user will be more likely to follow an external trigger — or they'll come to your site themselves looking for the dopamine rush from that variable reward.

Eyal said he wrote Hooked to promote healthy habits, not addiction — but from the outside, you’ll be hard pressed to tell the difference. Because the model is, literally, how to design a poker machine. Keep the lab rats pulling the lever.

chatbot users are also attracted to their terminally sycophantic and agreeable responses; some users form parasocial relationships with motherfucking spicy autocomplete; and chatbots were marketed to management types as a kind of futuristic status symbol, with the pitch that if you don't use them you'll fall behind and then you'll all see. people get a mix of gambling addiction / fomo / parasocial relationship / being dupes of a multibillion-dollar advertising scheme, and that's why they get so unserious about their chatbot use

and also, separately, the core of openai and anthropic and probably some other companies is made of cultists who want to build a machine god, but that's an entirely different rabbit hole

like with any other bubble, the money for it won't last forever. most recently disney sued midjourney for copyright infringement, and if they set a legal precedent, they might wipe out all of these drivel-making machines for good

[–] fullsquare@awful.systems 9 points 4 weeks ago

iirc L-amino acids and D-sugars, that is, the ones observed in nature, are very slightly more stable than their mirror images because of the weak interaction

probably it's just down to a specific piece of quartz or soot that got lucky, and chiral amplification takes you from there

also it's not physics, or more precisely it's a very physicsy subbranch of chemistry, and it's done by chemists because physicists suck at doing chemistry for some reason (i've seen it firsthand)
