So for the record, i think it would be reasonable to say that if there were ever a machine that processed information the way the human brain (or better, the entire nervous system) does, at comparable speed, then that machine would be thinking in some way. Focusing on byproducts would be a bit like saying that the point of running a weather simulation is the waste heat.
that said, this is something our current techbro overlords aren't even pretending to be doing. ANNs are a gross caricature of actual brains, one that barely resembles them in either the function of its parts or its architecture. if all transmission of neural signals went through ionotropic receptors only, and dendrites weren't a thing, then it would be sorta accurate, but that's not what we have irl.
there's a lot of this with neurons: some of the incoming information gets processed for free just by the physical properties of dendrites, which in silicon would require extra processing per neuron to pull off. ANNs ignore cable theory entirely; their neurons are modeled as pointlike. irl there might be more than one threshold, too.
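to make the "pointlike" complaint concrete, here's a minimal sketch in python (all numbers invented, nothing biophysically calibrated): the first function is the standard ANN point-neuron unit, the second is a crude two-compartment caricature where the dendrite applies its own threshold before anything reaches the soma, the kind of local processing a point model just throws away.

```python
import math

def ann_unit(inputs, weights, bias):
    # standard point-neuron ANN unit: one weighted sum, one nonlinearity
    z = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1.0 / (1.0 + math.exp(-z))

def two_compartment_unit(dendritic_inputs, somatic_inputs,
                         dendritic_threshold=1.0, somatic_threshold=1.5):
    # toy caricature of a dendritic nonlinearity: the dendrite integrates its
    # own inputs and only passes a boosted signal to the soma if it crosses
    # its own, separate threshold (numbers are made up)
    dendritic_sum = sum(dendritic_inputs)
    dendritic_out = 2.0 * dendritic_sum if dendritic_sum > dendritic_threshold else 0.0
    somatic_sum = sum(somatic_inputs) + dendritic_out
    return 1.0 if somatic_sum > somatic_threshold else 0.0

# the point model has one threshold; even this toy compartmental model has two
print(ann_unit([0.4, 0.8], [0.5, 0.5], bias=-0.3))   # ~0.57
print(two_compartment_unit([0.6, 0.7], [0.2]))       # 1.0
```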
another one is that all connections are assumed to happen through synapses, and to behave ionotropic-like: current goes in, current goes out, ~~you can't explain that~~. in biological neurons neither of these is true. on the second point, the first difference is timescale: a metabotropic response is wildly variable, and can start later and last longer, whereas activation of an ion channel influences the potential essentially instantly. another, and more important imo, is that there are a lot of effects not related directly to transmission of the signal in the current pulse, side effects if you will. activation, or lack of activation, of a metabotropic receptor can do some other interesting things: internalize the receptor that was just triggered, phosphorylate some kinase and change how active it is, cause one receptor to stick to another protein, or even alter gene expression in some way, all of which are likely to change how the neuron responds to that stimulus in the future. that is, real neurons have a lot of internal state that we already know is pretty important, and very crude tools to manipulate it are already used as pharmaceuticals, directly (valproate) or indirectly (some third of pharmaceuticals do something with GPCRs).
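here's a deliberately crude sketch of what that internal state does to the input-output mapping: a made-up neuron class (all constants invented) where repeated stimulation internalizes a fraction of its receptors, so the same input gets a weaker response next time, something a stateless ANN weight simply can't express.

```python
class StatefulNeuron:
    """Toy neuron whose response to the same stimulus changes over time.

    Very loosely inspired by metabotropic side effects like receptor
    internalization; every constant here is invented for illustration.
    """

    def __init__(self):
        self.receptor_fraction = 1.0   # fraction of receptors still on the membrane

    def stimulate(self, strength):
        # response scales with how many receptors are currently available
        response = strength * self.receptor_fraction
        # side effect: strong stimulation internalizes some receptors,
        # so the *same* stimulus does less next time
        if strength > 0.5:
            self.receptor_fraction *= 0.8
        return response


n = StatefulNeuron()
print([round(n.stimulate(1.0), 3) for _ in range(5)])
# -> [1.0, 0.8, 0.64, 0.512, 0.41]: identical inputs, shrinking outputs
```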
another thing is signal transmission that does not depend on synapses at all. there are a couple dozen signalling peptides in the brain that are just released and diffuse away, binding to whatever receptors they find on the way while being constantly decomposed. say, for example, there's a neuron that releases big dynorphin. it's a peptide that binds to the kappa-opioid receptor, and it's big, so it diffuses away slowly, and there will be a bubble in which nearby kappa-opioid receptors get activated, until it diffuses below the relevant concentration or gets decomposed. if instead, or additionally, there's some dynorphin A and B, which are half its size, these diffuse away faster, giving a bubble of larger radius and shorter duration (which one gets produced depends on intracellular calcium concentration, among other things). in an ANN this would require adding an extra weight for every pair of peptide-releasing neuron and neuron carrying the receptor for that peptide, with weights dependent on distance. that will probably be a lot, because even if KOR is not a particularly common receptor, there are a lot of different peptides and proteins that behave this way. all of these bind to metabotropic receptors, and those have lots of unusual effects, including formation of new synapses.
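a back-of-the-envelope sketch of what emulating just one such peptide would cost in an ANN-style model: every (releasing neuron, receptor-bearing neuron) pair needs its own distance-dependent coupling on top of the ordinary synaptic weights, and then you repeat that for every other peptide. the decay function and coordinates below are invented; the point is only the extra all-to-all term.

```python
import math

# invented 2D coordinates: a few peptide-releasing neurons and some neurons
# carrying the matching receptor (e.g. KOR for dynorphin)
releasers = [(0.0, 0.0), (5.0, 1.0)]
receptor_neurons = [(1.0, 0.0), (2.0, 2.0), (6.0, 1.0), (10.0, 10.0)]

def bubble_coupling(release_xy, target_xy, spread):
    # toy distance-dependent coupling: a larger `spread` stands in for a
    # smaller, faster-diffusing peptide fragment reaching further out
    d = math.dist(release_xy, target_xy)
    return math.exp(-d / spread)

# this is the extra "weight matrix" an ANN would need just for one peptide:
# one entry per (releaser, receptor neuron) pair, and another full matrix
# for each of the dozens of other signalling peptides
for spread, label in [(1.0, "big dynorphin (slow diffusion, tight bubble)"),
                      (3.0, "dynorphin A/B (faster diffusion, wider bubble)")]:
    couplings = [[round(bubble_coupling(r, t, spread), 3)
                  for t in receptor_neurons] for r in releasers]
    print(label, couplings)
```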
another way that synapse count undersells the complexity of biological brains is that autoreceptors exist, which means you can conservatively double the number of synapses when comparing them to weights in ANNs, with every extra synapse pointing back at the same neuron it came from. this plays a big part in making signals stop, and is also implicated in learning. from what i understand, this is not even something standard feedforward ANN architectures allow. nature has no obligation to be efficient to simulate on blackwell: instead of neat prismatic slabs of neurons that only take signals from the layer before and only feed the layer after, real brains have structures of variable depth and plenty of loops that change their behavior slightly with every pass.
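for comparison, here's what the autoreceptor point looks like in matrix terms, a toy sketch and not anyone's published architecture: a plain feedforward layer gives a unit no way to see its own output, while a recurrent weight matrix with a nonzero diagonal (a crude stand-in for autoreceptors, used here as negative self-feedback that helps shut the unit down) changes its behavior a little with every pass.

```python
# toy recurrent layer where each unit also feeds back onto itself
# (a crude stand-in for autoreceptors: negative self-feedback that helps
# terminate its own signal). all numbers invented.

cross_weights = [[0.0, 0.6],
                 [0.6, 0.0]]    # ordinary unit-to-unit connections
self_weights = [-0.5, -0.5]     # the "autoreceptor" diagonal: each unit inhibits itself

def step(state):
    new_state = []
    for i in range(len(state)):
        total = sum(cross_weights[i][j] * state[j] for j in range(len(state)))
        total += self_weights[i] * state[i]   # the term feedforward ANNs don't have
        new_state.append(max(0.0, total))     # ReLU-ish nonlinearity
    return new_state

state = [1.0, 0.0]
for _ in range(5):
    print([round(x, 3) for x in state])
    state = step(state)   # the loop: activity bounces around and decays
```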
there's more. ANNs use a single imaginary neurotransmitter (i'm not sure how bad a simplification that really is), and they average all impulses into a continuous, time-independent smudge, which is not very biological (a toy illustration of that smudge is at the bottom of this comment). overall i think that between the incoming ai winter fallout, a recession, the end of moore's law, and the severe effects of climate change that will set in before the next ai spring can happen, a machine like this will never get built.
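and to illustrate that time-independent smudge (toy numbers again): two spike trains with completely different timing collapse to the same scalar once you only keep the mean rate, which is roughly the level of description a standard ANN activation lives at.

```python
# two toy spike trains over the same 1-second window: one bursty, one regular.
# the timing clearly differs, but the mean rate -- all a standard ANN
# activation keeps -- is identical.
bursty  = [0.10, 0.11, 0.12, 0.13, 0.90]   # spike times in seconds
regular = [0.10, 0.30, 0.50, 0.70, 0.90]

def mean_rate(spike_times, window=1.0):
    return len(spike_times) / window

print(mean_rate(bursty), mean_rate(regular))   # -> 5.0 5.0, the "smudge"
```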