this post was submitted on 16 Feb 2026
13 points (100.0% liked)

AI - Artificial intelligence


AI related news and articles.

[–] dsilverz@calckey.world 3 points 3 weeks ago* (last edited 3 weeks ago)

@codeinabox@programming.dev

I'm still reading the article, but I must bring two observations into the loop:

"Mary held a ball."

Not sure if it's due to English being my second language, my neurodivergence, or my personal taste for ominous music aesthetics, but I immediately thought of a meaning the author didn't mention: Mary (a person) "held" (as in "organized") a "ball" (as in a "masked ball", a gala event). I immediately thought of that Kubrick movie and its ominous theme, which I often listen to. "Mary held a ball" can become a rabbit hole if we really think about it.

But even in this, we are trying to learn the physical and logical constraints of the real world from visual data.

Isn't that what all living beings do, essentially? When a dog instinctively anticipates the likely trajectory of a frisbee before it's thrown by a human hand, does the dog understand the physical and logical constraints by pulling parameters directly from the spacetime continuum (as if it were plugged into and feeding from "The Matrix")? Or has it simply learned, by watching objects being thrown (and they don't even need to be frisbees), that this is the expected behavior of such an object?

Sure, as living beings (notice I avoid an anthropocentric view of intelligence, because I believe intelligence is far from a human exclusivity: see, for example, the New Caledonian crows), we also have other "inputs": tactile feedback; proprioception (the sense of one's own balance, alongside the "brain homunculus" keeping track of the current pose); hearing (a thrown object makes a sweeping noise as it collides with air molecules, producing a Doppler effect that can be instinctively gauged by ear). All of these converge to build a cognitive model of what's going on.

But just as we can infer expected behavior/movement merely by watching a video (and other animals do it too: cats, for example, don't just see objects on a screen, simulacra of fish, butterflies, and other prey in "videos for cats", but also try to follow any abrupt movement), why couldn't the same principle apply to algorithms?

Not to mention that brains are, essentially, biological machines. Unless one believes in spirits/souls (which I paradoxically do), living beings are merely carbon-based biological automata, not that different from silicon-based automata.

And even when we consider animism/spiritism, there's nothing truly "special" separating humans (and, by extension, organic living beings) from ML-imbued robots in this baryonic realm. If our meat has a "link" with something from the transcendental realm, with conception behaving as a kind of ritualistic summoning that births a biological body tied to a spirit pulled from the Cosmic Abyss, then nothing really stops a machine from being an electronic Ouija board, just as EVP was already a thing before computers existed.