Ah but have you tried burning a few trillion dollars in front of the painting? That might make a difference!
Showerthoughts
A "Showerthought" is a simple term used to describe the thoughts that pop into your head while you're doing everyday things like taking a shower, driving, or just daydreaming. The most popular seem to be lighthearted clever little truths, hidden in daily life.
Here are some examples to inspire your own showerthoughts:
- Both “200” and “160” are 2 minutes in microwave math
- When you’re a kid, you don’t realize you’re also watching your mom and dad grow up.
- More dreams have been destroyed by alarm clocks than anything else
Rules
- All posts must be showerthoughts
- The entire showerthought must be in the title
- No politics
- If your topic is in a grey area, please phrase it to emphasize the fascinating aspects, not the dramatic aspects. You can do this by avoiding overly politicized terms such as "capitalism" and "communism". If you must make comparisons, you can say something is different without saying something is better/worse.
- A good place for politics is c/politicaldiscussion
- Posts must be original/unique
- Adhere to Lemmy's Code of Conduct and the TOS
If you made it this far, showerthoughts is accepting new mods. This community is generally tame so it's not a lot of work, but having a few more mods would help reports get addressed a little sooner.
What's it like to be a mod? Reports just show up as messages in your Lemmy inbox, and if a different mod has already addressed a report, the message goes away and you never have to worry about it.
I had a poster in ‘86 that I wanted to come alive.
As long as we can't even define sapience in biological life, where it resides and how it works, it's pointless to try to apply those terms to AI. We don't know how natural intelligence works, so using what little we know about it to define something completely different is counterintuitive.
Pointless and maybe a little reckless.
We don't know what causes gravity, or how it works, either. But you can measure it, define it, and even create a law with a very precise approximation of what would happen when gravity is involved.
I don't think LLMs will create intelligence, but I don't think we need to solve everything about human intelligence before having machine intelligence.
Though in the case of consciousness - the fact of there being something it's like to be - not only don't we know what causes it or how it works, but we have no way of measuring it either. There's zero evidence for it in the entire universe outside of our own subjective experience of it.
Painting?
"LLMs are a blurry JPEG of the web" - unknown (I've heard it as an unattributed quote)
The example I gave my wife was "expecting general AI from the current LLM models is like teaching a dog to roll over and expecting that, with a year of intense training, the dog will graduate from law school"
Except ... being alive is well defined. But consciousness is not. And we do not even know where it comes from.
Viruses and prions: "Allow us to introduce ourselves"
I meant alive in the context of the post. Everyone knows what painting becoming alive means.
Not fully, but we know it requires a minimum amount of activity in the brains of vertebrates, and it's at least observable in some large invertebrates.
I'm vastly oversimplifying and I'm not an expert, but essentially all consciousness is, is an automatic state of processing all present stimulation in a creature's environment, one that allows it to react to new information in a probably-survivable way and to react to it again in the future despite minor changes in the environment. That's why you can scare an animal away from food while a threat is present, but you can't scare away an insect.
It appears that the frequency of activity is related to the amount of information processed and held in memory. At a certain threshold of activity, most unfiltered stimulus is retained to form what we would call consciousness, in the form of maintaining sensory awareness and, at least in humans, thought awareness. Below that threshold, both short-term and long-term memory are impaired, and no response to stimulation occurs. Basic autonomic function is maintained, but severely impacted.
I can define "LLM", "a painting", and "alive". Those definitions don't require assumptions or gut feelings. We could easily come up with a set of questions and an answer key that will tell you if a particular thing is an LLM or a painting and whether or not it's alive.
I'm not aware of any such definition of consciousness, nor am I aware of any universal test for it. Without that definition, it's like Ebert claiming that "video games can never be art".
Absolutely everything requires assumptions. Even our most objective, "laws of the universe" type observations rely on sets of axioms or first principles that must simply be accepted as true though unprovable if we are going to get anywhere at all, even in math and the hard sciences, let alone philosophy or the social sciences.
Remember when passing the Turing Test was like a big deal? And then it happened. And now we have things like this:
Stanford researchers reported that ChatGPT passes the test; they found that ChatGPT-4 "passes a rigorous Turing test, diverging from average human behavior chiefly to be more cooperative"
The best way to differentiate computers from people is that we haven't taught AI to be an asshole all the time. Maybe it's a good thing they aren't like us.
Alternative way to phrase it: we don't train humans to be ego-satiating brown-nosers; we train them to be (often poor) judges of character. AI would be just as nice to David Duke as it is to you. Also, "they" anthropomorphizes LLM AI far more than it deserves; it's not even a single identity, let alone a set of multiple identities. It's a bundle of hallucinations, loosely tied together by suggestions and patterns taken from stolen data.
Sometimes I feel like LLM technology and its relationship with humans is a symptom of how poorly we treat each other.
The best way to differentiate computers from people is that we haven't taught AI to be an asshole all the time
Elon is really trying with Grok, though.
I don't expect it. I'm going to talk to the AI and nothing else until my psychosis hallucinates it.
People used to talk about the idea of uploading your consciousness to a computer to achieve immortality. But nowadays I don't think anyone would trust it. You could tell me my consciousness was uploaded and show me a version of me that was indistinguishable from myself in every way, but I still wouldn't believe it experiences or feels anything as I do, even though it claims to do so. Especially if it's based on an LLM, since they are superficial imitations by design.
Also even if it does experience and feel and has awareness and all that jazz, why do I want that? The I that is me is still going to face The Reaper, which is the only real reason to want immortality.
The Eliza effect
It's achievable if enough alcohol is added to the subject looking at said painting. And with some exotic chemistry, they may even start to taste or hear the colors.
Or boredom and starvation
Good showering!
The first life did not possess a sentient consciousness. Yet here you are reading this now. No one even tried to direct that. Quite the opposite, everything has been trying to kill you from the very start.
Nah trust me we just need a better, more realistic looking ink. $500 billion to ink development oughta do it.
Fair and flawless comparison. I've got nothing to add.
They have invented a thing that needs someone to want something before it will do it. We have yet to see an artificial EGO. An AEGO.