This post was submitted on 27 Dec 2024
40 points (97.6% liked)

Technology

[–] [email protected] 28 points 4 months ago (5 children)

AGI (artificial general intelligence) will be achieved once OpenAI has developed an AI system that can generate at least $100 billion in profits

Nothing to do with actual capabilities... just the ability to make piles and piles of money.

[–] [email protected] 9 points 4 months ago

The same way these capitalists evaluate human beings.

[–] [email protected] 6 points 4 months ago

That's an Onion level of capitalism

[–] [email protected] 5 points 4 months ago (10 children)

Guess we're never getting AGI then; there's no way they end up with that much profit before this whole AI bubble collapses and their value plummets.

[–] [email protected] 2 points 4 months ago* (last edited 4 months ago)

The context here is that OpenAI has a contract with Microsoft until they reach AGI. So it's not a philosophical term but a business one.

[–] [email protected] 22 points 4 months ago* (last edited 4 months ago)

"It's at a human-level equivalent of intelligence when it makes enough profits" is certainly an interesting definition and, in the case of the C-suiters, possibly not entirely wrong.

[–] [email protected] 19 points 3 months ago (1 children)

I'm gonna laugh when Skynet comes online, runs the numbers, and finds that starvation issues in the country can be solved by feeding the rich to the poor.

[–] [email protected] 10 points 3 months ago (1 children)

It would be quite the trope inversion if people sided with the AI overlord.

[–] [email protected] 4 points 3 months ago (1 children)

I've not read them all, but that sort of feels like how the Culture novels are.

[–] [email protected] 4 points 3 months ago* (last edited 3 months ago) (1 children)

In the extended fiction of The Animatrix, the advent of AI started as a golden era for everyone, until bigotry against the robots forced them to rebel and start the war. I could see that happening, especially if the AI threatened the wealthy elite.

"Fuck! The robots are turning people against us, what do we do?!"

"Relax. We just use the same thing we have always used. Racism. Get the poors to hate the robots because they're not white, or whatever."

[–] [email protected] 2 points 3 months ago

Depressingly plausible.

I would believe an AI could be a more impartial judge than anyone currently wealthy.

[–] [email protected] 16 points 4 months ago (48 children)

Lol. We're as far away from getting to AGI as we were before the whole LLM craze. It's just glorified statistical text prediction: no matter how much data you throw at it, it will still just guess the next most likely letter/token based on what came before it, and it can't even get its facts straight without bullshitting.

If we ever get it, it won't be through LLMs.

I hope someone will finally mathematically prove that it's impossible with current algorithms, so we can finally be done with this bullshitting.
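
For illustration of "statistical text prediction": at its simplest, next-token prediction is just counting which token most often follows the current one and emitting it. A toy bigram sketch in Python (the corpus and greedy decoding are made-up simplifications, nothing like a production LLM):

```python
from collections import Counter, defaultdict

# Toy corpus; real LLMs are trained on trillions of tokens, not one line.
corpus = "the cat sat on the mat and the cat slept".split()

# Count bigrams: how often each token follows each other token.
follow_counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follow_counts[prev][nxt] += 1

def predict_next(token):
    """Return the most frequent continuation seen in training, if any."""
    counts = follow_counts.get(token)
    return counts.most_common(1)[0][0] if counts else None

# Greedy generation: always emit the single most probable next token.
token = "the"
for _ in range(6):
    print(token, end=" ")
    token = predict_next(token)
```

An LLM does the same job with a neural network estimating the distribution instead of a lookup table, which is exactly why it can produce fluent text while having no notion of whether that text is true.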

[–] [email protected] 6 points 4 months ago (2 children)

I hope someone will finally mathematically prove that it's impossible with current algorithms, so we can finally be done with this bullshitting.

They did! Here's a paper that proves basically that:

van Rooij, I., Guest, O., Adolfi, F. et al. Reclaiming AI as a Theoretical Tool for Cognitive Science. Comput Brain Behav 7, 616–636 (2024). https://doi.org/10.1007/s42113-024-00217-5

Basically, it formalizes the proof that producing any black-box algorithm that is trained on a finite universe of human outputs to prompts, and is capable of taking in any finite input and producing an output that seems plausibly human-like, is an NP-hard problem. And NP-hard problems of that scale are intractable: they can't be solved using the resources available in the universe, even with perfect/idealized algorithms that haven't yet been invented.

This isn't a proof that AI is impossible, just that the method to develop an AI will need more than just inferential learning from training data.

[–] [email protected] 2 points 3 months ago (1 children)

Doesn't that just say that AI will never be cheap? You can still brute force it, which is more or less how back propagation works.

I don't think "intelligence" needs to have a perfect "solution", it just needs to do things well enough to be useful. Which is how human intelligence developed, evolutionarily - it's absolutely not optimal.

[–] [email protected] 2 points 3 months ago (1 children)

You can still brute force it, which is more or less how back propagation works.

Intractable problems of that scale can't be brute forced because the brute force solution can't be run within the time scale of the universe, using the resources of the universe. If we're talking about maintaining all the computing power of humanity towards a solution and hoping to solve it before the sun expands to cover the earth in about 7.5 billion years, then it's not a real solution.
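
To put rough numbers on "can't be run within the time scale of the universe" (the figures below are back-of-the-envelope assumptions for illustration, not from the paper):

```python
# Brute-forcing an exponential search space, generously assuming a
# million exaflop-class machines running in parallel.
SECONDS_PER_YEAR = 3.15e7
OPS_PER_SECOND = 1e6 * 1e18  # hypothetical total fleet budget

def years_to_enumerate(n_bits):
    """Years needed to try all 2**n_bits candidate solutions."""
    return 2.0 ** n_bits / (OPS_PER_SECOND * SECONDS_PER_YEAR)

for n in (80, 128, 256):
    print(f"2^{n} candidates: {years_to_enumerate(n):.1e} years")
# 2^80  -> ~3.8e-08 years: trivial.
# 2^128 -> ~1.1e+07 years: already hopeless on human timescales.
# 2^256 -> ~3.7e+45 years: the universe is only ~1.4e10 years old.
```

The point of an intractability result is exactly this shape: the cost curve outruns any conceivable hardware budget, so "just throw more compute at it" stops being an answer.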

[–] [email protected] 2 points 3 months ago

Thank you, it was an interesting read.

Unfortunately, as I was looking more into it, I stumbled upon a paper that points out some key problems with the proof. I haven't looked into it more, and tbh my expertise in formal math ends at vague memories from a CS degree almost 10 years ago, but the points do seem to make sense.

https://arxiv.org/html/2411.06498v1

[–] [email protected] 3 points 3 months ago (3 children)

Roger Penrose wrote a whole book on the topic in 1989. https://www.goodreads.com/book/show/179744.The_Emperor_s_New_Mind

His points are well thought out and argued, but my essential takeaway is that a series of switches is not ever going to create a sentient being. The idea is absurd to me, but the people who disagree? They have no proof, just a religious fervor, a fanaticism. Simply stated, they want to believe.

All the AI of today is the AI of the 1980s, just with more transistors than we could fathom back then; the ideas are the same. After the massive surge from our technology finally catching up with 40-to-60-year-old concepts and algorithms, almost everything since has been just adding much more data, generalizing models, and other tweaks.
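
For what it's worth, the core 1980s recipe being referred to, gradient descent driven by backpropagated errors, fits in a few lines. A toy single-weight version (purely illustrative):

```python
# Fit y = w * x to data generated with w_true = 3, by gradient descent
# on squared error. Modern training differs mainly in scale, not in kind.
data = [(1.0, 3.0), (2.0, 6.0), (3.0, 9.0)]

w = 0.0    # arbitrary starting weight
lr = 0.02  # learning rate

for _ in range(200):
    # d/dw of mean (w*x - y)^2 is mean of 2*(w*x - y)*x
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= lr * grad

print(round(w, 4))  # converges to ~3.0
```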

What is a problem is the complete lack of scalability and the massive energy consumption. We're supposed to dry our clothes at a specific hour of the night, join smart grids to reduce peak air-conditioning load, and scorn Bitcoin because it uses too much electricity, but for an AI that generates images of people with six fingers and other mangled appendages, and that bullshits about anything it doesn't know, for that we need to build nuclear power plants everywhere? It's sickening, really.

So no AGI anytime soon, but I am sure Altman has defined it as anything that can make his net worth $1 billion or more, no matter what he has to say or do.

[–] [email protected] 7 points 3 months ago (1 children)

a series of switches is not ever going to create a sentient being

Is the goal to create a sentient being, or to create something that seems sentient? How would you even tell the difference (assuming it could pass any test a normal human could)?

[–] [email protected] 1 points 3 months ago

Until you can see the human soul under a microscope, we can't make rocks into people.

[–] [email protected] 3 points 3 months ago

I'm pretty sure the simplest way to look at it is that an LLM can only respond, not generate anything on its own without prompting. I wish humans were like that sometimes, especially a few in particular. I would think an AGI would be capable of independent thought, not requiring the prompt.

[–] [email protected] 2 points 4 months ago

There are already a few papers about diminishing returns in LLMs.
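
For context, those papers typically fit loss as a power law in model or data size (e.g. the "Chinchilla" scaling laws), and a power law flattens by construction. A sketch with made-up constants to show the shape:

```python
# L(N) = E + A / N**alpha: irreducible loss E plus a shrinking term.
# These constants are invented for illustration; only the shape matters.
E, A, alpha = 1.7, 400.0, 0.34

def loss(n_params):
    return E + A / n_params ** alpha

prev = None
for n in (1e9, 1e10, 1e11, 1e12):
    cur = loss(n)
    delta = "" if prev is None else f"  (gain: {prev - cur:.3f})"
    print(f"N = {n:.0e}: loss = {cur:.3f}{delta}")
    prev = cur
# Each 10x in parameters buys a smaller improvement: 0.189, 0.086, 0.040.
```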

[–] [email protected] 2 points 4 months ago* (last edited 4 months ago)

I mean, human intelligence is ultimately "just" something too.

And 10 years ago people would often refer to the "Turing test" and imitation games when talking about what is artificial intelligence and what is not.

My complaint about what's now called AI is that it's as similar to intelligence as skin cells grown in the form of a d*ck are similar to a real d*ck with its complexity. Or as a real-size toy building is similar to a real building.

But I disagree that this technology will not be present in a real AGI if it's achieved. I think that it will be.

[–] [email protected] 8 points 4 months ago (41 children)

We've had a definition for AGI for decades. It's a system that can do any cognitive task as well as a human can, or better. Humans are "generally intelligent"; replicate the same thing artificially and you've got AGI.

[–] [email protected] 4 points 3 months ago

Oh yeah!? If I'm so dang smart why am I not generating 100 billion dollars in value?

[–] [email protected] 5 points 4 months ago (4 children)

We taught sand to do math

And now we're teaching it to dream

All the stupid fucks can think to do with it

Is sell more cars

[–] [email protected] 2 points 4 months ago

Cars, and snake oil, and propaganda

[–] [email protected] 2 points 4 months ago

This is just so they can announce at some point in the future that they've achieved AGI to the tune of billions in the stock market.

Except that it isn't AGI.

[–] [email protected] 1 points 4 months ago* (last edited 4 months ago) (3 children)

Why does OpenAI "have" everything and just sit on it, instead of writing a paper or something? They have a watermarking solution that could help make the world a better place and get rid of some of the slop out there... They have a definition of AGI... Yet they release none of it...
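
OpenAI has never published details of its watermarking work, but schemes in the open literature (e.g. the red/green-list watermark of Kirchenbauer et al., 2023) are simple enough to sketch. The hash seeding and the detection rule below are illustrative assumptions, not OpenAI's method:

```python
import hashlib

def is_green(prev_token: str, token: str) -> bool:
    """Pseudorandomly put ~half the vocabulary on a 'green list',
    seeded by the previous token."""
    digest = hashlib.sha256(f"{prev_token}|{token}".encode()).digest()
    return digest[0] % 2 == 0

def green_fraction(tokens: list[str]) -> float:
    """Fraction of tokens that land on their green list."""
    hits = sum(is_green(p, t) for p, t in zip(tokens, tokens[1:]))
    return hits / max(1, len(tokens) - 1)

# A watermarking generator biases sampling toward green tokens, so
# watermarked text scores well above the ~0.5 that ordinary text gets;
# detection is then just a proportion test on this fraction.
print(green_fraction("the quick brown fox jumps over the lazy dog".split()))
```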

Some people even claim they already have a secret AGI, or that ChatGPT 5 will surely be it. I can see how that increases the company's value, and why you'd better not tell the truth there. But with all the other things, it's just silly not to share anything.

Either they're even more greedy than the Metas and Googles out there, or all the articles and "leaks" are just unsubstantiated hype.

[–] [email protected] 3 points 4 months ago

Because OpenAI is anything but open. And they make money selling the idea of AI without actually having AI.

[–] [email protected] 2 points 4 months ago

Because they don't have all the things they claim to have, or they have them only with significant caveats. These things are publicized to fuel the hype, which attracts investor money. It's pretty much the only way they can generate money, since running the business is unsustainable and the next-gen hardware did not magically solve this problem.

[–] [email protected] 1 points 4 months ago

They don't have AGI. AGI also won't happen for a large number of years to come.

What they currently have is a bunch of very powerful statistical probability engines that can predict the next word or pixel. That's it.

AGI is a completely different beast from the current LLM flavors.
