this post was submitted on 31 Dec 2025
Ask Lemmy


im new to lemmy and i wanna know your perspective about ai

[–] tal@lemmy.today 2 points 1 week ago* (last edited 1 week ago)

Very bullish long term. I think I can say with pretty good confidence that it's possible to achieve human-level AI, and that doing so would be quite valuable. I think that this will very likely be transformational, on the order of the economic and social change that occurred when the primary sector of the economy stopped being most of what society did and the secondary sector took over, or when the secondary gave way to the tertiary. Each of those shifts changed the fundamental "limiting factor" on production and produced great change in human society.

Hard to estimate which companies or efforts might do well, and the near term is a lot less certain.

In the past, we've had useful and successful technologies, now used every day, that we developed using machine learning. Think of optical character recognition (OCR) or the speech recognition that powers computer phone systems. But they've often taken some time to polish (some here may remember "egg freckles").

There are some companies promising the stars on time and with their particular product, but that's true of every technology.

I don't think that we're going to directly get an advanced AI by scaling up or tweaking LLMs, though maybe such a thing could internally make use of LLMs. The thing that made neural-net approaches take off in the past few years and suddenly have a lot of interesting applications wasn't really a fundamental research breakthrough on the software side. It was scaling up, in hardware, what we'd already been doing in the past.

I think that generative AI can produce things of real value now, and people will, no doubt, continue R&D on ways to do interesting things with it. I think that the real impact here is not so much technically interesting as it is economic. We got a lot of applications in a short period of time, and we are now putting the infrastructure in place to swap more-advanced systems in where the current ones sit.

I generally think that the output of pure LLMs or diffusion models is more interesting when it comes to producing human-consumed output like images. We are tolerant of a lot of errors there; our brains just need to be cued with approximately the right thing. I'm more skeptical about using LLMs to author computer software. I think that the real problems there are going to need AGI and a deeper understanding of the world and the thinking process to automate reasonably. I understand why people want to automate it now: software that can code better software might be a powerful positive feedback loop. But I'm dubious that it's going to be a massive win there, not without more R&D producing more-sophisticated forms of AI.

On "limited AI", I'm interested to see what will happen with models that can translate to and work with 3D models of the world rather than 2D. I think that that might open a lot of doors, and I don't think that the technical hump to getting there is likely all that large.

I think that generative AI speech synth is really neat: the quality relative to the level of effort needed to do a voice is already quite good. I think that one thing we're going to need to see is some kind of annotated markup that includes things like emotional inflection, accent, etc...but we don't have a massive existing training corpus of that the way we do plain text.
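Something in the spirit of W3C SSML, extended with made-up attributes, could fill that role. Note that the `emotion` and `accent` attributes below are hypothetical illustrations, not part of the actual SSML spec (which does define `speak`, `voice`, `prosody`, and `s`):

```xml
<!-- Sketch only: emotion/accent annotations here are hypothetical extensions -->
<speak version="1.1" xmlns="http://www.w3.org/2001/10/synthesis">
  <voice name="narrator">
    <prosody rate="medium" pitch="+5%">
      <s emotion="excited" accent="en-IE">We did it!</s>
      <s emotion="weary">Now we just have to do it again tomorrow.</s>
    </prosody>
  </voice>
</speak>
```

The training-corpus problem is exactly that almost no existing recorded speech comes labeled at this granularity.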

Some of the big questions I have on generative AI:

  • Will we be able to do sparser, MoE-oriented models that have few interconnections among themselves? If so, that might radically change what hardware is required. Instead of needing highly-specialized AI-oriented hardware from Nvidia, maybe a set of smaller GPUs might work.

  • Can we radically improve training time? Right now, the models that people use are trained with a lot of time spent running compute-expensive backpropagation, and what we get is a "snapshot" that doesn't really change afterward. The human brain is in part a neural net, but it is much better at learning new things at low computational cost. Can we radically improve here? My guess is yes.

  • Can we radically improve inference efficiency? My guess is yes, that we probably have very, very inefficient use of computational capacity today relative to a human. Nvidia hardware runs at a gigahertz clock, the human brain at about 90 Hz.

  • Can we radically improve inference efficiency by using functions in the neural net other than a sum-of-products, which I believe is what current hardware is using? CPU-based neural nets used to tend to use a sigmoid activation function. I don't know if the GPU-based ones of today are doing so; I haven't read up on the details. If not, I assume that they will be. But the point is that introducing the sigmoid was a win for neural-net efficiency: having access to that function reduces how many neurons are required to reasonably model a lot of things we'd like to do, like approximating a Boolean function. Maybe we can use a number of different functions and tie those to particular neurons in the net rather than having to approximate all of them via the same function. For example, a computer already has silicon to do integer arithmetic efficiently. Can we provide direct access to that hardware and, using general techniques, train a neural net to incorporate it where doing so is efficient? Learn to use the arithmetic unit to, say, solve arithmetic problems like "What is 1+1?" Or, more interestingly, do so for all other problems that make use of arithmetic?
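On the sparse-MoE question in the first bullet: a minimal sketch of top-k expert routing, with toy sizes and a hand-rolled gating network (all names and dimensions here are illustrative, not any production architecture), shows why only a fraction of the weights need to touch any given token:

```python
import numpy as np

rng = np.random.default_rng(0)

D, N_EXPERTS, TOP_K = 8, 4, 2  # toy sizes, illustrative only

# Each "expert" is just a small dense layer in this sketch.
experts = [rng.normal(size=(D, D)) for _ in range(N_EXPERTS)]
gate_w = rng.normal(size=(D, N_EXPERTS))  # gating network weights

def moe_forward(x):
    """Route x to the TOP_K experts with the highest gate scores; skip the rest."""
    logits = x @ gate_w
    chosen = np.argsort(logits)[-TOP_K:]     # indices of the selected experts
    weights = np.exp(logits[chosen])
    weights /= weights.sum()                 # softmax over the selected experts only
    # Only TOP_K of the N_EXPERTS weight matrices are ever evaluated:
    out = sum(w * (x @ experts[i]) for i, w in zip(chosen, weights))
    return out, chosen

x = rng.normal(size=D)
y, used = moe_forward(x)
print(used)  # only TOP_K expert indices were evaluated
```

If experts rarely need to exchange activations, each one could in principle live on its own small GPU, which is the hardware implication the bullet raises.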
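On the activation-function point in the last bullet: a classic illustration of why a sigmoid helps with Boolean-ish functions is that a single sigmoid neuron can approximate an AND gate. The weights below are hand-picked for the example, not trained:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def and_neuron(a, b):
    # Hand-picked weights: the pre-activation is positive only when both inputs are 1,
    # so the sigmoid output is near 1 for (1, 1) and near 0 otherwise.
    return sigmoid(10 * a + 10 * b - 15)

for a in (0, 1):
    for b in (0, 1):
        print(a, b, round(and_neuron(a, b), 3))
```

Doing the same with purely linear units would take more machinery; that's the sense in which a richer per-neuron function can shrink the network needed for a given task.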