this post was submitted on 12 Apr 2026
371 points (98.4% liked)

News


Young people have grown increasingly skeptical of artificial intelligence, even those who use it daily, according to a new Gallup poll of more than 1,500 people aged 14 to 29.

AI use among Gen Zers has neither declined nor increased since the same poll was conducted in 2025. The latest survey found that use is plateauing among young users, accompanied by rising concern about the technology's consequences.

The findings are significant because Gen Z is “the generation most likely to enter or grow within the workforce over the next decade,” the report notes, meaning that their adoption could determine the trajectory of broader societal AI adoption. Gen Z has already overtaken Boomers in the workforce. Right now, the AI world is preparing for a massive jump in expected demand, and the top tech and financial companies are investing billions upon billions of dollars into building out the supply. Experts have warned that if demand does not pan out exactly as expected in the short term, then it could have disastrous consequences for the economy.

[–] o_oli@lemmy.world 45 points 2 days ago (3 children)

Once you use AI enough you start to peer behind the curtain and see that it's all just a magic trick, not actual magic like it seems at first. So yeah, I think it's unsurprising people would come to this conclusion.

[–] jj4211@lemmy.world 1 points 14 hours ago

I'm surprised it takes so long, honestly. I keep seeing a progression of people who think they've uniquely figured out how to avoid the pitfalls of GenAI mistakes, then get hit with the same mistakes everyone gets hit with and have a shocked Pikachu face when the LLM does something it "promised" not to do. They will not believe anyone telling them that an LLM generating the phrase "I commit to avoiding deleting any data" doesn't mean it actually committed to anything. Even when that fails, they think the LLM saying "I have made a mistake, and I have learned from it and I won't allow it to happen again" means something, and they're shocked again when, surprise, that also doesn't mean anything.

Of course, just last week someone was asking me if I had tried some GenAI stuff because they had been thinking about trying it. Shockingly, some people have managed to avoid it, and I guess they have more folks to burn through...

[–] partofthevoice@lemmy.zip 7 points 1 day ago

Read up on Information Theory. These machines are glorified autocomplete engines, built by exploiting redundancy within language. Like… you give it a billion sentences, and you ask it “what comes after ‘the dog’?” It says something random like “limousine.” You penalize it for the wrong answer, which means it updates its weights to ever so slightly point further away from such nonsense. You then do this hundreds of millions of times, and suddenly the weights start to be pretty well tuned. Input “the dog” might output “sat” now. Good job.
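Roughly what that loop looks like as code, as a toy sketch (tiny made-up vocabulary, one-word context, nothing like a real LLM's architecture or scale):

```python
# Toy sketch of the "penalize the wrong next word, nudge the weights" loop
# described above. Everything here (vocabulary, model size, single-word context)
# is made up for illustration; real LLMs are vastly larger and more complicated.
import torch
import torch.nn.functional as F

vocab = ["the", "dog", "sat", "limousine", "ran"]
stoi = {w: i for i, w in enumerate(vocab)}

embed = torch.nn.Embedding(len(vocab), 8)   # turn a word into a small vector
head = torch.nn.Linear(8, len(vocab))       # score every word as the next token
opt = torch.optim.SGD(list(embed.parameters()) + list(head.parameters()), lr=0.5)

context = torch.tensor([stoi["dog"]])       # "... the dog"
target = torch.tensor([stoi["sat"]])        # the word that actually came next

for _ in range(300):
    logits = head(embed(context))           # raw scores for each candidate word
    loss = F.cross_entropy(logits, target)  # penalty for mass on "limousine" etc.
    opt.zero_grad()
    loss.backward()                         # which way to nudge each weight
    opt.step()                              # nudge it, ever so slightly

probs = F.softmax(head(embed(context)), dim=-1)[0]
print({w: round(probs[i].item(), 3) for i, w in enumerate(vocab)})
# After enough nudges, "sat" ends up with most of the probability after "the dog".
```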

I mean… there’s definitely more fancy stuff going on. But this seems to be something fundamental about it all. Given as much, I can’t help but feel like… yeah… they do suck at what they’re most often used for, and it’s not surprising why.

[–] Paddzr@lemmy.world 5 points 2 days ago (2 children)

It's a tool like any other. It has its uses.

[–] dansemacabreingalone@lemmy.dbzer0.com 7 points 1 day ago (5 children)
[–] compcube@lemmy.world 9 points 1 day ago

Generating BS that upper-level management loves to skim through?

[–] Hakuso@scribe.disroot.org 4 points 1 day ago

An institution using a local model to help organize massive amounts of data, not crawling the whole web stealing everything in sight while being anthropomorphized by corporations trying to sell you a friend or a waifu...

Which you won't be able to run because RAM is $500 now.

There are valid uses for LLMs, but I think everyone who calls it "AI" is definitely running a scam.

[–] TubularTittyFrog@lemmy.world 8 points 1 day ago (1 children)

The same as lorem ipsum: it's great for filling up space with text.

Okay it can do that. That is valid.

[–] leftzero@lemmy.dbzer0.com 5 points 1 day ago (1 children)

Filling up Jensen Huang's pockets. Also Sam Altman and others, but mostly Huang.

[–] dansemacabreingalone@lemmy.dbzer0.com -1 points 1 day ago (1 children)
[–] leftzero@lemmy.dbzer0.com 4 points 1 day ago

I didn't say OpenAI's or Nvidia's, nor their investors' (though, to be fair, Nvidia will probably still end up profiting once the bubble pops, the bastards; after all, in a gold rush the ones selling the mining equipment are the ones who end up making a profit).

I specifically mentioned the scammers on top, who will grab the cash and run as soon as it starts popping.

The economy will end up worse than in the 1929 crash, sure, but not for those bastards.

So, yeah, it can, and it is, because it's what the whole scam was designed for.

[–] CanIFishHere@lemmy.ca -1 points 1 day ago (2 children)

I have a buddy who uses AI to read through contracts to identify high-risk commitments that might cost the company money. There are thousands more uses.

[–] dansemacabreingalone@lemmy.dbzer0.com 10 points 1 day ago (1 children)

Holy shit, that's terrifying; your buddy is criminally negligent. It can't do that reliably. It doesn't do 'reliably'.

[–] CanIFishHere@lemmy.ca 2 points 1 day ago (1 children)

I thought it would be clear because it's a contract, but we are talking about financial risks, not health risks. He is using a corporate-trained AI client. When the AI client finds an issue, he (the human) still reviews it. According to my buddy, his productivity has improved by over 25%.

[–] hark@lemmy.world 2 points 14 hours ago* (last edited 14 hours ago) (1 children)

If the AI misses risks and he doesn't bother checking (since that's where the added productivity comes from), then the company gets to enjoy those risks.

[–] CanIFishHere@lemmy.ca 2 points 12 hours ago

Mistakes cost the company money, no doubt about that.

[–] quack@lemmy.zip 9 points 1 day ago* (last edited 1 day ago) (1 children)

That’s horrifying. I really hope he’s triple-checking everything.

[–] CanIFishHere@lemmy.ca 2 points 1 day ago* (last edited 11 hours ago) (1 children)

He reviews anything the AI flags. As I already mentioned, the AI client is looking for financial risks, e.g. a contract committing the company to something it doesn't have the capability of delivering. I used to do something very similar. One obvious example would be a customer asking for unlimited liability. The company can't commit to that because it could bankrupt them.

A script of Ctrl+F-style searches would be just as useful and more reliable.
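For illustration, roughly the kind of script meant here; the phrase list is invented, and real contract review obviously needs far more than keyword matching:

```python
# Hypothetical Ctrl+F-style scan: flag contract lines containing risky phrases.
# The phrase list is made up for illustration; a real checklist would come from
# the legal/finance team.
import sys

RISK_PHRASES = [
    "unlimited liability",
    "indemnify",
    "liquidated damages",
    "automatic renewal",
]

def flag_risks(path: str) -> None:
    with open(path, encoding="utf-8") as f:
        for lineno, line in enumerate(f, start=1):
            lowered = line.lower()
            for phrase in RISK_PHRASES:
                if phrase in lowered:
                    print(f"{path}:{lineno}: possible risk ({phrase}): {line.strip()}")

if __name__ == "__main__":
    # Usage: python flag_risks.py contract1.txt contract2.txt ...
    for contract in sys.argv[1:]:
        flag_risks(contract)
```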

[–] o_oli@lemmy.world 12 points 2 days ago (1 children)

For sure, it's amazing for some things. But it also appears to do more than it actually does until you become familiar with it. I think everyone new to using AI should quiz it on topics they are knowledgeable in, to realise how much shit it makes up.

Also yeah I'm specifically talking about LLMs because I think that's 95%+ of AI usage right now in volume.

[–] Grandwolf319@sh.itjust.works 15 points 2 days ago* (last edited 2 days ago) (1 children)

> For sure, it's amazing for some things.

I’m still skeptical about this.

Most of those things are usually due to the alternative being intentionally bad.

Like Google becoming bad, or bad company documentation, or corporate-speak emails that could just be straight to the point.

[–] o_oli@lemmy.world 1 points 1 day ago

Maybe? But to give an example of where I think it's been pretty cool: summarising my Dungeons & Dragons session notes, being available to answer questions, and spinning up ideas on the fly. I can take horrible and inconsistent notes with holes in them, and an LLM straightens them all out into any format I need. If I need a small piece of world building and have run out of time, I can get it to spit a few ideas at me; often, generic ideas and tropes are exactly what I'm after. If I forgot something that happened 6 months ago I can just... ask it. It can pull up stuff I noted offhand and totally forgot about, no problem. For this sort of use, where it's like an admin assistant and being inaccurate is totally unimportant, it's a good tool.

Maybe that's a really niche example but it's one of the few cases where I can see long term use with zero downsides.

Ultimately it's powerful at consolidating large volumes of information and letting the user probe at that information. As long as the use case can tolerate inaccuracies and hallucinations, it's fine.
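For what it's worth, a minimal sketch of that kind of "admin assistant over my own notes" use, assuming the openai Python package and a hosted chat API; the model name, file path, and prompts are placeholders:

```python
# Minimal sketch: feed messy session notes to an LLM and ask questions about them.
# Assumes the `openai` package (v1+) and an API key in OPENAI_API_KEY; model name,
# file path, and prompts are placeholders. Fine for low-stakes recall where the
# odd hallucination doesn't matter.
from pathlib import Path
from openai import OpenAI

client = OpenAI()
notes = Path("session_notes.txt").read_text(encoding="utf-8")

def ask(question: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {
                "role": "system",
                "content": "You help run a D&D campaign. Answer only from the notes "
                           "provided, and say so if the notes don't cover something.",
            },
            {"role": "user", "content": f"Notes:\n{notes}\n\nQuestion: {question}"},
        ],
    )
    return response.choices[0].message.content

print(ask("Summarise the last session in five bullet points."))
print(ask("What did the party promise the innkeeper six months ago?"))
```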