this post was submitted on 25 Feb 2026
222 points (96.2% liked)

Technology

[–] inclementimmigrant@lemmy.world 142 points 1 week ago* (last edited 1 week ago)

Really reinforces my belief that the stock market is driven by idiots.

Reminds me of this old Kal cartoon:

Granted, AI will probably doom us all, but not how the Substack post says it will.

[–] TropicalDingdong@lemmy.world 117 points 1 week ago (7 children)

I just..

Am I wrong here? Like, look, shame me. I work in machine learning and have since 2012. I don't do any of the LLM shit. I do things like predicting wildfire risk from satellite imagery, or biomass in the Amazon, soil carbon, shit like that.

I've tried all the code assistants. They're fucking crap. There's no building an economy around these things. You'll just get dogshit. There's no building institutions around these things.

[–] WanderingThoughts@europe.pub 41 points 1 week ago

Heh, that's the joke going around now.

AI works, it replaces workers, we lose our jobs.

AI doesn't work, bubble pops, we lose our jobs.

[–] Zwuzelmaus@feddit.org 13 points 1 week ago

They're fucking crap. There's no building an economy around these things.

You are right in every serious part of the world.

But add "venture capital" to the equation and it works out stronger than anything else so far.

[–] Buddahriffic@lemmy.world 6 points 1 week ago

If you want a demo of how bad these AI coding agents are, build a medium-sized script with one, something with a parse -> process -> output flow that isn't trivial. Let it do the debugging, too (like tell it the error message or the unwanted behaviour).
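A minimal sketch of the kind of parse -> process -> output script the comment describes; the CSV format and column names here are made up for illustration:

```python
import csv
import io

def parse(text):
    """Parse CSV text into a list of row dicts."""
    return list(csv.DictReader(io.StringIO(text)))

def process(rows):
    """Sum the 'amount' column per 'category'."""
    totals = {}
    for row in rows:
        cat = row["category"]
        totals[cat] = totals.get(cat, 0.0) + float(row["amount"])
    return totals

def output(totals):
    """Render the totals as sorted report lines."""
    return [f"{cat}: {amt:.2f}" for cat, amt in sorted(totals.items())]

if __name__ == "__main__":
    sample = "category,amount\nfood,9.50\ntravel,20.00\nfood,3.25\n"
    for line in output(process(parse(sample))):
        print(line)  # food: 12.75 / travel: 20.00
```

The point of the exercise is that each stage is simple in isolation, but the glue between them is where an agent's small mistakes compound.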

You'll probably get the desired output if you're using one of the good models.

Now ask it to review the code or optimize it.

If it was a good coding AI, this step shouldn't involve much, as it would have been applying the same reasoning during the code writing process.

But in my experience, this isn't what happens. For a review, it has a lot of notes. It can also find and implement optimizations. The weights are the same; the only difference is that the context of the prompt has changed from "write code" to "optimize code", which affects the correlations involved. There is no "write optimal code", because it's trained on everything and the kitchen sink, so you'll get correlations from good code, newbie coders, and lesson examples of bad ways to do things (especially if they're presented in a "discovery" format, where a prof intended to talk about why the slide is bad but didn't include that on the slide itself).

[–] partofthevoice@lemmy.zip 6 points 1 week ago (1 children)

I think it’s supposed to work like, “well, even if you are right about the massive utility of AI, is that still what we should be aiming for?”

It gets around the combative "you're wrong, AI is garbage" argument. The people boosting AI because they believe that, even if it sucks now, it'll get better… those people can probably understand this argument much more easily.

[–] ageedizzle@piefed.ca 6 points 1 week ago

It sucks, and it's at the point now where we're hitting diminishing returns, so I'm not sure if it will get better.

[–] LincolnsDogFido@lemmy.zip 2 points 1 week ago (1 children)

Your job sounds really cool! How likely is Alberta to be on fire again this year?

[–] TropicalDingdong@lemmy.world 14 points 1 week ago (1 children)

Well for one, that area already burned pretty recently, so it's pretty unlikely to burn again any time soon.

But as part of a larger picture:

The area does experience fire-weather conditions for some portion of the year:

Here we're looking at HDWI (hot dry windy index), where a "loose" definition of fire weather is if HDWI is above 200. HDWI is based on a few factors, namely, how hot it is, how dry it is, and how fast the air is moving. Hot dry air moving quickly = fire weather.
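As a rough illustration (not the commenter's actual model), the HDWI idea above can be sketched as a dryness term times a wind term, with the loose 200 threshold from the comment; the input values below are invented:

```python
def hdwi(vpd_hpa, wind_ms):
    """Toy Hot-Dry-Windy Index: vapour pressure deficit (how hot/dry
    the air is, in hPa) multiplied by wind speed (m/s)."""
    return vpd_hpa * wind_ms

def is_fire_weather(vpd_hpa, wind_ms, threshold=200.0):
    """Loose definition from the comment: fire weather when HDWI > ~200."""
    return hdwi(vpd_hpa, wind_ms) > threshold

# Hypothetical conditions: a hot, dry, windy afternoon vs a calm humid morning.
print(is_fire_weather(vpd_hpa=35.0, wind_ms=8.0))  # HDWI = 280 -> True
print(is_fire_weather(vpd_hpa=10.0, wind_ms=3.0))  # HDWI = 30 -> False
```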

The number of fire weather days per year has been increasing, and in very recent years (the past decade) the rate of change has increased and become statistically significant:

So it's not a particularly fire-prone area, but it's getting worse, and it's getting worse at a faster rate.

That would be the first part of the analysis I would run. After that, we'd look for historically "anomalous" periods. It's not enough to look at averages; that will wash over important features in the data. We need to look for specific periods where fire weather manifests.

This is another way of thinking about fire risk. Here we're going to count the amount of time, after 12 hours, that an area is in sustained fire-weather conditions. Basically, a bit of time in bad conditions isn't the end of the world, but as you stay in fire weather conditions, fire risk increases exponentially (as plants/fuels continue to dry out).

If I were writing an insurance product for you, I would count the number of events in a given magnitude bucket and give you a risk rating. Here, licking my thumb and sticking it in the air, I would say.. "not that bad".
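The event-counting idea above could be sketched like this; the thresholds and duration buckets are illustrative, not anything from the actual analysis:

```python
from collections import Counter

def fire_weather_runs(hdwi_series, threshold=200.0):
    """Lengths of consecutive runs of hours with HDWI above threshold."""
    runs, current = [], 0
    for value in hdwi_series:
        if value > threshold:
            current += 1
        elif current:
            runs.append(current)
            current = 0
    if current:
        runs.append(current)
    return runs

def risk_buckets(runs, edges=(12, 24, 48)):
    """Count events per duration bucket; longer sustained runs mean
    exponentially higher risk as fuels dry out."""
    labels = ["<12h", "12-24h", "24-48h", ">=48h"]
    counts = Counter()
    for run in runs:
        idx = sum(run >= e for e in edges)
        counts[labels[idx]] += 1
    return dict(counts)

# Hypothetical hourly HDWI series: one 3-hour spike, one 14-hour sustained event.
series = [50] * 5 + [250] * 3 + [50] * 4 + [300] * 14 + [50] * 2
runs = fire_weather_runs(series)
print(runs)                # [3, 14]
print(risk_buckets(runs))  # {'<12h': 1, '12-24h': 1}
```

An insurance-style rating would then weight the counts in the longer buckets much more heavily than the short spikes.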

Much of my work is around modeling in the wildland-urban interface. You picked an almost all-wilderness area. Since there are no structures, I can't do the next analysis, but it would look something like this:

Most of my work is about figuring out what the impacts of wildfire on the built environment are going to be. Also, the free structure dataset I have access to doesn't cover Canada and I'm not going to spend money buying the structures for you (unless you REALLY want me to).

Those first figures are all specific to the coordinates you provided. The final figure is just an example.

[–] GamingChairModel@lemmy.world 2 points 1 week ago

It's funny. I see the phrase "AI doomsday scenario" and I immediately picture devastating cascading consequences caused by someone mistakenly putting too much trust in some kind of agentic AI that does things poorly and breaks a lot of big important things.

I'm just not seeing a scenario where AI causes devastating disruption based on its own ultra competence. I'm much more scared of AI incompetence.

[–] msage@programming.dev 1 points 1 week ago

Can I subscribe to your AI posts?

[–] RedstoneValley@sh.itjust.works 65 points 1 week ago (1 children)

The scenario begins with AI agents undergoing a “jump in capability”.

Might as well stop reading there. Another fluff piece about how useful and capable AI supposedly is, disguised as a doomsday scenario. I'm so sick of reading this bullshit. "Agentic AI" based on LLMs does not work reliably yet and very likely never will.

If you complain about bugs in traditional (deterministic) software, you ain't seen nothing yet. A probabilistic system such as an LLM might or might not book the correct flight for you. It might give you the information you have asked for or it might delete your inbox instead.

As a consequence of a system being probabilistic, anything you do with it works or fails based on probabilities. This really is the dumbest timeline.
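That compounding can be made concrete: if each step of an agentic workflow succeeds independently with probability p, an n-step task succeeds with probability p**n. The step count and reliability figures below are illustrative:

```python
def chain_success(p_step, n_steps):
    """Probability an n-step agent run completes with every step correct,
    assuming independent per-step success probability p_step."""
    return p_step ** n_steps

# A 95%-reliable step looks great in isolation...
print(f"{chain_success(0.95, 1):.2f}")   # 0.95
# ...but a 20-step booking workflow built on it fails most of the time.
print(f"{chain_success(0.95, 20):.2f}")  # 0.36
```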

[–] magikmw@piefed.social 13 points 1 week ago

Not to mention agents not being immune to confabulation, what we'd call, if a human did it, "making shit up".

[–] baseball2020@sopuli.xyz 32 points 1 week ago (2 children)

My favourite take so far is the comparison to the introduction of the microwave. Some people really believed that they’d never have to cook again. So what we got was actually a way to make crap quality meals or reheat things when we don’t have time. This is roughly analogous to the output I get from the LLM.

[–] TheBlackLounge@lemmy.zip 24 points 1 week ago (3 children)

You need to learn how to use a microwave. It's way more versatile than people think. And more of a steamer than an oven. https://cookanyday.com/collections/recipes

(This is not an analogy about LLMs anymore.)

[–] xep@discuss.online 3 points 1 week ago

Yeah but let me present my counterpoint, the air-fryer.

[–] JayGray91@piefed.social 3 points 1 week ago

I just wanted to have a gander at some microwave recipes, but that site bombarded me with 3 full-screen promotional pop-ups and still didn't give me any recipes, because I have to tap "show more results" since the first few are products they're selling. At least that's my experience on mobile, using Waterfox with adblock and even a DNS filter.

[–] toynbee@lemmy.world 3 points 1 week ago

Until I actually got to the link, I thought it was going to be Technology Connections.

[–] vacuumflower@lemmy.sdf.org 4 points 1 week ago

You can make a lot of things with a good microwave, but just putting something in and hitting start doesn't work for that purpose, yes.

[–] CileTheSane@lemmy.ca 29 points 1 week ago (4 children)

My reaction to the article:

This was about fears AI will tank the economy? No shit it will.

Reads a little more

Wait, this is about fears AI will be so successful it tanks the economy? Complete bullshit but hey, whatever gets this bubble popped.

Instead of using DoorDash, developers – and civilians – code up their own food delivery apps, all of which compete, fragment the market, and destroy the margins of legacy businesses.

Complete fucking fantasy. Even if AI was so amazing it could code my own delivery app for me in seconds, the food still has to be delivered somehow. But yes, if AI were able to deliver on all of the promises, we'd be fucked; when AI fails to deliver on all of the promises, the bubble will burst and we'll be fucked. Either way, stop investing in AI.

[–] Ruigaard@slrpnk.net 6 points 1 week ago

Yeah, why wouldn't I just call or text the restaurant I want to order from, instead of coding my own app 😂. What a weird example.

[–] yabbadabaddon@lemmy.zip 5 points 1 week ago

No but you don't get it. AI is such an amazing tool, it can do everything and replace everyone. Did you try my new model yet? It's blazing fast and can tell you to drive to the car wash to clean your car. It will revolutionize the world.

[–] dustyData@lemmy.world 4 points 1 week ago

The proponent is a rather successful and rational investor. This was satire, meant to evoke the idea that, if AI were all that the con men are selling, it would collapse the economy. It is not, and everyone knows it, but the point is to highlight the idiocy and try to wake people up to the absurdity. I see it as akin to what "A Modest Proposal" was: a nudge for the most radical AI ideologues to understand the dead-economy and ghost-GDP concepts. If the economy becomes detached from human reality, it will crumble and collapse.

Also, calling doordash a legacy business is completely cursed

[–] Gsus4@mander.xyz 23 points 1 week ago* (last edited 1 week ago) (1 children)

Lol, they sort of seem to know it's all castles in the clouds and any spark, e.g. a Substack post, could trigger the loaded spring. Yet nobody thinks they'll be the ones holding bags of shit.

[–] WanderingThoughts@europe.pub 12 points 1 week ago (1 children)

They kind of know. The dot-com crash killed many companies, and also gave rise to Amazon. They're all just hoping they'll be the one that invested in the next Amazon.

[–] somethingsnappy@lemmy.world 5 points 1 week ago (1 children)

Have they made a profit yet?

[–] WanderingThoughts@europe.pub 14 points 1 week ago (2 children)

Amazon didn't make any profit for a decade and made 360 billion last year. They tell investors that AI will be the same.

[–] HakFoo@lemmy.sdf.org 9 points 1 week ago (1 children)

The difference was that Amazon knew how to make a profit, but was reinvesting into infrastructure plays and bigger fish.

If they had to, they could have been a modestly profitable bookshop in 2002. AWS and monster logistics might not have developed to put them in the 13-digit club though.

Does any AI-centric play have that fundamental fallback? The services that seem to be most effective at direct monetization, the coding tools, are typically running at huge losses. If they raised prices to cover costs, precious few firms would pay basically the salary of a senior dev for an emulation of an enthusiastic junior dev with an affinity for footguns.

The less enterprise-focused products (parasocial toys, image and video gen) will likely try to dip into consumer subs and advertising, but can that generate the cash volumes these platforms demand?

[–] WanderingThoughts@europe.pub 6 points 1 week ago (1 children)

If people would always demand answers to those questions, we wouldn't have speculative bubbles. For now, everybody still seems to believe the "it's the worst it'll ever be right now" and "just more scaling, bro" answers.

[–] HakFoo@lemmy.sdf.org 1 points 5 days ago

Selling consumers on "scaling solves everything" is going to be hard.

If we look at general purpose computation, which had decades of actual scaling-solves-everything growth, you had two influences that made the message resonate with customers:

  • Clear existing applications where more power made the experience straightforwardly better. Your spreadsheet took an hour to recalculate at 8MHz and 20 minutes at 25MHz. A lot of the "bigger model" stuff is plateauing with marginal or spotty gains. If I feed another 5 Internets of data to ChatGPT, will that summarized email be that much better?

  • New applications that could be demoed on specialised high-end hardware and brought down to consumers as more power became available. Think of early CGI on hardware costing tens of millions, and now you can run Blender on a $149 laptop. Since most commercial AI plays are hosted services, there's not much opportunity to tease that way anymore.

[–] Passerby6497@lemmy.world 2 points 1 week ago (1 children)

How much of that profitless decade was just them reinvesting in their company, as opposed to burning money like you're trapped on Everest and need every bit of heat you can get?

[–] WanderingThoughts@europe.pub 1 points 1 week ago

That's the part they didn't tell investors. Some call that the enshittification of the investment market. Lies everywhere.

[–] andallthat@lemmy.world 18 points 1 week ago* (last edited 1 week ago) (3 children)

It's almost funny how all those AI doomsday scenarios are actually meant to prop up investment in AI.

See how Amodei and Altman are usually the ones pushing these narratives about how worried they are by the incredible advancements of their respective companies' creations. They are so, so worried about the demise of the human race and how fast it's coming.

And I sort of understand them, because whatever disruption they are peddling needs to happen very fast or they will all run out of money. But what does it say about the rest of the human race that we are actually buying into it and pouring money into creating a dystopian future?

[–] ZILtoid1991@lemmy.world 6 points 1 week ago (2 children)

It's "Iraq is using PS2s to build ICBMs" all over again...

[–] andallthat@lemmy.world 5 points 1 week ago* (last edited 1 week ago)

It's like watching a real-life version of Avengers, but one where Tony Stark says "hey, this Thanos guy is disrupting industries here!" and teams up with... Thiel and Musk to fund his quest for the Infinity Stones. You know, we can't let China get them first!

[–] vacuumflower@lemmy.sdf.org 2 points 1 week ago

Ads back then were so cool, it felt like real magic and it was not hard to believe the PS2 was that good.

[–] lost_faith@lemmy.ca 2 points 1 week ago

Just re-watched Tron last night and a scene really struck me. Dumont was talking to Lora about how, since the computers are able to think, the humans will stop. That scene had more impact this time through.

[–] sturmblast@lemmy.world 1 points 1 week ago

Anything for a dollar