this post was submitted on 03 Apr 2026
115 points (98.3% liked)

Technology
all 47 comments
[–] janewaydidnothingwrong@lemmy.world 1 points 55 minutes ago

in principle this is pretty much exactly the kind of thing we should use complex algorithms for, always in tandem with medical professionals of course. in execution I'm sure we will fuck it up

[–] nonentity@sh.itjust.works 5 points 2 hours ago

Any world where a wholly imaginary concept is treated as a serious replacement for a critical function requiring specialised training is a world that doesn’t want you to exist within it.

This lobotomisation of society is manifest nihilism.

[–] girsaysdoom@sh.itjust.works 4 points 3 hours ago

New technology is released into the medical field and the executive board thinks, "how can we harm people with this?"

[–] arcine@jlai.lu 6 points 4 hours ago (1 children)

Behold: how to kill more people while spending more money!

[–] TheBlackLounge@lemmy.zip -1 points 3 hours ago

This is not about genAI. It's cheap old technology. It's been better than humans for a while now. The problem is legal risks.

[–] billwashere@lemmy.world 15 points 15 hours ago (2 children)

Until the hospital realizes they would be responsible for malpractice suits since there is no doctor to blame.

[–] CmdrShepard49@sh.itjust.works 9 points 15 hours ago* (last edited 15 hours ago) (1 children)

Yeah right, they'll just lobby the government to make them immune from liability by saying "the free market" will take care of any shortcomings.

[–] billwashere@lemmy.world 2 points 15 hours ago

Sigh… likely true.

[–] hitmyspot@aussie.zone 3 points 14 hours ago (1 children)

Oh no, see, they will blame the doctor who was treating them, who would normally get a radiologist's report but instead got an AI report.

AI is actually pretty good at X-ray interpretation, but it does get it wrong, as do radiologists. The safe option is to have the radiologist review the AI output.

[–] TheBlackLounge@lemmy.zip 1 points 3 hours ago

The problem is false negatives. Positive reports would still be reviewed before treatment.

AI already has fewer false negatives than humans. Both together are optimal, but at some point you need to prioritize. A doctor looking at scans could instead be treating a patient.

[–] Photonic@lemmy.world 22 points 20 hours ago
[–] db2@lemmy.world 44 points 1 day ago (1 children)

I can't wait for this bubble to blow up in all their dumb faces.

[–] EvilBit@lemmy.world 47 points 23 hours ago (5 children)

For what it’s worth, “AI” in this context is probably not the content-stealing Generative AI that everyone is trying to cram everywhere it doesn’t belong. This is a much more legitimate application of a similar technology.

I’m not mad about the idea of AI in radiology because it’s a really good fit. A human radiologist can’t compare a hundred similar slices and cross-correlate possible anomalies, whereas AI can. This improves detection and outcomes and is exactly where medical technology is supposed to help.

That said, I don’t think we’ll replace radiologists across the board for a long time. This will be a very useful tool and will probably reduce the number of radiologists required and modify their roles significantly, but it’ll be more like how a single worker with editing software can do work that would have required a small team in the pre-digital days of film.

[–] phutatorius@lemmy.zip 20 points 23 hours ago (2 children)

Yeah, it sounds more like ML. That's a good thing; for one thing, it's reproducible.

LLMs are intrinsically unfit for use in any situation where human life or health is at stake.

[–] Miaou@jlai.lu 1 points 2 hours ago

Everything is reproducible, unless you wire your computer to a radioactive source

[–] EvilBit@lemmy.world 6 points 21 hours ago

Exactly. People keep shoehorning Large Language Models into non-linguistic domains, and that’s dangerous. Human language, with respect to the training sets used, is inherently subjective and imperfect. Healthcare is very fault-intolerant.

[–] db2@lemmy.world 16 points 23 hours ago* (last edited 23 hours ago) (2 children)

The replacing part is the problem. Using a local system to help is fine, but it still requires humans who know what they're doing and what they're looking at.

[–] iopq@lemmy.world 2 points 21 hours ago (1 children)

Sometimes. For example, human + AI teams used to be better than either one alone, but chess AI improved so much that the human partner is actually not helping anymore.

[–] saimen@feddit.org 6 points 19 hours ago (1 children)

But chess is an isolated "system" with clear rules. Reality, and especially medicine, is so much more complicated.

[–] iopq@lemmy.world 1 points 4 hours ago

Chess strategy is extremely complicated and probably will never be completely solved. It will eventually be almost solved, like checkers, when programs just draw against each other or a white win is found.

But we will never actually simulate all games since the number of chess games dwarfs the number of atoms in the universe. So in that sense we will never know what the "correct" move is outside of table base or mate situations. Medicine may actually be less complicated to a machine.

But the only benchmark should be "how good humans are at the task," since you're not trying to be perfect. You only have to provide better results than the current system.
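The scale claim here checks out against commonly cited figures: Shannon's lower bound on the chess game tree is around 10^120, while estimates of atoms in the observable universe sit near 10^80. A quick sanity check (order-of-magnitude estimates, not exact counts):

```python
# Commonly cited order-of-magnitude figures (estimates, not exact counts).
shannon_number = 10**120    # Shannon's lower bound on the chess game tree
atoms_in_universe = 10**80  # typical estimate for the observable universe

# The game tree exceeds the atom count by a factor of ~10^40.
assert shannon_number > atoms_in_universe
ratio = shannon_number // atoms_in_universe
print(ratio == 10**40)  # True
```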

[–] EvilBit@lemmy.world 1 points 21 hours ago* (last edited 21 hours ago)

It doesn’t replace any individual directly. It improves one person’s capability to the extent that there may be fewer needed to do a job. And that’s not a bad thing in my opinion, especially because it can improve the quality of that person’s work at the same time.

Edit to elaborate: I am opposed to replacing humans with AI in general. AI is a tool. But if that tool can empower someone to do more and better work, then I’m not opposed. Using stolen intellectual property to replace creatives with an inherently non-creative slop machine is greedy and evil. Using machine learning trained on medical data sets to let a radiologist more comprehensively and deeply review a frankly overwhelming amount of data to better save lives? I’m cool with that. But I also think that, in line with my stance that AI is a tool, there will likely be a well-trained human operating these tools for a long time before radiologists cease to exist.

[–] saimen@feddit.org 4 points 19 hours ago

The number of radiological examinations is steadily increasing, so there won't be fewer radiologists needed; rather, AI is needed to cope with the increasing workload.

AI has much better sensitivity than humans (finding something out of the norm) and humans have much better specificity (basically saying what a certain finding is). So I could imagine AI screening every examination and a radiologist just going through the findings to verify them. For specific tasks this has already been done for years (e.g. pulmonary nodules).
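The sensitivity/specificity split described above can be sketched with a toy confusion matrix. All numbers below are made up for illustration, not taken from any study:

```python
# Toy illustration of the sensitivity/specificity division of labor.
# All counts are hypothetical.

def sensitivity(tp, fn):
    """True positive rate: share of real findings that get caught."""
    return tp / (tp + fn)

def specificity(tn, fp):
    """True negative rate: share of normal scans correctly cleared."""
    return tn / (tn + fp)

# Hypothetical AI screener: misses little (high sensitivity)
# but flags many normal scans (lower specificity).
ai_sens = sensitivity(tp=98, fn=2)       # 0.98
ai_spec = specificity(tn=850, fp=150)    # 0.85

# Hypothetical radiologist reviewing only the 250 flagged cases:
# good at saying what a flagged finding actually is.
dr_spec = specificity(tn=140, fp=10)     # ~0.93

print(ai_sens, ai_spec, dr_spec)
```

The point of chaining them is that the screener's high sensitivity sets the miss rate, while the human reviewer's specificity cleans up the false alarms.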

[–] Grandwolf319@sh.itjust.works 4 points 19 hours ago (1 children)

That assumes it’s done additively.

I think a lot of these AI automation promises come down to:

Are you adding a tool, thereby increasing the overall quality of service (and the cost)?

Or are you trying to reduce cost, even if it means reducing service quality?

The first one doesn’t take any job away and makes everything just a bit better but more expensive.

The second one is a race to the bottom strategy that just comes down to capitalism doing its thing.

[–] EvilBit@lemmy.world 1 points 18 hours ago (1 children)

Too many billionaires are salivating over the latter.

[–] FauxLiving@lemmy.world 1 points 14 hours ago

Let them eat cake

[–] frongt@lemmy.zip 0 points 21 hours ago

If it's done properly, sure.

Last time this was in the news, they found that AI had an insanely good accuracy at identifying cancer! Until they realized it was because they included the hospital info in the training data, so it was identifying "cancer" by seeing they were at a cancer treatment facility.
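That leakage failure mode is easy to reproduce: if any feature encodes where the label came from (like the treating facility), a model can score perfectly without learning anything about the images. A toy sketch with hypothetical data and a deliberately broken "model":

```python
# Toy demonstration of label leakage via a site/hospital feature.
# Each record is (hospital, has_cancer); the "model" never sees a scan.
records = [
    ("oncology_center", True),
    ("oncology_center", True),
    ("general_hospital", False),
    ("general_hospital", False),
    ("oncology_center", True),
    ("general_hospital", False),
]

def leaky_model(hospital):
    # "Learns" only that the cancer center sends cancer cases.
    return hospital == "oncology_center"

accuracy = sum(leaky_model(h) == label for h, label in records) / len(records)
print(accuracy)  # 1.0 -- a perfect score with zero medical knowledge
```

Which is why held-out evaluation has to come from sites (and scanners) the model never trained on.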

[–] darkangelazuarl@lemmy.world 25 points 22 hours ago (1 children)

So the machine learning version of AI can be very useful for this, because it can identify cancer or other issues earlier in some contexts by looking for patterns that we don't see. But yeah, this is just another tool that should be used alongside professionals in the field.

[–] saimen@feddit.org 7 points 19 hours ago* (last edited 19 hours ago)

Exactly. This "AI replacing humans" rhetoric is just marketing from the tech bros, because otherwise they would only be selling very expensive tools that give professionals a minor edge but mostly aren't worth the (high) costs (at the moment).

[–] Brewchin@lemmy.world 3 points 16 hours ago

Yet another example of "CEO said a thing".

[–] Sanctus@anarchist.nexus 10 points 23 hours ago

Don't listen to shit coming out of America. We're probably 85% of the reason why the world sucks.

[–] surfrock66@lemmy.world 7 points 23 hours ago (1 children)

Devil's advocate...this is one of the few good opportunities for ML. If you train a model on a specific dataset with expert validation, this has the opportunity to save lives.

First, radiology isn't one thing; different radiologists with different expertise looking at the same imaging can see different things. Second, there are not enough radiologists: my wife is an ER doctor who only does overnights, and her hospital network has a central radiology center that reviews all films from all the hospitals; it's always backlogged, and waiting on results impacts outcomes in a real way. Third, there are simply human limits to what we can visually perceive; take a look at this study: https://pmc.ncbi.nlm.nih.gov/articles/PMC3964612/

Radiological ML models could change healthcare. Imagine a world where, as part of your annual preventative care, you get a full-body CT. The ML model can compare your CT with references in your demographic AND your prior years, and find changes/issues before they're crises. That's simply not an amount of data analysis that could be done by an army of radiologists, and it has the opportunity to spot things like tumors or organ swelling way earlier. I get that the late-stage-capitalism reality is "they'll use the data to farm money out of you," but from an actual technological standpoint, this could have real life-saving and life-improving implications for a lot of people, and it removes a huge bottleneck in healthcare.

[–] EvergreenGuru@lemmy.world 8 points 23 hours ago (2 children)

Even if AI does the job of reading medical imaging extremely well, I’d still want a radiologist to double-check the scans.

[–] saimen@feddit.org 4 points 18 hours ago* (last edited 18 hours ago)

And it clearly would be necessary, because even these highly sophisticated models would only look for what they are trained for, and they will produce a lot of arbitrary/non-relevant findings.

Also someone has to take responsibility. The moment software firms are willing to take full responsibility without disclaimers I will start to believe they might be able to replace some people.

[–] surfrock66@lemmy.world 2 points 23 hours ago (1 children)

I see it as: bulk imaging goes through the ML, which flags things and orders additional, more detailed, targeted scans, and those elevate to radiologists.
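That workflow could be sketched as a simple triage rule. The threshold, score, and study names below are all hypothetical:

```python
# Hypothetical triage routing for the workflow described above:
# ML screens everything; only flagged studies reach a radiologist.

FLAG_THRESHOLD = 0.3  # deliberately low: favor sensitivity over specificity

def route_scan(study_id, ml_score):
    """Return the next step for a study given its ML anomaly score."""
    if ml_score >= FLAG_THRESHOLD:
        return (study_id, "order_detailed_scan_and_radiologist_review")
    return (study_id, "file_as_unremarkable")

# Example queue of (study, score) pairs from the screening model.
queue = [("ct-001", 0.05), ("ct-002", 0.72), ("ct-003", 0.31)]
for study, score in queue:
    print(route_scan(study, score))
```

The design choice is in the threshold: set it low so the model almost never files a real finding as unremarkable, and let the humans absorb the extra reviews.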

[–] SpaceNoodle@lemmy.world 1 points 22 hours ago (1 children)

Instead, they're just gonna throw away the radiologists and make everything computer.

[–] FauxLiving@lemmy.world 0 points 14 hours ago

Anyone who does that will find themselves quickly out of business and bankrupt from lawsuits.

The headline is a fantasy, it's a tool that augments professionals in some situations. It doesn't replace them.

[–] thoralf@discuss.familie-will.at 4 points 21 hours ago (1 children)

Well, the management says so. I doubt that any trained physician would say the same.

[–] badgermurphy@lemmy.world 1 points 7 hours ago

The management will say something else when they realize their plan makes them uninsurable.

[–] sturmblast@lemmy.world 1 points 20 hours ago
[–] PityPityBangBang@lemmy.world -3 points 23 hours ago (3 children)

I remember a radiologist posting their income on Reddit.

Something like $800,000/year.

It'd be sad to see that get taken from people who work that job.

[–] Photonic@lemmy.world 4 points 20 hours ago

That’s just in the USA. This will affect doctors worldwide.

That being said, the workload on radiologists has been increasing year over year, so we need something to help take it off. In the UK it can sometimes take 3 months to get a report on a scan.

[–] ChaosMonkey@lemmy.dbzer0.com 2 points 22 hours ago

At least you would suppose they have some savings and own some property. Losing your job when you live paycheck to paycheck is far worse.

[–] iopq@lemmy.world 0 points 21 hours ago (1 children)

They came for the poor, and being a radiologist, I didn't speak out. Then they came for the millionaires, and there was nobody left to speak for me. /s

[–] a4ng3l@lemmy.world 1 points 20 hours ago (1 children)

The thing with AI is that it has yet to come for the poors…

[–] iopq@lemmy.world 1 points 4 hours ago (1 children)

Poor artists beg to differ

[–] a4ng3l@lemmy.world 1 points 2 hours ago

Fair point. It’s so sad to think about artists being poor to start with :-/