this post was submitted on 23 Mar 2025
91 points (88.2% liked)

Futurology

2446 readers

founded 2 years ago
[–] [email protected] 38 points 2 weeks ago (12 children)

In the medical industry, AI should stick to "look at this, it may be something, and you must confirm it." Any program that claims to "100% outperform doctors" is bullshit and dangerous.

[–] [email protected] 6 points 2 weeks ago (7 children)
[–] [email protected] 13 points 2 weeks ago

Because, even today, you can't have, and will never have, a 100% reliable answer.

You need at least two different validators to reduce the probability of errors. And you can't just run the same AI check twice, because both runs will share the same flaws. You need to check it from a different point of view, whether in terms of technology or of resources/people.
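The dual-validator idea above can be sketched in a few lines. This is a hypothetical illustration, not any real medical system: `ml_check` and `rule_check` stand in for two independently built validators (say, an ML model and a rule-based check), and the scores and thresholds are made up. The point is that agreement is required for an automatic verdict, and any disagreement escalates to a human.

```python
# Hypothetical sketch of dual independent validation: two diverse
# checkers must agree before a result stands on its own; any
# disagreement escalates to a human reviewer. All names, scores,
# and thresholds here are illustrative assumptions.

def ml_check(scan_score: float) -> bool:
    # Stand-in for an ML classifier's verdict.
    return scan_score > 0.8

def rule_check(scan_score: float) -> bool:
    # Stand-in for an independent, differently built validator
    # (different technology, so it doesn't share the same flaws).
    return scan_score > 0.9

def triage(scan_score: float) -> str:
    a, b = ml_check(scan_score), rule_check(scan_score)
    if a and b:
        return "flag: confirmed by both validators"
    if a or b:
        return "escalate: validators disagree, human review required"
    return "clear: neither validator flagged"

print(triage(0.95))  # both agree -> flag
print(triage(0.85))  # one flags -> human review
print(triage(0.50))  # neither flags -> clear
```

Running the same model twice would make `a` and `b` perfectly correlated, which is exactly the failure mode the comment warns against; diversity between the two validators is what makes the redundancy meaningful.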

This is the principle we have applied in aeronautics for decades, and even with these layers of precaution and security, accidents still happen.

ML is like the aircraft industry a century ago: safety rules will be written in the blood of this technology's victims.
