this post was submitted on 21 Dec 2025

Fuck AI

5195 readers
1044 users here now

"We did it, Patrick! We made a technological breakthrough!"

A place for all those who loathe AI to discuss things, post articles, and ridicule the AI hype. Proud supporter of working people. And proud booer of SXSW 2024.

AI, in this case, refers to LLMs, GPT technology, and anything listed as "AI" meant to increase market valuations.

founded 2 years ago
MODERATORS
you are viewing a single comment's thread
view the rest of the comments
[–] jj4211@lemmy.world 3 points 3 weeks ago (1 children)

Ironically, withholding race tends to result in more racist outcomes. Here's an easy example that actually came up. Imagine all you know about a person is that they were arrested regularly. If you had to estimate whether that person was risky based on that alone, with no further data, you would assume they were.

Now add to the data that the person was black in Alabama in the 1950s. Then, reasonably, you decide the arrest record is a useless indicator.

This is what happened when a model latched onto familial arrest records as an indicator of likely recidivism. Because it was denied the context of race, it tended to spit out racist outcomes. People whose grandparents had civil rights protest arrests were flagged.
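The failure mode described above is sometimes called "fairness through unawareness" failing: dropping the race column doesn't help when another feature (arrest counts inflated by historical over-policing) acts as a proxy for it. A minimal sketch with entirely made-up toy data, not the actual model or dataset from the case discussed:

```python
# Synthetic, illustrative records: (group, prior_arrests, reoffended).
# Group "B" is over-policed, so arrest counts are inflated even though
# both groups reoffend at exactly the same rate.
records = [
    ("A", 0, False), ("A", 1, False), ("A", 1, False), ("A", 2, True),
    ("B", 3, False), ("B", 3, False), ("B", 4, False), ("B", 4, True),
]

FLAG_THRESHOLD = 2  # "race-blind" rule: flag anyone with >= 2 prior arrests


def reoffence_rate(group):
    rows = [r for (g, _, r) in records if g == group]
    return sum(rows) / len(rows)


def flag_rate(group):
    rows = [a for (g, a, _) in records if g == group]
    return sum(a >= FLAG_THRESHOLD for a in rows) / len(rows)


# The two groups behave identically...
assert reoffence_rate("A") == reoffence_rate("B") == 0.25

# ...but the race-blind rule, which never sees the group column, flags
# 25% of group A and 100% of group B, because arrest count is a proxy.
print(flag_rate("A"), flag_rate("B"))  # 0.25 1.0
```

With the group column available, a model (or an auditor) could see that arrests carry no extra signal for group B and discount them, which is exactly the "arrest record is a useless indicator" reasoning from the 1950s Alabama example.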

That makes me think of how France has rules against collecting racial and ethnic data in surveys and the like, as part of a "colour blind" policy. There are many problems with this, but it was especially evident during the pandemic. Data from multiple countries showed that non-white people faced a significantly higher risk of dying from COVID-19, likely driven in part by the long-standing and well documented problem of poorer access to healthcare and poorer standards of care once they are actually hospitalised. It is extremely likely that this trend also existed in France during the pandemic, but because ethnicity data wasn't recorded for patients with COVID-19, we have no idea how bad it was. It may well have been worse, because a lack of concrete data can inhibit tangible change for marginalised communities, even if there are robust anti-discrimination laws.

Link if you want to read more

Looking back at the AI example in your comment though, something I find interesting is that one of the groups of people who strongly believe that we should take race context into account in decision making systems like this are the racist right-wingers. Except they want to take it into account in a "their arrest record should count for double" kind of way.

I understand why some progressive people might have the instinct of "race shouldn't be considered at all", but as you discuss, that isn't necessarily an effective strategy in practice. It makes me think of the notion that it's not enough to be non-racist, you have to be anti-racist. In this case, that would mean taking race into account, but in a manner that allows us to work against historical racial inequities.