In the days after the US Department of Justice (DOJ) published 3.5 million pages of documents related to the late sex offender Jeffrey Epstein, multiple users on X have asked Grok to “unblur” or remove the black boxes covering the faces of children and women in images that were meant to protect their privacy.

[–] Paranoidfactoid@lemmy.world 45 points 17 hours ago (4 children)

How do these AI models generate nude imagery of children without having been trained with data containing illegal images of nude children?

[–] AnarchistArtificer@slrpnk.net 38 points 16 hours ago

The datasets they're trained on do in fact include CSAM. The datasets are so huge that it easily slips through the cracks, and it's usually removed whenever it's found. But I don't know how that actually affects models that have already been trained on the data; to my knowledge, it's not possible to selectively "untrain" a model, so it would have to be retrained from scratch. And this keeps happening: new CSAM being found in training data still crops up in the news from time to time.
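For context on the removal part: once material is flagged, dataset curators typically filter it by matching image hashes against a blocklist (real pipelines use perceptual hashes like PhotoDNA; the SHA-256 hashing and the blocklist entry below are hypothetical stand-ins). A rough sketch of that kind of filter:

```python
import hashlib
from pathlib import Path

# Hypothetical blocklist of hashes of flagged images. Real systems use
# perceptual hashes (e.g. PhotoDNA), which also catch near-duplicates.
BLOCKLIST = {
    "3a7bd3e2360a3d29eea436fcfb7e44c735d117c42d1c1835420b6b9942dd4f1b",
}

def sha256_of(path: Path) -> str:
    """Return the SHA-256 hex digest of a file's bytes."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def filter_dataset(image_dir: Path) -> list[Path]:
    """Keep only images whose hash is not on the blocklist."""
    return [img for img in image_dir.glob("*.jpg")
            if sha256_of(img) not in BLOCKLIST]
```

The catch is that a filter like this only cleans up future training runs; weights that were already trained on the flagged data are unaffected, which is exactly the retrain-from-scratch problem.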

It's one of the many, many problems with generative AI

[–] RedGreenBlue@lemmy.zip 8 points 17 hours ago

Can't ask them to sort that out. Are you anti-AI? That's a crime! /s

[–] Senal@programming.dev 3 points 16 hours ago

Easy answer is: they don't.

Though that's just the ones admitting to it.

A slightly more nuanced answer is: it probably depends. There's likely some inference made between age ranges, but my guess is that it'd be sub-par, given that these models sometimes struggle to reproduce images they have a tonne of actual data for.

[–] calcopiritus@lemmy.world 0 points 12 hours ago (3 children)

Tbf it's not needed. If it can draw children and it can draw nude adults, it can draw nude children.

Just like it doesn't need to have trained on purple geese to draw one. It just needs to know how to draw purple things and how to draw geese.
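You can poke at this with open tools: image generators condition on a text encoder, and you can check whether that encoder places a composed prompt near the sum of its parts. A minimal sketch using the open CLIP checkpoint from Hugging Face (a stand-in for whatever encoder a given generator actually uses):

```python
# Sketch: does the text encoder compose "purple" + "goose" into
# something close to "a purple goose"? Uses the open CLIP checkpoint;
# no relation to Grok or any specific image generator.
import torch
from transformers import CLIPModel, CLIPProcessor

name = "openai/clip-vit-base-patch32"
model = CLIPModel.from_pretrained(name)
processor = CLIPProcessor.from_pretrained(name)

texts = ["a goose", "something purple", "a purple goose"]
inputs = processor(text=texts, return_tensors="pt", padding=True)
with torch.no_grad():
    emb = model.get_text_features(**inputs)
emb = emb / emb.norm(dim=-1, keepdim=True)  # unit-normalize each row

goose, purple, purple_goose = emb
combo = goose + purple
combo = combo / combo.norm()

print("cos(goose + purple, purple goose):", float(combo @ purple_goose))
print("cos(goose alone,    purple goose):", float(goose @ purple_goose))
```

If the naive sum lands closer to "a purple goose" than "a goose" alone does, that's weak evidence the encoder composes concepts it never saw together. It says nothing about whether the image side renders the combination accurately, which is where the wine glass objection below comes in.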

[–] Paranoidfactoid@lemmy.world 1 points 4 hours ago (1 children)

I don't think so. Speaking as a parent.

[–] calcopiritus@lemmy.world 1 points 3 hours ago (1 children)

What, you don't think so?

Why does being a parent give you any authority in this conversation?

[–] Paranoidfactoid@lemmy.world 1 points 1 hour ago* (last edited 1 hour ago)

I have changed diapers and can attest to the anatomical differences between children and adults, so I know an AI can't extrapolate that difference without accurate data capturing it. Without real image data in its training set, the AI would hallucinate something absurd or impossible.

[–] WraithGear@lemmy.world 9 points 12 hours ago* (last edited 12 hours ago) (1 children)

That's not true: a child and an adult are not the same, and AI can't do such things without the training data. It's the full wine glass problem. And the only reason THAT example was fixed, after it was used to demonstrate the methodology problem with AI, is that they literally trained the model on that specific thing to cover it up.

[–] Jarix@lemmy.world 3 points 10 hours ago (1 children)

I'm not saying it wasn't trained on CSAM, or defending any AI.

But your point isn't correct.

What prompts you use and how you request changes can get the same results. Clever prompts already circumvent many hard-wired protections. It's a game of whack-a-mole, and every new iteration of an AI will require different methods to bypass those protections.

If you ask it in the right ways, it will do whatever a prompt tells it to do.

>!You can't tell it to make a nude image of a child, I assume, but you can tell it to make the subject of the last prompt 60% smaller and adjust as necessary to make it believable.!< That probably shouldn't work, but I don't put anything past these assholes.

It doesn't take actual trained images/data if you can just tell it how to get the results you want by using different language that it hasn't been told not to accept.

The AI doesn't know what it is doing, it's simply running points through its system and outputting the results.

[–] MathiasTCK@lemmy.world 1 points 8 hours ago

It still seems pretty random. They'll say they fixed it so it won't do something, but all they likely did was reduce the probability, so we still get screenshots showing what it sometimes lets through.

[–] slampisko@lemmy.world 2 points 11 hours ago (1 children)

That's not exactly true. I don't know about today, but about a year ago I remember reading an article about an image generation model that, across many attempts, couldn't generate a wine glass full to the brim, because all the wine glasses it was trained on were half-filled.

[–] calcopiritus@lemmy.world 1 points 11 hours ago (1 children)

Did it have any full glasses of water? According to my theory, it has to have data for both "full" and "wine".

[–] vala@lemmy.dbzer0.com 2 points 10 hours ago (1 children)

Your theory is more or less incorrect. It can't interpolate as broadly as you think it can.

[–] calcopiritus@lemmy.world 1 points 4 hours ago

The wine thing could prove me wrong if someone could answer my question.

But I don't think my theory is that wild. These models can interpolate, and that is a fact. You can ask for a bear with duck hands and it will draw one. I've seen AI-generated images of things like that on the internet.

Who is to say interpolating nude children from regular children+nude adults is too wild?

Furthermore, you don't need CSAM for photos of nude children.

Children are nude at beaches all the time; there are probably many beach photos on the internet with nude children in the background. That would probably help the model.