this post was submitted on 01 Mar 2025
Police in 20 countries have taken down a network that distributed images of child sexual abuse entirely generated by artificial intelligence.

The operation – which spanned European countries including the United Kingdom, as well as Canada, Australia, and New Zealand – is "one of the first cases" involving AI-generated child sexual abuse material, Europol, Europe's law enforcement agency, which supported the action, said in a press release.

Danish authorities led the operation, which resulted in 25 arrests, 33 house searches, and 173 devices being seized.

[–] [email protected] 0 points 2 months ago

Even then, it's not "just as problematic," because unlike the production of the training data itself, the images it outputs don't require any further abuse. It's definitely a moral gray zone, but I don't think anyone would seriously argue that producing AI pictures of this sort is just as bad as taking pictures of actual abuse.

However, I believe the logic here is that these people have likely trained their own models on their own databases of images, so while the authorities are going after people based on AI pictures, it's not really the AI pictures they're after.