this post was submitted on 03 Feb 2026
471 points (95.0% liked)

Technology


In the filings, Anthropic states, as reported by the Washington Post: “Project Panama is our effort to destructively scan all the books in the world. We don’t want it to be known that we are working on this.”

https://archive.ph/HiESW

[–] ch00f@lemmy.world 2 points 2 hours ago (1 children)

I think it's critically important to be very specific about what LLMs are "able to do" vs what they tend to do in practice.

The argument is that the initial training data is sufficiently altered and "transformed" so as not to be breaking copyright. If the model is capable of reproducing the majority of the book unaltered, then we know that is not the case. Whether or not it's easy to access is irrelevant. The fact that the people performing the study had to "jailbreak" the models to get past built-in checks tells you that the models' creators are well aware the models can produce an un-transformed version of the copyrighted work.

From the end-user's perspective, if the model is sufficiently gated from distributing copyrighted works, it doesn't matter what it's inherently capable of. But the argument shouldn't be "the model isn't breaking the law"; it should be "we have a staff of people working around the clock to make sure the model doesn't try to break the law."

[–] FauxLiving@lemmy.world 1 points 1 hour ago* (last edited 1 hour ago) (1 children)

The argument is that the initial training data is sufficiently altered and “transformed” so as not to be breaking copyright. If the model is capable of reproducing the majority of the book unaltered, then we know that is not the case.

We know that the current case law on this topic, as applied in the specific case of training a model on copyrighted material (including books), is that such training is 'highly transformative'.

Some models are capable of reproducing the majority of some books, after hundreds or thousands of prompts (not counting the tens of thousands of prompts required to defeat the explicit safeguards preventing this exact kind of copyright violation), as long as you make the definition of 'reproduce' broad enough (measuring non-contiguous matches, allowing near edits, etc.).
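A toy sketch of that point about match definitions: with a loose, non-contiguous notion of "matching," a lightly edited paraphrase scores far more overlap than any strict verbatim measure would suggest. The texts here are hypothetical and `difflib` is just one convenient way to measure this, not the methodology of any actual study:

```python
# Illustrative sketch (hypothetical texts, not any study's actual methodology):
# how loosely "reproduce" is defined changes the measured overlap a lot.
from difflib import SequenceMatcher

book   = "It was the best of times, it was the worst of times, it was the age of wisdom."
output = "It was the best of times and it was the worst of times; an age of wisdom."

a, b = book.split(), output.split()
m = SequenceMatcher(None, a, b)

# Strict definition: longest contiguous run of identical words.
longest = m.find_longest_match(0, len(a), 0, len(b)).size

# Loose definition: total words in any matching block, contiguous or not.
loose = sum(block.size for block in m.get_matching_blocks())

print(longest, loose)  # → 5 13: the loose count dwarfs the strict one
```

Relax the definition further (near edits, token-level similarity) and the measured "reproduction" grows accordingly.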

Compare that level of 'copyright violation' with how the standard in Authors Guild v. Google, Inc. was applied. In that case, Google had OCR'd copies of books and allowed users (the service is still available today) to full-text search them; it returns a sentence or two of text around the search term.

Not 'kind of similar text where the tokens happen to match several times in a row', but an exact 1:1 copy of text taken directly from a scan of the physical book. The service also hosts high-quality scans of the book covers.

Google's use was considered highly transformative, and it provides far more accurate copies of the exact same books, with far less effort, than a language model that is in many cases trained to resist doing the very thing Google Books has been doing openly and legally for a decade.

LLMs don't get close to this level of fidelity in reproducing a book:

[screenshots comparing a verbatim Google Books snippet with LLM output]

[–] ch00f@lemmy.world 1 points 1 hour ago (1 children)

Interesting. I didn't know about the Google Books case. I agree that it applies here.

[–] FauxLiving@lemmy.world 1 points 52 minutes ago

The case against Meta, in which the copyright claim over training was 'lost', was one of the biggest recent cases to apply Authors Guild v. Google. The judge dismissed the complaint about training while citing Authors Guild v. Google. Meta did have to pay for the books, but once it paid for them it was free to train its models without violating copyright.

Now, there are some differences, so the litigation is still ongoing. For example, one of the key elements was that Google Books and an actual book serve different purposes/commercial markets, so Google Books isn't stealing market share from a written novel.

However, for LLMs and image generators this isn't as clearly true, so there is the possibility that a future judge will carve out an exception for this kind of case... it just hasn't happened yet.