this post was submitted on 11 Apr 2025
43 points (95.7% liked)

Solarpunk

I'm finding it harder and harder to tell whether an image has been generated or not (the main giveaways are disappearing). This is probably going to become a big problem in like half a year's time. Does anyone know of any proof-of-legitimacy projects that are gaining traction? I can imagine news orgs being the first to be hit by this problem. Are they working on anything?

[–] [email protected] 4 points 2 days ago (3 children)

Perhaps a trusted certificate system (similar to https) might work for proving legitimacy?

[–] [email protected] 11 points 2 days ago (1 children)

Certificates like that can only guarantee that the work was published by someone who is who they claim to be; they can't verify how that content came to be in their possession.

[–] [email protected] 1 points 1 day ago (1 children)

Hmm, I see. Wouldn't that be enough if they pledged to only sign real photographs, though?

[–] [email protected] 1 points 1 day ago

Anyone can make such a promise. Verifying that they have actually followed through on it isn't a technical challenge; it's a socioeconomic issue.

[–] [email protected] 2 points 2 days ago

https://contentauthenticity.org/how-it-works

The page is very light on technical detail, but I think this is a system like trusted platform modules (TPMs): a hardware root of trust in the camera holds the private key of an attestation certificate signed by the manufacturer at the time of manufacture, and the camera uses that key to sign the pictures it takes. The consortium is eager for people to adopt this ("open-source software!") and to support displaying and appending to provenance data in their software. The more people do so, the more valuable the special content-authenticating cameras become.
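To make that concrete, here's a rough sketch of what verification under that kind of hardware-rooted chain could look like. This is not the actual C2PA/CAI manifest format, just an illustration assuming ECDSA keys and Python's cryptography library; all the names are made up. A verifier checks that the manufacturer's root signed the device certificate, and that the device's key signed the photo.

```python
# Hedged sketch only: not the real C2PA format, and it assumes ECDSA
# keys throughout. The camera ships a device certificate signed by the
# manufacturer, and each photo carries a signature made by the device's
# private key over the image bytes.
from cryptography import x509
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

def verify_photo(image_bytes: bytes,
                 photo_signature: bytes,
                 device_cert_pem: bytes,
                 manufacturer_root_pem: bytes) -> bool:
    """Return True if the photo's provenance chain checks out."""
    device_cert = x509.load_pem_x509_certificate(device_cert_pem)
    root_cert = x509.load_pem_x509_certificate(manufacturer_root_pem)

    # 1. Was the device certificate issued by the manufacturer's root?
    try:
        root_cert.public_key().verify(
            device_cert.signature,
            device_cert.tbs_certificate_bytes,
            ec.ECDSA(device_cert.signature_hash_algorithm),
        )
    except InvalidSignature:
        return False

    # 2. Was the photo signed by that device's private key?
    try:
        device_cert.public_key().verify(
            photo_signature,
            image_bytes,
            ec.ECDSA(hashes.SHA256()),
        )
    except InvalidSignature:
        return False

    return True
```

Both checks hinge on the device's private key never leaving the camera.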

But TPMs on PCs have not been without vulnerabilities. I seem to recall that some manufacturers used a default or example private key for their CA certificates, or something. Vulnerabilities in the firmware of a content-authenticating camera could be used to jailbreak it and make it sign arbitrary pictures. And, unless the CAI is so completely successful that every cell phone authenticates its pictures (which means we all pay rent to the C2PA), some of the most important images will always be unauthenticated under this scheme.

And the entire scheme of trusted computing relies on wresting ultimate control of a computing device from its owner. That's how other parties can trust the device without trusting the user: it can be guaranteed that there are things the device will not do, even if the user wants it to. This extends the dominance of existing power structures down into the everyday use of the device. What is not permitted, the device will make impossible. And governments may compel the manufacturer to do one thing or another. See "The coming war on general computation," Cory Doctorow, 28c3.

What if your camera refused to take any pictures as long as it's located in Gaza? Or what if spies inserted code into a compulsory firmware update that would cause a camera with a certain serial number to recognize certain faces and edit those people out of pictures that it takes, before it signs them as being super-authentic?

[–] [email protected] 1 points 2 days ago* (last edited 2 days ago)

Camera companies have been working on this. They have been trying to create a system that makes it possible to detect whether an image has been tampered with: https://www.lifewire.com/camera-makers-authentication-prevent-deepfakes-8422784

However, this signature probably just relies on asymmetric cryptography, which would mean the signing key lives on the device and could be extracted and abused.
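To illustrate the concern (illustrative names only, nothing from an actual camera's firmware): once the key is out of the hardware, anyone can produce signatures that verify exactly like the camera's own.

```python
# Sketch of the risk, assuming an Ed25519 signing key as a stand-in for
# whatever scheme a camera maker actually uses; key and image are fake.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

extracted_device_key = Ed25519PrivateKey.generate()  # stands in for a leaked camera key
public_key = extracted_device_key.public_key()       # what verifiers would trust

fake_image = b"bytes of an AI-generated image"
forged_signature = extracted_device_key.sign(fake_image)

# Passes without error: the math can't distinguish a jailbroken camera
# (or an attacker holding the extracted key) from the genuine hardware.
public_key.verify(forged_signature, fake_image)
```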