Many of these problems can be solved by introducing a signature chain, where the capture device signs the image at creation and each party that subsequently edits it adds its own signature: Camera → Company A → Company B.
In this example, "Company A" can be a reliable news source, and "Company B" could be an aggregator like Mastodon or Facebook. So long as the chain is intact, the viewer can decide whether they trust every element in the chain and therefore trust the image.
This even allows people to use AI for responsible editing, because you're attacking the real problem: the connection between the creator (in whom you may or may not vest a certain amount of trust) and the media you're looking at.
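As a rough sketch of what such a chain could look like in code (the names and keys are hypothetical, and HMAC is used here as a stand-in where a real system like C2PA would use certificates and asymmetric signatures):

```python
# Minimal sketch of a signature chain. Hypothetical: a real system would
# use asymmetric signatures and certificates, not shared-secret HMACs.
import hashlib
import hmac

def sign_link(image: bytes, prev_sig: bytes, edit: str, key: bytes) -> bytes:
    """Sign the current image bytes, the previous link's signature, and a
    description of the edit, binding this link to the whole chain."""
    payload = hashlib.sha256(image).digest() + prev_sig + edit.encode()
    return hmac.new(key, payload, hashlib.sha256).digest()

# Hypothetical signing keys for each party in the chain.
KEYS = {"Camera": b"camera-key", "Company A": b"a-key", "Company B": b"b-key"}

img = b"...raw capture..."
sig = sign_link(img, b"", "Created", KEYS["Camera"])      # signed at capture
img = b"...cropped by the news source..."
sig = sign_link(img, sig, "Cropping", KEYS["Company A"])  # news source signs its edit
img = b"...recompressed by the aggregator..."
sig = sign_link(img, sig, "Resizing", KEYS["Company B"])  # aggregator signs its edit
```

Because each signature covers the previous one, removing or reordering a link breaks the chain.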
I think you might be assuming that most of the problems I listed are about handling trust in the software that made each modification - perhaps because you only read the first part of my comment. I'm not sure that changing the single signature to a chain really addresses any of them, beyond producing a bigger "hit list" of companies to scrutinize.
For reference, the issues I listed included:
There are plenty of issues with what even a trusted piece of software lets you do to the picture, since trusted software would need to distinguish between a benign edit and one that adds AI. I don't think a signature chain changes much here: the chain just increases the number of involved parties that need to be vetted, without changing what any of them is allowed to do.
I think the main problem with the signature chain is that the chain by itself doesn't let you attribute any particular part of the image to any party in the chain. You can see all the responsible parties, but you have no way of telling which company in the chain signed a given modification. If the chain contains Canon, GIMP, and Adobe, there is no way to tell whether the AI content got in because the Canon camera was hacked, or because GIMP or Adobe had a workaround that let someone replace the image with an AI one. In the case of a malicious edit, it makes little sense for the picture to retain the Canon signature when the entire image could have been changed in Adobe; that essentially puts Canon's reputation on the line for something they may not be responsible for.
This also raises a problem similar to one I mentioned: there would need to be a level of trust for each piece of editing software. You might end up with a world where GIMP is out because nobody trusts it, so you can say goodbye to using any smaller developer's image editor if you want your image to stay verified. That could be a nightmare if providers such as Facebook wanted to use the signature chain to block untrusted uploads; it would penalize using anything but Adobe products, for example.
In short, I don't think a chain changes much besides increasing the number of parties you have to evaluate, complicating validation without helping you attribute a malicious edit to any particular party. And now you have a situation where GIMP, for example, might be blamed for being in the chain when the vulnerability was in Adobe's or Canon's software. My understanding of the question is that the goal is an automatic final determination of authenticity, which I think is infeasible. The chain you've proposed sounds closer to a "web of trust" style system, where every user needs to define their own trust criteria and decide for themselves what to trust, which I think defeats the purpose of preventing gullible people from falling for AI images.
I think you're misunderstanding the purpose behind projects like C2PA. They're not trying to guarantee that the image isn't AI; they're attaching the reputation of the author(s) to the image. If you don't trust the author, then you can't trust the image.
You're right that a chain isn't foolproof. For example, imagine we attached some metadata to each link in the chain; it might look something like this:
| Author | Type |
| --- | --- |
| Alice the Photographer | Created |
| AP photo editing department | Cropping |
| Facebook | Resizing/optimisation |
At any point in the chain, someone could change the image entirely, claim "cropping", and be done with it, but what's important is the chain of custody from the source to your eyeballs. If you don't trust the AP photo editing department to act responsibly, then your trust in the image they've shared with you is already tainted.
Consider your own reaction to a chain that looks like this, for example:
| Author | Type |
| --- | --- |
| Alice the Photographer | Created |
| AP photo editing department | Cropping |
| Infowars | Cropping |
| Facebook | Resizing/optimisation |
It doesn't matter if you trust Alice, AP, and Facebook. The fact that Infowars is in the mix means you've lost trust in the image.
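As a sketch, a viewer's client could make that call automatically by checking every author in the chain against the viewer's own trust list (the chain entries are from the example above; the trust list is hypothetical):

```python
# Hypothetical: reject an image if any link in its chain is untrusted.
TRUSTED = {"Alice the Photographer", "AP photo editing department", "Facebook"}

chain = [
    ("Alice the Photographer", "Created"),
    ("AP photo editing department", "Cropping"),
    ("Infowars", "Cropping"),
    ("Facebook", "Resizing/optimisation"),
]

untrusted = [author for author, _edit in chain if author not in TRUSTED]
if untrusted:
    print("Image untrusted; chain includes:", ", ".join(untrusted))  # Infowars
else:
    print("Every link in the chain is trusted.")
```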
Addressing your points directly:
I think you are misunderstanding my mention of C2PA, which I only brought up offhand as an example of prior art in digital media provenance that takes AI into account. If C2PA is indeed not about making a go/no-go determination of AI presence, then I don't think it's relevant to what OP is asking about, because OP is asking for an "anti-AI proof", and a chain of trust that has to be evaluated on an individual basis doesn't fulfill that role. I also disclaimed my mention of C2PA: I haven't read it and don't know whether it overlaps with this discussion at all. So in short, I'm not misunderstanding C2PA because I'm not talking about C2PA; I just mentioned it as a tangentially related project so that nobody would feel the need to reply with "but you forgot about C2PA".
I think you are glossing over the possibility that someone uses Photoshop to maliciously edit a photo, adding Adobe to the chain of trust. If instead you are suggesting that only individuals sign the chain, then there is no way anyone will bother looking up each random person who edited an image (let alone every photographer) to check whether they're trustworthy. Again, I don't think that lines up with what OP is asking for. In addition, we already have a way to verify the origin of an image: check the source. AP posting an image on their own site is currently equivalent to them signing it, so the only thing the chain adds is some provenance metadata, which I don't think provides any value unless the edit metadata is secured, as I mention below. If you can't find the source, then it's the same as an image without a signature chain. The system doesn't force unverified images to carry an untrustworthy signature chain, so you will mostly have either images with trustworthy signature chains that also include a credit you can check manually, or images with no source and no signature. The only way it can be useful is if checking the signature chain is easier than checking the website of the credited source; if it still requires the user to make the same determination, I don't think it will move the needle beyond making it marginally faster for those who would have checked the source anyway.
I disagree; the entire idea of the signature chain appears to be to identify potentially untrustworthy edits. If you can't be sure that the claimed edit is accurate, then you are deciding based entirely on the identity of the signatory - in which case storing the edit note is moot, because it can't be used to narrow down which signature could be responsible for an AI modification.
The thing is, if you trust AP to be honest about their edits, then you likely already trust them to verify the source - that's something they already do - so the rest of the chain seems moot. To use your own example, I can't see a world where we regularly need to verify that AP didn't take an image edited by Infowars and posted on Facebook, crop it, and sign it with AP's key. That is just about the only situation where I see value in having the whole chain, and it's not a problem we currently have. If you were worried that a trusted source would get their images from an untrusted source, they wouldn't be a trusted source. And when a trusted source posts an image that then gets compressed or reshared, it's on their official account or website, which already vouches for it.
The difference with TLS is that the malicious parties don't own the endpoints, so it's not at all comparable. In the case of a malicious photographer, the malicious party owns the hardware being exploited, and when an attacker has physical access to the hardware, it's almost always game over.
Yes, and that is exactly the problem: it comes down to whether you trust the photographer, meaning each user needs to research the source and make up their own mind. The system changes nothing, because in both cases you need to check the source and decide for yourself. You might argue that at least with a chain of signatures the source is attached to the image, but I don't think that will change anything in practice, since any fake image will simply lack a signature, just as many fake images today are uncredited. The question OP seems to be asking is about a system that can make that determination automatically, because leaving it up to the user to check is exactly the problem we have now.
This relies on everyone maintaining the chain, though, and there is nothing forcing them to do so.
Absolutely, but that's not really the point. If you remove the chain, the file becomes untrusted. We're talking about attaching trust to an image, and a signature chain is how you preserve that trust.
Couldn't you just start a new chain from any point, though?
Yes, but starting a new chain would necessarily reassign the ownership. So if reuters.com created a real image, and then Alex Jones modified it, stripped the headers, and re-created them, the image would no longer appear to be from Reuters, but rather from infowars.com.
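To make that concrete, here is a rough sketch reusing the hypothetical sign_link() helper from the earlier sketch (keys and payloads are made up): stripping the headers discards the original signature, so the doctored image can only be re-signed as a brand-new chain rooted at the new signer.

```python
# Continuing the hypothetical HMAC stand-in from the earlier sketch.
import hashlib
import hmac

def sign_link(image: bytes, prev_sig: bytes, edit: str, key: bytes) -> bytes:
    payload = hashlib.sha256(image).digest() + prev_sig + edit.encode()
    return hmac.new(key, payload, hashlib.sha256).digest()

# reuters.com creates and signs the real image.
original = b"...photo as published..."
reuters_sig = sign_link(original, b"", "Created", b"reuters-key")

# Stripping the headers throws reuters_sig away; the doctored image can
# only be re-signed as the root of a *new* chain.
doctored = b"...modified photo..."
infowars_sig = sign_link(doctored, b"", "Created", b"infowars-key")

# A verifier walking the new chain sees infowars.com as the origin;
# nothing in it claims to come from Reuters any more.
```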