Deepfake-busting apps can spot even a single pixel out of place

technologyreview.com

10 points by mkm416 7 years ago · 12 comments

gus_massa 7 years ago

The title is very misleading. They store fingerprints of some photos on their server and then compare them with the fingerprint of the viral image. If the viral image is a variant of a previously submitted image, they can spot the difference, even a single-pixel difference. (I don't know whether the algorithm is robust enough to survive cropping, brightness correction, and other modifications that are allowed.) (You can get single-pixel-difference detection with MD5, CRC32, SHA-1, …; the difficult part is avoiding over-detection.)

They can't get an unknown image and classify it as real or deepfake.
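The "single pixel" detection the comment describes is just exact-match hashing. A minimal sketch (toy byte strings standing in for real image files, and a plain Python set standing in for the companies' fingerprint registry):

```python
import hashlib

# Toy "image": raw pixel bytes. A real image format would behave the same,
# since the fingerprint is a hash over the exact bytes.
original = bytes([120, 64, 200] * 1000)          # 1000 RGB pixels
tampered = bytearray(original)
tampered[1500] ^= 1                              # flip one bit in one pixel

registry = {hashlib.sha256(original).hexdigest()}  # fingerprints on file

def seen_before(image_bytes):
    """True only if this exact image was previously registered."""
    return hashlib.sha256(image_bytes).hexdigest() in registry

print(seen_before(original))         # True  - exact match
print(seen_before(bytes(tampered)))  # False - even a 1-bit change misses
```

Note this cuts both ways, as the comment says: any change at all is detected, but an unregistered image (or a legitimately cropped copy) simply fails to match, which tells you nothing about whether it is real or fake.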

gearhart 7 years ago

This appears to be deceptive marketing.

The technology that's being discussed here is just taking a hash of the image at the point when it's created and using a third party service and standard cryptography to authenticate it.

You can be sure that the image was taken using a company's app because the image hash was signed with their key when it was taken, and you can be sure that it hasn't changed since then because they know that a photo with that exact signature was once taken with their app.
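That sign-at-capture, verify-later flow can be sketched in a few lines. A real service would use an asymmetric signature (e.g. ECDSA) so anyone holding the public key can verify; the HMAC below is a stdlib-only stand-in, and all names and data here are illustrative, not the companies' actual API:

```python
import hashlib
import hmac

# Hypothetical capture-time attestation. HMAC with a shared secret stands
# in for a real asymmetric signature scheme to keep this sketch runnable
# with only the standard library.
APP_SECRET = b"key held by the capture app's backend"   # illustrative only

def attest_at_capture(image_bytes):
    """Hash the image the moment it is taken, then sign the hash."""
    digest = hashlib.sha256(image_bytes).digest()
    return hmac.new(APP_SECRET, digest, hashlib.sha256).hexdigest()

def verify_later(image_bytes, signature):
    """Recompute and compare: passes only for the exact original bytes."""
    expected = attest_at_capture(image_bytes)
    return hmac.compare_digest(expected, signature)

photo = b"raw bytes of the captured photo"
sig = attest_at_capture(photo)
print(verify_later(photo, sig))              # True
print(verify_later(photo + b"edit", sig))    # False
```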

Now, let's be honest, that's almost certainly the most sensible way to counter fake imagery - if you want someone to believe a photo is real, prove who took it and when using the same technology that's used to secure your bank account.

However, the implication in all of these overhyped articles about the products is that it's some kind of "war of the robots" in which they're trying to train software to spot changes and Hollywood-esque "inconsistencies".

That's really unhelpful for two reasons:

- it makes anyone even vaguely technical deeply distrust anything they say, since it's not a process that could produce a reliable product (any ML algorithm like this can be gamed, and the whole point here is absolute reliability)

- it implies that this process can be used on any photo, so you could identify that something was fake if it just appeared on the internet; that's not the case here

Here are the two companies, for reference: https://truepic.com https://www.serelay.com

edit: gus_massa made the same point rather more concisely

Yaa101 7 years ago

The problem is not debunking deepfakes; I am sure this can be done easily with algorithms in the long term. The problem with any fake is that it doesn't take much sophistication to convince the majority of people with a lower-than-average IQ and set them up for a witch hunt. This has worked since people first roamed the earth. The problem is that the general public can only be convinced of fakery months to years after the hunted person has died or their reputation has been destroyed, and even then part of the public will keep believing false memes and myths. I think we will never be able to find a solution for that as long as we humans exist.

nathan_long 7 years ago

Proving the legitimacy of a photo or video is a great problem to try to solve in the age of "fake news" accusations and extremely powerful editing software.

  • Arubis 7 years ago

    While I agree in principle, I don’t think merely having this technological capability is sufficient. At this point, unfortunately, accusations of “fake news” seem to be blindly believed without verification even if facts are clearly available and easily accessible.

    • nathan_long 7 years ago

      True. People can always find a reason to justify their belief or disbelief if they really want to.

3pt14159 7 years ago

This is really cool, but I have a question. If we go back to using actual film, does this help at all? I can imagine cameras and recorders for things like war capturing two copies: one digital, one film. If there are unfakeable or hard-to-fake artifacts in film, one could imagine the recordings validating each other. It would also make good insurance against cyberattack from illiberal governments.

  • hsk0823 7 years ago

    Film can be "faked". Negatives can be modified. They're all chemical processes that can be coaxed into producing images that didn't exist when the photo was taken.

ajnin 7 years ago

Even if this technology can spot fakes, humans can't; that's the whole point and danger of such lifelike fakes. People are used to reacting to images in a certain way (I'm sure a large part of which is hardwired in our brains), and we aren't going to start checking cryptographic signatures for everything we see.

sharemywin 7 years ago

Couldn't you use a GAN to build a more sophisticated generator?

resters 7 years ago

Wouldn't a GAN be able to tune it until it passes the deepfake detection test?
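The worry in these last two comments — that any fixed detector can be gamed — can be shown with a toy. Everything below is made up for illustration: a naive statistical "detector" scores images, and a hill-climbing "generator" nudges a fake until it passes. A real GAN optimizes both sides with gradients, but the dynamic is the same:

```python
import random

random.seed(0)

def detector_score(pixels):
    """Naive 'realness' score: real photos in this toy world average ~128."""
    mean = sum(pixels) / len(pixels)
    return 1.0 - min(abs(mean - 128) / 128, 1.0)

def looks_real(pixels, threshold=0.95):
    return detector_score(pixels) >= threshold

fake = [random.randint(0, 60) for _ in range(100)]   # too dark: detected
assert not looks_real(fake)

# "Generator" step: keep pixel tweaks that raise the detector's score.
while not looks_real(fake):
    i = random.randrange(len(fake))
    candidate = fake[:]
    candidate[i] = min(255, candidate[i] + 8)
    if detector_score(candidate) >= detector_score(fake):
        fake = candidate

print(looks_real(fake))   # True: the fixed test has been gamed
```

This is why gearhart's point above matters: a learned detector gives an attacker a score to optimize against, whereas a cryptographic signature over the original bytes leaves nothing to hill-climb.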
