Signs that can reveal a fake photo (2017)

bbc.com

83 points by hiddencache 4 years ago · 54 comments

ChrisMarshallNY 4 years ago

I remember this, when it was first published.

Good article. One thing about fakes is that, in many cases, they don't need to be super-high-quality. They just need to be good enough to reinforce a narrative for a receptive audience.

An example is that Kerry/Fonda fake. Just looking at it as a thumbnail on my phone, it was easy to see that it was a composite. Also, I have seen both photos in their original contexts. They are actually fairly well-known images in their own right.

That didn't stop a whole lot of folks from thinking it was real. They were already primed.

The comment below, about using an AI "iterative tuner", is probably spot-on. It's only a matter of time before fake photos, videos, and audio are par for the course.

  • wussboy 4 years ago

    I used to think that the internet would be the greatest tool for human peace ever invented. But I was wrong. I didn’t know that humans don’t use reason to determine their position. They use it to justify their position, and Google provides the greatest tool ever built for allowing people to do exactly that.

tinus_hn 4 years ago

https://twitter.com/JackPosobiec/status/1434581638923620360?...

These days you don’t even need to fake the photo, you can just attach the fake drama to a photo of something else and no one will bat an eyelid.

  • 1cvmask 4 years ago

    Rolling Stone didn't even produce a proper retraction (calling it an "update") for their fake news piece built from that picture and its caption, about horse dewormer cases allegedly overwhelming hospitals.

    https://twitter.com/ggreenwald/status/1434854957614780424

  • sluggosaurus 4 years ago

    All the mainstream reputable news organizations use stock photography extensively, so the public is generally willing to accept that the photograph attached to the headline needn't have anything to do with that headline.

    Personally, I think this practice should be ended, or at least greatly reduced. An article about a ship doesn't need a stock photograph of a ship. It probably doesn't even need a photograph of the particular ship the article is discussing, unless there is something visually unusual or notable about that ship. The formula "Ship A is late to port, here is a stock photo of Ship B" is basically worthless. I guess they're tossing a bone to the borderline illiterate who are stuck at the "I can read picture books" level of literacy? But articles themselves are generally written at a higher 'reading level' than that.

  • Wistar 4 years ago

    Snopes often refers to these as "miscaptioned."

    • 1cvmask 4 years ago

      Unfortunately, Snopes miscaptions many things themselves. Where are the fact checkers of the "fact checkers"?

      • dahart 4 years ago

        Got some examples?

        • antifa 4 years ago

          No need to fact check; just declare vague criticism of Snopes every chance you get, pretend it's self-evident, then disappear, because evidence is far-left propaganda.

        • Wistar 4 years ago

          None that I could find. I didn't know Snopes was in the business of captioning at all.

hdm41bc 4 years ago

Is this a solvable problem by requiring camera manufacturers to cryptographically sign photos and videos created on those devices? If that’s in place, then it seems like it could be the basis for a chain of custody of journalistic images, backed by a blockchain. This seems like the only viable solution to me, since any AI-powered solution would just be a cat-and-mouse game.

  • kc0bfv 4 years ago

    In this scenario, manufacturers would almost certainly have to build cameras that cryptographically sign the images and videos. The cameras themselves would have to have that ability, instead of the manufacturers doing the signing.

    And then what would the blockchain provide in this case? A chain of cryptographically signed certificates back to a manufacturer is basically the same system we use on the web today with TLS certs. No blockchain required.

    And a major problem with that system is making sure the camera only signs genuine images. A nation state actor, or even a large political operation, is going to have an incentive to bypass the protections on that camera - perhaps just driving what the CCD is telling the rest of the camera - so they can produce signed fakes.

    That's if they can't just get the private key off the camera, perhaps through a side channel attack - which can be pretty tough to pull off, but is very tough to really defend against. Get a private key out, and it's game over - the fraudster can sign anything.
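
    For reference, the sign-and-verify flow itself is simple; the hard part is exactly that key protection. A sketch with Python's cryptography package (the filename is a placeholder, and the key here lives in software rather than a camera's secure element):

      from cryptography.exceptions import InvalidSignature
      from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

      # In-factory: generate a per-device keypair (the private key never leaves the camera).
      private_key = Ed25519PrivateKey.generate()
      public_key = private_key.public_key()  # published by the manufacturer

      # At capture time, the camera signs the image bytes.
      image_bytes = open("photo.jpg", "rb").read()
      signature = private_key.sign(image_bytes)

      # Later, anyone can check the file hasn't changed since signing.
      try:
          public_key.verify(signature, image_bytes)
          print("signature valid: bytes unchanged since capture")
      except InvalidSignature:
          print("signature invalid: image modified or wrong key")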

    • hdm41bc 4 years ago

      The way I thought that the blockchain would be employed is to use it to track transformations of the image. Post-processing, adding captions, and what not. This would provide an audit trail of changes to the original source image.

      If, in fact, we can’t reliably sign the source image as authentic, then the rest of the system falls apart. It seems like this is the crux of the problem.

      • someguyorother 4 years ago

        That seems to be a DRM problem. Let's say that you want the camera to track all modifications of the picture. Then, analogous to DRM, there's nothing stopping the forger from just replacing the CCD array on the camera with a wire connected to a computer running GIMP.

        To patch the "digital hole", it would be necessary to make the camera tamperproof, or force GIMP to run under a trusted enclave that won't do transformations without a live internet connection, or create an untamperable watermark system to place the transform metadata in the picture itself.

        These are all attempted solutions to the DRM problem. And since DRM doesn't work, I don't think this would either.

        • chasil 4 years ago

          If a signed sha256 is somehow attached in the exif data, it can be removed.

          What digital rights are there to manage? This would be a statement of authenticity, not proliferation control.

          The vendor's private key would have to be stored in the device. How could it be protected from extraction?
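
          One partial answer to the EXIF problem is to hash the decoded pixels rather than the container bytes, so that stripping metadata doesn't change the digest (a sketch with Pillow; the filename is a placeholder, and any re-encode or pixel edit still changes the hash):

            import hashlib

            import numpy as np
            from PIL import Image

            def pixel_digest(path):
                # Decode to raw RGB so container metadata (EXIF etc.) is irrelevant.
                pixels = np.asarray(Image.open(path).convert("RGB"))
                return hashlib.sha256(pixels.tobytes()).hexdigest()

            print(pixel_digest("photo.jpg"))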

    • grumbel 4 years ago

      > And then what would the Blockchain provide in this case?

      The main thing a blockchain provides is a cryptographically secured logbook of history. It doesn't guarantee you that the entries in the logbook are true, but it gets a lot harder to fake history when you can't go back to change your story. You have to fake it right when you claim it happened and hope that nobody else records anything in the logbook that conflicts with your story.
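
      A toy version of that logbook property (illustrative only - a real blockchain adds distributed consensus on top of the hash chaining, and the record contents here are made up):

        import hashlib, json, time

        log = []

        def append(record):
            # Each entry commits to the previous entry's hash.
            prev = log[-1]["hash"] if log else "0" * 64
            body = {"record": record, "prev": prev, "ts": time.time()}
            body["hash"] = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
            log.append(body)

        def verify():
            prev = "0" * 64
            for e in log:
                body = {"record": e["record"], "prev": e["prev"], "ts": e["ts"]}
                digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
                if e["prev"] != prev or e["hash"] != digest:
                    return False
                prev = e["hash"]
            return True

        append({"sha256": "ab12...", "claim": "photo captured"})
        append({"sha256": "cd34...", "claim": "crop + white balance"})
        print(verify())  # True - but edit log[0] and every later hash breaks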

      • kc0bfv 4 years ago

        I can see how a journalistic source could then use this to help prove their integrity. And I like that as a solution for that...

        But - I don't really see that as the issue today. The outlets that are interested in lying don't have to participate in this blockchain chain-of-proof system. The malicious entities, like the political groups cited in the article, definitely don't have to participate. It's still really on the viewer/spreader of the fake images/misinformation to verify the images, and to rely only on verifiable images. And I think a system like that would leave out most of the population, who simply don't care.

        Perhaps my worry about leaving out that chunk of population means this problem is unsolvable - and therefore my point is unfair. But I do think we need some solutions that are practical for widespread acceptance and use. If I can't imagine my parents (who are tech literate) would participate, and can't imagine some of my non-nerd friends wanting to participate, I don't think it solves the problems I'd like systems like this to solve.

        • hdm41bc 4 years ago

          I don’t think most people need to adopt this on their cameras for it to work. My perspective here is that journalistic sources that want to be trusted could employ this system. Along with the media signing and the blockchain, a system would need to be built that simply shows the change log and history of a photo from the source. These journalism websites could just link out to it to demonstrate their veracity.

          Once that’s adopted by the NYTs, WSJs, BBCs of the world, I’m hoping there would be critical mass to pressure more and more journalistic sources to adopt this standard. Eventually, any credible journalism would be on this technology, and anything from an outlet that doesn’t use it would be taken with a grain of salt.

          I agree though that a number of developments would have to happen to make this a reality. I would think that a partnership between NYT and Apple or Nikon could kickstart it though.

    • kkielhofner 4 years ago

      The problem with using certificates is any media signed by a party (by nature) traces directly back to that source/certificate. With a certificate-based approach I can imagine something like Shodan meets Google Image Search being used to make it easier to source media for the purposes of enhancing training for an ML model. Needless to say I have serious concerns about this approach.

      This is why our approach only embeds a random unique identifier in the asset and requires a client to extract the media identifier to verify integrity, provenance, etc.

      There are also two problems at play here - are we trying to verify this media as being as close to the source photons as possible, or are we trying to verify this is what the creator intended to be attributable to them and released for consumption? The reality is everyone from Kim Kardashian to the Associated Press performs some kind of post-sensor processing (anything from cropping and white balance to HEAVY facetuning - who knows what).

      • kc0bfv 4 years ago

        Ok - I like this for some use cases. To restate my understanding so you can tell me I'm wrong if I am:

        I think that it's still the user's job to make sure that they are skeptical of the provenance of any photos that claim to be from, say, the NY Times, that are not viewed in the NYT's viewer (if they were using your system). And then, they should still trust the image only as far as they trust the NYT. But if they're viewing the image the "right" way they can generally believe it's what the NYT intended to put out.

        And perhaps, over time, user behavior would adapt to fit that method of media usage, and it would be commonplace.

        I am skeptical that that "over time" will come to pass. And I think that users will not apply appropriate skepticism or verification to images that fit their presuppositions. And I think malicious players (like some mentioned in the article) will attempt to build and propagate user behavior that goes around this system (sharing media on platforms that don't use the client, for instance).

        And I guess making that broad set of problems harder or impossible is really what I'd like to solve. I can see how your startup makes good behavior possible, and I guess that's a good first step and good business case.

  • amelius 4 years ago

    This might lead in a direction we don't want to go. E.g. camera manufacturers could add DRM so you can't copy photos and movies, fingerprinting for CSAM, etc.

    Just give me the raw image sensor.

    • antifa 4 years ago

      I can totally see someone trying to set this up, and then, instead of any of the benefits actually working as advertised, photography costs $80 in Ethereum per photo.

  • PeterisP 4 years ago

    Assuming that media and consumers will want to consider photos/videos from random everyday people, it would require that:

    1. All manufacturers, including manufacturers of shoddy but cheap mass-market devices (ones that a not-wealthy person would have on them to document interesting events) support that cryptographic signing in all their devices;

    2. None of the signing keys/secrets can be ever extracted from any such devices;

    3. None of these manufacturers or their employees ever generate a valid key (or a million valid keys) of the kind that would be put in a camera of the same model that respected journalists use, but that is instead available to the government where the factory resides, or for sale on some internet forum, to sign whatever misinformation a resourceful agent wants to publish.

    Signing pictures can mostly work with respect to a limited set of secure, trusted hardware manufactured and delivered with a trusted chain of supply, where a single organization is in charge of the keys used and the set of keys is small enough to control properly. E.g. Reuters might use it to certify photos taken by Reuters people using specific Reuters-controlled camera hardware (and they can do that just by ordinary signing of what they publish). But there's no motivation for most people in the world to accept that overhead for the devices they use for photography and video, and there's no single authority to control the keys that everybody else would trust due to international relations.

  • MayeulC 4 years ago

    I was speaking with someone from the military. It seems that's more or less required in some cases for interrogations, taking pictures of evidence, etc., with time-stamping and GPS coordinates, using purpose-built cameras.

    I can easily imagine the camera digitally signing pictures and asking for notarization. But there will always be an analog hole -- and the first faked pictures weren't altered after shooting, the scene was.

    I'm all for fakes being widespread. It makes people more critical of what they see, and protects them against the few that had this capability before.

  • mindslight 4 years ago

    No. "Trusted" hardware merely creates the illusion of a secure system, while allowing those with the resources to defeat it anyway. First, there would be 20 years of bugs after having to root your camera became a thing. Second, unless the sensor modules themselves are made into trusted components, it would be relatively easy to wire up a mock sensor to the "secure" processor. And third, camera makers would eventually be pressured to undermine the system's security invariants, à la Apple.

  • tsimionescu 4 years ago

    Wouldn't filming a good quality screen with a higher refresh rate than the camera FPS defeat this method entirely? Especially so if the desired result is not itself high-def.

  • wussboy 4 years ago

    It is solvable by punishing anyone who posts fake pictures. Since the problem of bad actors in society has existed for millennia, we already know a dozen ways to deal with it. We just haven’t really bothered to apply any of them to the Internet.

    Why we haven’t done that is a different but equally fascinating question.

  • kkielhofner 4 years ago

    Exactly. In fact, that's the approach[0] taken by my new startup.

    [0] https://www.tovera.com/tech

hk-im-ad 4 years ago

With GANs, any fake image detection technique you could derive based on visual data could probably be learned by the discriminator given the right architecture choice.

  • Cthulhu_ 4 years ago

    It's an interesting arms race to watch; you could make a fake image using AI, then fake-image detector software, then connect the two until the AI generates images that are no longer recognizable as fake.

    • hk-im-ad 4 years ago

      This is how a GAN works. There is a generator and a discriminator. The generator tries to fool the discriminator, and the discriminator tries to detect images generated by the generator. As one gets better, so does the other, until progress converges.
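
      A minimal sketch of that loop (PyTorch, with toy 2D points standing in for images, so it runs in seconds):

        import torch
        import torch.nn as nn

        G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 2))
        D = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())
        opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
        opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
        bce = nn.BCELoss()

        for step in range(2000):
            real = torch.randn(64, 2) * 0.5 + torch.tensor([2.0, -1.0])  # "real" data
            fake = G(torch.randn(64, 8))                                 # generated data

            # Discriminator step: push real toward 1, fake toward 0.
            opt_d.zero_grad()
            loss_d = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
            loss_d.backward()
            opt_d.step()

            # Generator step: make the discriminator score fakes as real.
            opt_g.zero_grad()
            loss_g = bce(D(fake), torch.ones(64, 1))
            loss_g.backward()
            opt_g.step()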

    • hwers 4 years ago

      Just a comment: This usually isn't how GAN progress is made. I haven't really seen any GAN that incorporates advances in 'fake detection' directly into its discriminator. Usually it's just GANs getting more data and using smarter math and 'fake predictors' following the development by some 6-12 months.

    • DonHopkins 4 years ago

      Since so many people don't care about the truth, and LARP that they believe fake news and fake images just to own the libs, maybe there's some money to be made by selling them fake fake-image-detector software, like Hotdog Or Not: one app that always claims a photo is real, and another app that always claims a photo is fake. Or a single app with in-app purchases that lets them pay to choose whatever they want to believe!

      https://www.theverge.com/2017/6/26/15876006/hot-dog-app-andr...

z5h 4 years ago

I tried to prove that crops which do not preserve the photographic centre are detectable: https://physics.stackexchange.com/a/367981/3194

This was after photographers seemed not to believe this was the case: https://photo.stackexchange.com/q/86550/45128

In any case, detecting cropped photos could be a way to detect that something has been intentionally omitted after the fact.
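
The underlying geometry can be checked directly: for a pinhole camera with square pixels, the vanishing points of three mutually orthogonal scene directions form a triangle whose orthocenter is the principal point, which should sit at the centre of an uncropped frame. A sketch (the vanishing-point coordinates and frame size here are hypothetical - in practice you'd get the points by intersecting line segments in the photo):

  import numpy as np

  def orthocenter(a, b, c):
      # Altitude from a: (p - a) . (c - b) = 0; altitude from b: (p - b) . (a - c) = 0.
      a, b, c = (np.asarray(v, dtype=float) for v in (a, b, c))
      A = np.array([c - b, a - c])
      rhs = np.array([np.dot(a, c - b), np.dot(b, a - c)])
      return np.linalg.solve(A, rhs)

  vp_x, vp_y, vp_z = (3050, 410), (-990, 395), (1010, 4880)  # pixel coordinates
  principal_point = orthocenter(vp_x, vp_y, vp_z)
  image_centre = np.array([2000.0, 1500.0])  # half of a 4000x3000 frame
  print(principal_point, np.linalg.norm(principal_point - image_centre))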

  • jobigoud 4 years ago

    But cameras aren't perfect pinholes though. The center of the sensor and the optical axis of the lens are already not perfectly aligned, and the sensor is not necessarily perpendicular to the lens barrel. These distortions might be larger than any change of size due to an object being in the periphery of the field of view, especially for longer focal length where the rays are more parallel.

    • z5h 4 years ago

      Correct. So, in theory it works and in practice it works with limitations. You could probably create a lens/camera calibration profile that could take the practical use further.
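
      That profile is what OpenCV's standard chessboard calibration produces (a sketch following the cv2 tutorial flow; the "calib_*.png" shots and the 9x6 board size are assumptions):

        import glob
        import cv2
        import numpy as np

        # 3D positions of the inner chessboard corners, on the z = 0 plane.
        objp = np.zeros((9 * 6, 3), np.float32)
        objp[:, :2] = np.mgrid[0:9, 0:6].T.reshape(-1, 2)

        obj_points, img_points = [], []
        for path in glob.glob("calib_*.png"):
            gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
            found, corners = cv2.findChessboardCorners(gray, (9, 6))
            if found:
                obj_points.append(objp)
                img_points.append(corners)

        # camera_matrix holds the focal lengths and principal point;
        # dist holds the radial/tangential distortion coefficients.
        ret, camera_matrix, dist, rvecs, tvecs = cv2.calibrateCamera(
            obj_points, img_points, gray.shape[::-1], None, None)
        print(camera_matrix, dist)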

open-source-ux 4 years ago

There are also misleading photos - not fake images but a more subtle attempt to manipulate viewers.

A mundane example: you're browsing a property website, look through the pictures, and then visit a property only to discover the rooms are tiny matchbox-sized spaces. They looked so much more spacious when you viewed them online. You've just discovered wide-angle photography for real estate - it purposely distorts a space to make it look more spacious.

A 'fake' news example: during the coronavirus lockdown, a Danish photo agency, Ritzau Scanpix, commissioned two photographers to shoot the same scenes of people in socially-distanced scenarios from two different perspectives. Were people observing the rules? Or did the type of lens (wide-angle vs telephoto) intentionally give a misleading impression?

The pictures are here - the article is in Danish, but the photos tell the story:

https://nyheder.tv2.dk/samfund/2020-04-26-hvor-taet-er-folk-...

kkielhofner 4 years ago

It's been really interesting to see another uptick lately in media (and HN) coverage of deepfakes, modified media, etc.

There are virtually endless ways to generate ("deepfake") or otherwise modify media. I'm convinced that we're (at most) a couple of software and hardware advancements away from anyone being able to generate or otherwise modify media to the point where it's undetectable (certainly by average media consumers).

This comes up so often on HN that I'm beginning to feel like a shill, but about six months ago I started working on a cryptographic approach to 100% secure media authentication, verification, and provenance with my latest startup, Tovera[0].

By combining traditional approaches (SHA256 checksums) with blockchain (for truly immutable, third-party verification), we have an approach[1] that I'm confident can solve this issue.

[0] https://tovera.com

[1] https://www.tovera.com/tech

  • kc0bfv 4 years ago

    But this only works if all views of the images employ your client... If I download an image (screenshot if necessary), modify it, and host it myself, how does the system work then?

    And, unless all users trust only things viewed securely, and distrust things viewed nonsecurely (out of your client), then misinformation and fake photos can still propagate, right? (Or, how does the system handle this?)

  • tsimionescu 4 years ago

    Blockchain can at best give you chain-of-custody, but it can't help with the real problem - the original source. Trusting the original source requires, well, trust, so a blockchain really adds very little to the solution.

    • kkielhofner 4 years ago

      In our implementation, the source/creator of the media is verified as well. I think of our approach as "Twitter blue checkmark meets certificates" (sort of). Of course a user can always take media and re-upload it in any number of ways, but they can't do so as any other user. One of our next steps is to add identity verification by way of social media accounts and Stripe Identity or another identity verification platform.

      The primary verification source is our API, which interacts with a traditional data store. Blockchain only serves to add additional verification that we (or anyone else) aren't modifying or otherwise tampering with our verification record.

thesz 4 years ago

Cameras often have non-linear radial image distortions. For example, OpenCV's camera calibration process computes these radial distortions along the way [1]. They may not be very significant, but they exist.

[1] https://docs.opencv.org/master/dc/dbb/tutorial_py_calibratio...

Aligning points on a photo outside of the more-or-less linear center region will certainly result in crossing lines, which is what we see in the alignment attempt in the article - the points being aligned are close to the center and close to the edge (where distortion is greatest).

There is no mention of distortions in the entire article.

But some other points are interesting to think about.
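
For what it's worth, once those distortion coefficients are known, OpenCV can remove the radial distortion before any alignment test is attempted. A sketch (the intrinsics and filename are placeholders - in practice they come from a calibration as in [1]):

  import cv2
  import numpy as np

  # Placeholder intrinsics - in practice these come from cv2.calibrateCamera.
  camera_matrix = np.array([[1000.0, 0.0, 960.0],
                            [0.0, 1000.0, 540.0],
                            [0.0, 0.0, 1.0]])
  dist = np.array([-0.28, 0.09, 0.0, 0.0, 0.0])  # k1, k2, p1, p2, k3

  img = cv2.imread("photo.jpg")
  undistorted = cv2.undistort(img, camera_matrix, dist)
  cv2.imwrite("photo_undistorted.jpg", undistorted)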

JacKTrocinskI 4 years ago

Pretty sure Benford's law applies to images and can be used as a tool to detect fakes.

  • MauranKilom 4 years ago

    Interesting idea, but it's not obvious to me in what way you would apply it.

    For example, compositing two images that somehow obey Benford's law should result in something that also obeys it.

    Maybe you mean "Benford's law" as in "just general statistical properties", but I hope you had something more specific in mind.
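
    The specific form that shows up in image forensics applies Benford's law to the first digits of block-DCT coefficient magnitudes, which follow it closely in untouched images; recompression or splicing tends to disturb the fit. A sketch of the test (the filename is a placeholder):

      import numpy as np
      from PIL import Image
      from scipy.fftpack import dct

      img = np.asarray(Image.open("photo.jpg").convert("L"), dtype=float)
      h, w = (d - d % 8 for d in img.shape)
      blocks = img[:h, :w].reshape(h // 8, 8, w // 8, 8).swapaxes(1, 2)
      coeffs = dct(dct(blocks, axis=-1, norm="ortho"), axis=-2, norm="ortho")

      mags = np.abs(coeffs).ravel()
      mags = mags[mags >= 1]  # drop near-zero coefficients
      first = (mags / 10 ** np.floor(np.log10(mags))).astype(int)

      observed = np.bincount(first, minlength=10)[1:10] / len(first)
      benford = np.log10(1 + 1 / np.arange(1, 10))
      print(np.abs(observed - benford).max())  # large deviation -> suspicious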

dang 4 years ago

Discussed at the time:

Signs that can reveal a fake photo - https://news.ycombinator.com/item?id=14670670 - June 2017 (18 comments)

hwers 4 years ago

I wonder why the article has the title '20170629-the-hidden-signs-that-can...' in the URL. That date would suggest it's from the 29th of June 2017 (while the date below the headline says 2020), way before the current breakthroughs in deepfakes.

  • commoner 4 years ago

    Nice catch. The article was republished. Here's an archive from 2019 of the original 2017 version:

    https://web.archive.org/web/20191030232152/https://www.bbc.c...

    • hwers 4 years ago

      I'll be honest and admit that in taking a second look I noticed that they totally admit to this being a republished article from earlier: "[...] BBC Future has brought you in-depth and rigorous stories to help you navigate the current pandemic, but we know that’s not all you want to read. So now we’re dedicating a series to help you escape. We’ll be revisiting our most popular features from the last three years in our Lockdown Longreads.[...]"

      • commoner 4 years ago

        I didn't see that, either. My eyes glossed right over the italics.

      • dang 4 years ago

        Still kind of dodgy not to include the original date, and to display today's date in exactly the same way that a publication date is usually displayed.

  • dang 4 years ago

    We've added the year to the title now. Thanks!
