How to detect deepfakes


Being online in 2025 means seeing deepfakes and other AI-generated content every day. Some of it is harmless - funny animal videos and obvious memes. Some is dangerous because it’s provocative or engineered to trigger a (negative) reaction.

It’s very hard to correctly detect every deepfake we encounter, and many don’t matter: if that cute dog wasn’t real, there’s no real consequence. But when something triggers a strong reaction in you, or seems designed to provoke others, it deserves closer attention. Luckily, there are some reliable signs that a video or image was generated by AI (content often called AI slop). I’m not always on alert: being online would be unbearable if we had to constantly scrutinize everything. But whenever something surprises me or seems controversial, to me or to others, I look for these signs.

Below is a list of the main cues I watch for. Like any heuristic, they aren’t perfect, but if you find one or more, there’s a good chance the content is fake. Keep in mind that models are getting better - within a few years it may be nearly impossible to spot deepfakes without forensic tools. Fortunately, we’re not there yet!

The alert signs

First, a few alert signs. If you see any of them, double-check what you’re looking at.

  • Always look for watermarks - they’re a dead giveaway. Certain GenAI tools, like Sora2, add watermarks to their content (see the image below).
    Sample watermark from Sora2.

  • Be suspicious of low-resolution content. Many of the signs below require zooming in and examining small details. One way to hide these flaws is to intentionally lower the resolution. As a rule of thumb, be skeptical of low-resolution content (the sketch after this list shows one way to check this programmatically).
  • Be suspicious of perfection. People are imperfect, and so is nature. If anything looks “too perfect” or “too cute”, check again and see if you notice other signs.
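
Two of these alert signs - resolution and tool-added marks - can even be checked programmatically when you want to triage files in bulk. Here’s a minimal Python sketch using Pillow; the pixel threshold, the keyword list, and the file name are illustrative assumptions of mine, not standards, and since metadata is trivially stripped by re-saving or screenshotting, a clean result proves nothing.

```python
# pip install Pillow
from PIL import Image

# Illustrative heuristics only: the threshold and keyword list are placeholders.
MIN_PIXELS = 512 * 512
GENERATOR_HINTS = ("sora", "dall-e", "midjourney", "stable diffusion", "firefly")

def triage(path: str) -> list[str]:
    """Return alert signs worth a closer look (hints, not proof of fakery)."""
    alerts = []
    img = Image.open(path)

    # Alert sign 1: suspiciously low resolution can hide generation artifacts.
    if img.width * img.height < MIN_PIXELS:
        alerts.append(f"low resolution ({img.width}x{img.height})")

    # Alert sign 2: some tools write their name into PNG text chunks or EXIF.
    blob = " ".join(str(v) for v in img.info.values()).lower()
    exif = img.getexif()
    blob += " " + " ".join(str(exif.get(tag, "")) for tag in exif).lower()
    alerts.extend(f"metadata mentions '{h}'" for h in GENERATOR_HINTS if h in blob)
    return alerts

print(triage("suspicious.jpg"))  # e.g. ["low resolution (480x360)"]
```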

What to look for

Below is a list of common artifacts I look for. You usually want to find more than one - unless the artifact is extremely obvious, like a six-finger hand or a clear physical impossibility. Often the clues are subtle: a slightly warped object, an “unnatural” movement, and so on. This is why low-resolution content is perfect for hiding deepfakes: the artifacts are small (and in videos, often brief), and compression can easily mask them.

Pay attention to secondary details

Most AI tools do a great job generating the main focus of an image (or video), but often miss secondary details like backgrounds and minor objects. Common artifacts include deformed people, distorted body parts (especially hands), incomplete or oddly shaped objects, and other “nonsensical” elements. A frequent source of errors is the boundary between different objects or body parts.

Here are a few examples:

Notice the hands on this deepfake of Emmanuel Macron.

Two big giveaways: the watch’s hour marks are completely wrong (related to the next point, nonsensical text), and the spoon the woman is holding is incomplete.

Be especially mindful of blurred backgrounds with no detail, particularly in photos. Not all blurred backgrounds imply a deepfake, of course, but when real backgrounds contain many details, AI-generated ones often reveal artifacts. You can see this in action on This Person does not exist and in the two faces below.

Notice how the background behind the man makes the fakeness easier to spot: it looks like two different backgrounds were pasted together. The plain background behind the woman gives us no clues. There are other signs, though: the woman’s ear and hair don’t merge cleanly at the top, and on the man, the necklace appears only on one side of his neck and the lower part of the right lens of his glasses is missing.
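
If you like, the “detail-free background” cue can even be quantified. The sketch below is my own illustration, not a tool from the article: it uses the variance of the Laplacian, a common computer-vision proxy for sharpness, to compare a subject patch against a background patch. The file name and crop coordinates are placeholders you’d adjust per image.

```python
# pip install opencv-python
import cv2

def sharpness(gray_patch) -> float:
    """Variance of the Laplacian - a common proxy for local detail."""
    return cv2.Laplacian(gray_patch, cv2.CV_64F).var()

img = cv2.imread("portrait.jpg", cv2.IMREAD_GRAYSCALE)

# Placeholder crops: one patch on the subject's face, one on the background.
subject = img[100:300, 200:400]
background = img[0:150, 0:150]

ratio = sharpness(subject) / max(sharpness(background), 1e-6)
print(f"subject/background sharpness ratio: {ratio:.1f}")
# A very large ratio only means the background carries almost no detail -
# not proof of a fake, but a cue to zoom in and look for pasted-together seams.
```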

Look at text (and font)

I don’t mean text overlaid on the image or video. I mean text on objects or walls. Models are much better at this than before, but they still sometimes generate odd-looking text: inconsistent fonts (look for words where the same letter appears multiple times but looks different), text that changes between frames in a video, nonsensical words, overlapping characters, etc.

Multiple artifacts here: the ramp on the left, the oddly shaped car and building, etc. But look at the text too: besides making no sense, the font is inconsistent. For example, the base of the last L in HELLLOO is shorter than the first two, and the Ds in PPPDUD don’t match.

In this example, many characters are nonsensical. Older deepfake models often produced text like this, and it can still be spotted today.
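
For videos, the “text that changes between frames” cue can be checked semi-automatically. The rough sketch below uses OpenCV and Tesseract OCR to sample frames and flag clips where in-scene text doesn’t read consistently. It’s an illustrative approach of my own, not something from the article: it assumes a static shot at roughly 30 fps, and OCR noise will cause false positives, so treat any hit as a prompt to look closer, not a verdict.

```python
# pip install opencv-python pytesseract  (the Tesseract binary must be installed too)
import cv2
import pytesseract

cap = cv2.VideoCapture("clip.mp4")
readings = []
frame_idx = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    if frame_idx % 30 == 0:  # sample roughly one frame per second
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        readings.append(pytesseract.image_to_string(gray).strip())
    frame_idx += 1
cap.release()

# Text painted on static objects should read identically in every sampled frame;
# generated in-scene text often drifts.
distinct = {r for r in readings if r}
if len(distinct) > 1:
    print("In-scene text changes between frames - a classic generation artifact:")
    for text in distinct:
        print(" -", repr(text))
```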

Think about the physics of the scene

This is especially useful for videos, but it applies to images too.

If motion looks unrealistic - objects moving strangely, passing through each other, or deforming in impossible ways (e.g., legs bending backward) - you’re likely looking at AI-generated content. If supposedly static objects shift or change between frames, that’s another clue.

Notice how the guy moves, especially his legs, and how the mummy falls. This kind of unnatural movement is characteristic of current models.

Even in images, check whether poses and support make sense. In the example below, the mother bird isn’t standing on the branch - you only see the chicks’ feet. She can’t be flying either, since her wings are wrapped around the chicks rather than flapping, so she has nothing to stand on. That means the image is fake.

The mother bird isn’t standing on the branch, and she isn’t flapping her wings either - she’s effectively floating. Also note how the feathers merge into a single blob where the wings meet.

You don’t need to own a Tesla to see the charger is plugged into the wrong place. Also notice how the text, especially on the plate, resembles the bad-text examples above.

Other good indicators include inconsistent lighting or shadows, and odd clothing folds (especially sleeves, edges, and buttons).

Multiple giveaways: the shirt buttons are oddly shaped (zoom in), the area around the coat pocket is deformed, and the folds on the doctor’s right arm look unnatural.

As with everything else on this list, models keep improving and deepfakes keep getting more realistic. One small artifact could be due to how the photo was taken, but if you spot several small defects, chances are the image is generated.

Cartoonish colors/textures

This one is hard to describe, but once you’ve seen it a few times, it becomes easier to notice. Something about the colors, lighting, or texture can look “off”.

I can’t explain it, but the bear’s color immediately triggered my deepfake alarm. Something about its color is off. The mountain and tire also look unusually “flat”.

On its own, I wouldn’t rely on this sign, but if you notice it, start looking for other artifacts.

Beware of morphing objects

This is an artifact to watch for in videos. As an AI model generates a clip, it sometimes needs a few frames to settle on an object’s appearance. Because of this, objects may “flicker” or change shape, especially around the edges. Pay close attention to background objects and text, as they can flicker too. You may also notice these artifacts when an object moves fully or partially out of frame and comes back.

Watch the hitch connecting the Tesla and the trailer. Can you see it morph during the clip?
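
If you want to go beyond eyeballing, flicker in a supposedly static region can be measured with a simple frame difference. The sketch below is an illustration under stated assumptions: the file name, the bounding box, and the spike threshold are placeholders, and any camera motion breaks the “static region” premise.

```python
# pip install opencv-python numpy
import cv2
import numpy as np

cap = cv2.VideoCapture("clip.mp4")
x, y, w, h = 50, 50, 120, 80  # hand-picked box around a supposedly static object
prev, scores = None, []
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # Mean absolute difference of the region between consecutive frames:
    # a static object should score near zero; morphing shows up as spikes.
    roi = cv2.cvtColor(frame[y:y + h, x:x + w], cv2.COLOR_BGR2GRAY)
    if prev is not None:
        scores.append(float(np.mean(cv2.absdiff(roi, prev))))
    prev = roi
cap.release()

scores = np.asarray(scores)
# Flag frames that change far more than typical (the 3-sigma rule is illustrative).
spikes = np.where(scores > scores.mean() + 3 * scores.std())[0]
print(f"possible flicker around frame(s): {spikes.tolist()}")
```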

Practice: some examples

Check the videos below and see if you can spot the signs of generated content. Watch in full screen to make it easier to notice artifacts.

Example 1

Solutions:

Two main artifacts:

  1. The candy and the spider web morph during the video.
  2. The man’s fall is completely unnatural.

Example 2

Solutions:

  1. The garage door falls far too easily and in a very unnatural way.
  2. The badger jumps sideways but lands facing the camera. Because of the video’s low resolution, this is a weak sign as it could be an artifact from video compression.
  3. There’s a Sora watermark, but that one’s too easy :)

Example 3

Video from here.

Solutions:

Trick question! This video is real :D

It’s good to be skeptical, but don’t “overfit” and start seeing fakes everywhere.

Example 4

Solutions:

This one is very hard. The camera is far away and the details are tiny. Still, a few giveaways:

  1. A shadow moves from t=1s to t=3s near the bench by the trees, yet no one could be casting it.
  2. A person appears out of nowhere at t=14s.
  3. The waves move oddly, almost in “slow motion”.
  4. The hillside houses “flicker” or wiggle slightly.

These are subtle! A worrying trend for us deepfake spotters…

Summary

As GenAI content becomes more realistic, it will get harder to separate fact from fiction. I hope the tips above give you an edge and encourage you to be more suspicious of what you see online. AI is lowering the cost of fabricating content: fake news has always existed, but creating and spreading it used to be much harder. If something feels off, look closely, and when in doubt, be skeptical - especially if the content plays into your biases.

If you enjoyed this post, please share it with your friends and family. It would mean a lot to me!

Useful resources