AI Face/Off — Fawkes vs. NicOrNot


Testing Facial Recognition Evasion on Nic Cage

Gant Laborde

I’m sure you’ve seen the news. Company after company is trying to identify who you are with facial recognition. Clearview AI is scraping the web for photos of you, and AI companies are scrambling to build recognition algorithms that can identify you, even under your masks! The debate is still raging over what is public information and what crosses the line into personal privacy. However, you don’t have to wait for the jury to start protecting your face today.


Graphic from arXiv:1907.06724v1

In the facial recognition dystopia of 2020, heroes skilled in the ways of AI are also fighting back. If you haven’t heard of Fawkes, it’s a privacy-protection tool from SAND Lab at the University of Chicago that modifies your personal images so that humans can still recognize you, but AI cannot. The tool is named after Guy Fawkes, whose mask has become the face of Anonymous.

This software claims to produce a new image that is imperceptibly different to humans yet confounding to state-of-the-art facial recognition systems.

Can you tell the difference between the “Cloaked” image and the original?
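One way to sanity-check the “imperceptibly different” claim yourself is to diff the two versions pixel by pixel. Here’s a minimal NumPy sketch — the arrays are made-up stand-ins, not actual Fawkes output; real photos would be loaded with a library like Pillow:

```python
import numpy as np

# Stand-ins for the real photos: an "original" and a copy with tiny
# per-pixel tweaks (values are illustrative, not produced by Fawkes).
rng = np.random.default_rng(0)
original = rng.integers(0, 256, size=(64, 64, 3)).astype(np.int16)
cloaked = np.clip(original + rng.integers(-3, 4, size=original.shape), 0, 255)

# L-infinity distance: the largest change to any single channel value.
diff = np.abs(cloaked - original)
print("max per-pixel change:", diff.max())
print("mean per-pixel change:", round(float(diff.mean()), 2))
```

Keeping that maximum per-pixel change small is exactly why the two photos look identical to the human eye.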

BUT, you might ask yourself, how do we know this actually works? And that’s where it gets fun. We need an AI facial recognition “before and after”. Of course, we’ll be using some of the most useful AI on the market: NicOrNot.com — a website that answers the age-old question, “Is the famed actor Nicolas Cage in any given photo?”


“Surely you can’t be serious!” I am, and stop calling me Shirley. With this (useful) online tool, we can take a photo of Nic Cage that has guaranteed detection, then take that exact photo, run it through Fawkes, and see what happens.

To do this, I’m taking one of the de-facto Nic Cage photos that always tests positive for Nic, and I’m going to download the Fawkes July 2020 v0.3 software package available on the official website:


https://sandlab.cs.uchicago.edu/fawkes/#code

The software is quite simple: you select one or more images for Fawkes to process, and each image takes about two minutes to “cloak.”


“Give me the results already, Gant!”

Armed with a cloaked image of Nic, we can now upload the image to Nic or Not and verify if Fawkes does what it claims!

🥁🥁🥁🥁🥁🥁🥁🥁🥁🥁🥁🥁🥁🥁🥁🥁🥁


BOOM! Hidden like a National Treasure!

Fawkes has successfully fooled NicOrNot.com! So let’s nerd out for a minute here. What’s actually happening? Is the AI detecting his face but failing to recognize who it is? Or is it missing his face altogether?

Typical adversarial perturbation (a fancy term for adding carefully crafted noise that confuses AI) modifies specific pixels to make a model misclassify an image altogether. For instance, you can add perturbation to an image to make AI think a photo of a dog is actually a boat. This generally takes a good bit of work (more than two minutes) but is extremely effective.
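The dog-to-boat trick can be sketched in a few lines. This is a toy FGSM-style example against a hand-built linear classifier — not Fawkes’s actual algorithm, which optimizes against face feature extractors, but the same core idea:

```python
import numpy as np

# A toy linear classifier over 100 "pixels": score > 0 means "dog",
# score < 0 means "boat". Weights alternate sign to keep things deterministic.
w = np.where(np.arange(100) % 2 == 0, 1.0, -1.0)
x = 0.05 * w  # an input the model confidently scores as "dog"

# FGSM-style step: nudge every pixel against the sign of the gradient.
# For a linear model, the gradient of the score with respect to x is w.
eps = 0.1
x_adv = x - eps * np.sign(w)

print("clean score:", w @ x)            # positive -> classified "dog"
print("adversarial score:", w @ x_adv)  # pushed negative -> "boat"
```

Each pixel moves by only 0.1, yet the classification flips completely. Fawkes applies the same principle, but aims it at the face-embedding space instead of a final label.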


So my first guess was that Fawkes stops AI from seeing anything at all, not even a face. Scraping companies would have to overcome that, but once they could put a box around your face again, they’d be right back in business identifying you. They might even be able to flag you as someone who protects their photos, since you’d be posting images with no detectable faces.

However, that’s not what I found with Fawkes!

I ran three facial detection models on the “cloaked” image, and all three found Nic’s face, identifying his facial landmarks with extreme confidence! The landmarks were just… well… ever-so-slightly different!

AI Facial features cloaked vs not cloaked
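You can put a number on “ever-so-slightly different” by comparing the landmark coordinates a detector reports for each version of the photo. A sketch with made-up coordinates — real values would come from whichever detector you run:

```python
import numpy as np

# Hypothetical (x, y) pixel coordinates for five facial landmarks
# (eyes, nose, mouth corners) from one detector -- illustrative only.
original_pts = np.array([[120, 95], [180, 96], [150, 140],
                         [130, 180], [170, 181]], dtype=float)
cloaked_pts = np.array([[121, 96], [179, 95], [152, 141],
                        [129, 181], [171, 180]], dtype=float)

# Euclidean drift per landmark: the face is still found, but every
# reported feature position shifts by a pixel or two.
drift = np.linalg.norm(cloaked_pts - original_pts, axis=1)
print("per-landmark drift (px):", np.round(drift, 2))
print("max drift (px):", round(float(drift.max()), 2))
```

A drift of a couple of pixels is invisible in the photo itself, but it’s enough to shift the face embedding away from the real Nic.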

That’s really impressive! Not only is the change imperceptible to a human, it’s not even obvious to the detectors that the image has been cloaked. Which raises the question: how durable is this cloak?

I noticed the file size of the Fawkes image was quite a bit larger than the original, and that worried me for a moment.

What’s the cost of this optimization beyond the two minutes? Is the file larger out of necessity, or will the cloak fail to protect once the image is resampled or downsampled?! I was pretty sure I knew the answer, but 🧪 FOR SCIENCE! I had to check.


I squeezed the cloaked image back down to 32 KB (a 92% reduction, matching the original image’s file size) using an online tool called Squoosh.app.
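If you’d rather script the squeeze than use Squoosh.app, a binary search over JPEG quality gets you under a target file size. A sketch using Pillow — the helper name and target are mine, not part of the article’s workflow:

```python
from io import BytesIO
from PIL import Image

def squeeze_to(img, target_kb):
    """Binary-search JPEG quality for the largest file under target_kb."""
    lo, hi, best = 5, 95, None
    while lo <= hi:
        q = (lo + hi) // 2
        buf = BytesIO()
        img.save(buf, format="JPEG", quality=q)
        if buf.tell() <= target_kb * 1024:
            best, lo = buf.getvalue(), q + 1  # fits -- try higher quality
        else:
            hi = q - 1                        # too big -- lower quality
    return best

# Demo on a synthetic image; with a real photo you'd Image.open() the cloaked file.
demo = Image.new("RGB", (512, 512), color=(180, 120, 60))
jpeg_bytes = squeeze_to(demo, target_kb=32)
print("squeezed size:", len(jpeg_bytes), "bytes")
```

Either way, the point of the experiment is the same: recompress the cloaked image down to the original’s size and see whether the cloak survives.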


squoooooosh

And then I popped the squooshed image into Nic or Not and it evaded detection like a champ! Wow! Nicely done Fawkes!

According to the published paper on the research behind Fawkes, the algorithm protects against state-of-the-art facial recognition 95%+ of the time, and even if a company trains a more advanced model on your cloaked images, Fawkes still protects you against that AI 80%+ of the time! What an awesome breakthrough!

Thanks, Open Source! Thanks, University of Chicago! And thanks, Fawkes, for protecting our privacy!


Photo by Ahmed Zayan on Unsplash