Generate Faces that Trick Facial Recognition with GANs in PyTorch

Mike Chaykowsky
7 min read · Apr 14, 2020

My goal here is to create an easily adaptable framework to generate faces that look realistic but also trick a facial recognition classifier. The example we will work through is the task of generating realistic faces that always classify as your face, despite not being your face (or anyone's face, for that matter).

This is actually a tricky task because it involves updating the generator in two ways.

  1. Update the generator to make realistic images
  2. Update the generator adversarially to classify as your face
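The two updates above amount to a weighted sum of two objectives: a realism loss against the GAN discriminator and an identity loss against the facial recognition classifier. A minimal sketch of how they might be combined (the function name `generator_loss` and the weight `lam` are hypothetical, not from the article):

```python
import torch
import torch.nn as nn

bce = nn.BCELoss()

def generator_loss(d_out, clf_out, lam=1.0):
    """Combine the two generator objectives.

    d_out:   discriminator's probability that the generated image is real
    clf_out: classifier's probability that the image is the target face
    lam:     hypothetical weight trading realism off against identity
    """
    # Realism: push the discriminator toward labeling the fake as real (1.0)
    realism = bce(d_out, torch.ones_like(d_out))
    # Identity: push the classifier toward labeling the fake as "you" (1.0)
    identity = bce(clf_out, torch.ones_like(clf_out))
    return realism + lam * identity
```

Tuning `lam` is exactly the push-and-pull discussed below: too small and the images stop fooling the classifier, too large and they stop looking like faces.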

As you might expect, this will require two loss functions updated simultaneously. And if you thought updating a GAN was already a delicate procedure, you have not seen anything yet.

You can imagine each update as a kind of push and pull between the two loss functions. In one update the generator adjusts its weights to create more realistic faces, but if it moves too far in this direction it starts making faces that no longer trick the facial recognition classifier. The opposite is also true: if the generator starts making images purely to trick the facial recognition classifier, it may find some garbled mess of pixels that makes the classifier think it's you but no longer looks like a face.

The trick comes with keeping track of the gradient steps at each update. And for this we will need to make use of hooks in PyTorch.
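PyTorch's `Tensor.register_hook` is the mechanism for inspecting (or modifying) a tensor's gradient as it flows through `backward()`. A minimal illustration of the idea, separate from the article's full pipeline (the `grads` dict and `save_grad` helper are just for demonstration):

```python
import torch

grads = {}

def save_grad(name):
    # Returns a hook that stores a copy of the gradient under `name`
    def hook(grad):
        grads[name] = grad.detach().clone()
    return hook

x = torch.randn(3, requires_grad=True)
y = (x * 2).sum()

handle = x.register_hook(save_grad("x"))
y.backward()
handle.remove()  # always remove hooks you no longer need

# grads["x"] now holds dy/dx, a tensor of 2s
```

During the double update, hooks like this let you track how each loss contributes to the generator's gradient at every step.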

In a previous article we developed a facial recognition classifier that recognizes your face; this model is called model_ft. We will use an out-of-the-box DCGAN from the PyTorch tutorial to make our generator.
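Since model_ft is a fixed target rather than something we train here, a standard first step is to freeze its weights and put it in eval mode so that only the generator receives updates. A sketch, using a hypothetical stand-in network in place of the real classifier from the previous article:

```python
import torch
import torch.nn as nn

# Stand-in for the pretrained facial recognition classifier; in the
# real pipeline this would be the `model_ft` loaded from the previous article.
model_ft = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, 2))

# Freeze the classifier: eval mode (fixes batch-norm/dropout behavior)
# and no gradient updates to its parameters.
model_ft.eval()
for p in model_ft.parameters():
    p.requires_grad = False
```

Gradients still flow *through* the frozen classifier back into the generator; freezing only prevents the classifier itself from changing.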

Imports and data construction are the same as in the tutorial. I will walk through any changes in the code here, but the goal is to discuss the gradient updates rather than GANs in general.

class Discriminator(nn.Module):
    def __init__(self, ngpu):
        super(Discriminator, self).__init__()
        self.ngpu = ngpu
        self.main = nn.Sequential(
            # input is (nc) x 64 x 64
            nn.Conv2d(nc, ndf, 4, 2, 1, bias=False),
            nn.LeakyReLU(0.2, inplace=True),
            # state size. (ndf) x 32 x 32
            nn.Conv2d(ndf, ndf * 2, 4, 2, 1, bias=False),
            nn.BatchNorm2d(ndf * 2),
            nn.LeakyReLU(0.2, inplace=True),
            # state size. (ndf*2) x 16 x 16
            nn.Conv2d(ndf * 2, ndf * 4, 4, 2, 1, bias=False),
            nn.BatchNorm2d(ndf * 4),
            nn.LeakyReLU(0.2, inplace=True),
            # state size. (ndf*4) x 8 x 8
            nn.Conv2d(ndf * 4, ndf * 8, 4, 2, 1, bias=False),
            nn.BatchNorm2d(ndf * 8),
            nn.LeakyReLU(0.2, inplace=True),
            # state size. (ndf*8) x 4 x 4
            nn.Conv2d(ndf * 8, 1, 4, 1, 0, bias=False),
            nn.Sigmoid()
        )

    def forward(self, input):
        return self.main(input)

Data Scientist at Rivian. x- RAND Researcher. Based out of Los Angeles. @ChaykowskyMike