[Also on Medium]
[ Update 3 Feb 2018: added two new creations at the bottom of this post. Last one turned out really well ]
I first heard of deepfakes a good week ago. Thanks, Twitter. Thanks, Tim Soret.
There’s something called Deepfakes on internet, and it’s the most cyberpunk shit you can imagine. Machine learning used to swap porn actresses faces with hollywood stars. Obviously NFSW.
— timsoret 👁 (@timsoret) January 23, 2018
Yes, it’s pretty damn cyberpunk. But from a superficial point of view, /r/deepfakes (extremely NSFW! You have been warned) consists of people using an app created by user “deepfakes” to create fake celebrity porn.
This has caused a shitstorm on the Internet: media discussing the legality of it all, websites taking down the deepfake creations, and people panicking as they realise AI is going to screw us all over (newsflash: it’s already been happening in much less obvious ways). And meanwhile, Nicolas Cage is taking over Hollywood.
While everyone’s debating whether this is good or bad, I just had to find out more. The first thing that came to mind? How can I apply this to everyone I know (in a non-porn way, in case you wondered).
How does it work?
The deepfakes app is a deep learning algorithm that learns how to reconstruct faces. Give it a bunch of pictures, let it run for a few hours, and it spits out fuzzy copies of those images. Do note, it doesn’t create copies. It learns what a face looks like, in different expressions, and is able to output that face based solely on what it has learned. There’s a detailed explanation on Reddit, but let me try and dumb it down.
Think of it like this: imagine you could stare at someone for 12 hours straight, observing all their expressions and absorbing them into your brain. Then that person asks you to sketch their face on paper: smiling, crying, any expression you’ve observed. What do you do? You immediately generate a photographic-quality sketch on paper, from memory! (using a pencil)
While that’s pretty cool, it only gets better. See that “encoder” part? The FakeApp uses one encoder for all faces. It’s the decoder that’s kept face-specific. And here comes the really cool part: let it learn two faces, and things become more interesting.
Right, now see how this works. The encoder takes an image of a face, lets it run through its “brain”, and the decoder spits it out again. In the example above, it can do so with faces of Anne Hathaway and Elke, my wife. OK, so far so good. But now let’s feed it a picture of Anne, but use the decoder that generates Elke’s face!
You just created a new photo of Elke. A photo that never existed, at the same angle and with the same expression as Anne’s! Amazing!
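The decoder-swap trick can be sketched in a few lines of Python. To be clear, this is a toy illustration, not the FakeApp code: the real networks are deep convolutional autoencoders trained for hours, whereas the “encoder” and “decoders” below are just random linear maps, purely to show the wiring — one shared encoder feeding two person-specific decoders, with the swap happening at decode time.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions: flattened 8x8 "face" images, 16-dim latent code.
img_dim, latent_dim = 64, 16

# One shared encoder for all faces; one decoder per person.
# (Random matrices here stand in for trained networks.)
encoder = rng.normal(size=(latent_dim, img_dim))
decoder_a = rng.normal(size=(img_dim, latent_dim))  # would learn face A
decoder_b = rng.normal(size=(img_dim, latent_dim))  # would learn face B

def encode(image):
    # Image -> shared latent code (pose, expression, lighting).
    return encoder @ image

def decode(code, decoder):
    # Latent code -> that specific person's face.
    return decoder @ code

face_a = rng.normal(size=img_dim)  # stand-in for a photo of person A

# Normal round trip: A's photo in, A's face out.
recon_a = decode(encode(face_a), decoder_a)

# The deepfake trick: A's photo in, but B's decoder out.
# Same pose and expression (the shared latent code), rendered as B's face.
fake_b = decode(encode(face_a), decoder_b)

print(recon_a.shape, fake_b.shape)  # both (64,)
```

Because both decoders read from the same latent space, the code produced from Anne’s photo is meaningful to Elke’s decoder too — that shared representation is what makes the swap work.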
Sure, putting celebrities’ faces on your favorite porn stars is an interesting use case. But we can leverage these celebrities for other things, such as inserting your friends and family into blockbuster movies and shows!
For the best result, you must first find an actor or actress with a similar head shape to the person you wish to insert. In the case of Elke (my wife), I happened to notice, while watching The Dark Knight Rises, that Anne Hathaway might be a good fit. I guess you know Anne, so here’s Elke:
All I needed was about 850 photos of Elke, a few thousand of Anne, a lot of computing time, et voilà: Elke’s on The Tonight Show Starring Jimmy Fallon.
Bonus effect: now we know what Elke looks like with short hair :D
Here’s a little comparison gif:
I personally think it’s fun, can be innocent, and even makes for a nice surprise/gift. Remember, any tool can be used for evil. And as long as we’re not banning guns, this should not be a high priority, amirite?
There’s so much you can do with this technology. You know those dumb emails people send around where they’ve replaced dancing elves’ heads with their own, or even worse, yours? Well, now you can put your best friend into their favourite movie: have her dance with Patrick Swayze and have the time of her life, or have an alien burst out of his stomach. It’s all within your reach now!
Beyond just pure fun, I can only imagine how people will start turning this tech into business ideas. Fashion will be huge (what would I look like with this kind of hair, this kind of dress…), fitness could be interesting (do I look good with muscles, will I really look better skinny), travel too (“this is you standing on a beach” is going to be quite convincing). It’ll bring advertising to a whole new level. No need to imagine “what if” — they’ll show you what your “better” life will look like! And it’ll be hard to get that picture out of your head…
Update: in the meantime, I’ve created two more. Elke’s a huge fan of Steve Carell, and I suddenly realized Anne Hathaway co-stars with him in Get Smart. The first attempt was okay:
Then I wanted to try this one (original video):
And I think it turned out great:
35 thoughts on “Family fun with deepfakes. Or how I got my wife onto the Tonight Show”
Thank you for focusing on the family side of this. The world seems to be in an uproar about the use of this stuff for porn, but it can be used for so much more interesting and powerful stuff. Great article, too. I’m a programmer with experience in machine learning, and I had never found a good, comprehensive description of how deepfakes worked (never really dug too deep, but anyway). I appreciate the easy breakdown of how it maps one face onto another.
By the way, your wife looks good with short hair!
Thanks, glad you liked it! It’s amazing stuff, but the media just enjoys focusing on the negative side of things; uproar results in more clicks, more reactions, etc., right? There are so many positive and interesting ways of using this. I think we have exciting times ahead of us ;)
And I’ll tell her, thanks :D
Yes! Thank you for showing a wholesome application of deepfaking. I fully intend to focus on positive applications on my site https://www.deepfakes.club… will try to send some traffic your way!
If you want to focus your site on positive applications of this tech, the worst way to do it would be to take the moniker of the person who used it for porn.
Because in the very near future, that person’s name will become the byword for “fake porn generated by artificial intelligence” – indeed, that is likely the case currently (definitely among porn consumers, at a minimum).
While utilizing the moniker can help you “ride the wave” of the term’s popularity, no one searching for more practical or positive uses of the tech is likely to click through to your website, because they will likely assume it contains more faked porn.
Lastly – from a legal standpoint the person behind the deepfakes moniker could sue you for copyright infringement. It isn’t likely – and depending on where you are located and they are located, the chances could go down greatly from there (to the point of it being likely not to happen at all) – but it is something to think about from a branding perspective.
So you have two things working against you – both related to the name “deepfakes”:
1. It brings to mind for consumers an instant connection to porn
2. To the extent that it matters, your usage of the moniker “deepfakes”, without having a connection to the original owner of that moniker, marks your brand in a negative light
The only case where #2 would be invalid would be if you are the actual “deepfakes” – in which case you are then doubly fighting for #1.
The best solution would be to come up with a different brand name, of course…
Cool! We need that!
Seems I need to fix threaded comments on this blog :D
Incredible work. Fun use for something that has gotten a bit of a bad reputation
Hi Sven. I’m also trying to work with this to see how it works (in a family-fun/friendly way). I just had a couple of questions:
1. What were your hyperparameters (batch size, layers, nodes)?
2. I noticed you mentioned about 850 photos of your wife, and then thousands of Anne… how many thousands? And is it usually the case that the face you’re swapping out needs significantly more training images than the face you’re swapping in?
3. How long did you have to train? I’ve been reading to wait until the loss is at least under 0.2, but I’m still noticing that the images are fuzzy. So I’m just curious how long it takes to get higher-quality output.
Your output on the Jimmy Fallon is really good. So I’m trying to get that same quality.
Hey, sorry for the late reply. I used the basic settings, except I doubled the nodes. I think for Anne I used about 2–3k photos, mostly extracted from videos and interviews where the lighting is alike. I spent a few days training it, adding and removing pictures. The loss went towards 1.2 for Anne, 1.8 for my wife. I think it’s important that it learns both faces well, but the target a bit better. Make sure you get pictures with similar lighting in both sets, as that helps a lot (dark images were useless).
I love your website blog. Great post and guideline for those who are new to this topic. Thanks for sharing an interesting post with us.
creepy as hell