It is always rather awe-inspiring when researchers demonstrate fake, i.e., synthesized, video of celebrities and political figures that is nearly impossible to identify as not genuine. See below an example deepfake video of the former president of the United States, Barack Obama.
The dangers of fake videos, created using recent advances in deep learning applied to image and video synthesis, are now being felt not only by public figures but also by the rest of us.
Until recently, we didn’t much care when a fake video of a politician or a fake porn video featuring a celebrity was published online; we thought about it for a few seconds and moved on. Now an article in the Washington Post makes it painfully clear that malicious individuals can use the video and image data the average person publishes on social media, e.g., Facebook, Twitter, Instagram, to cause a great deal of distress for many of us.
We used to trust video because it was far too difficult to fake convincingly, unlike single images, which were easy to manipulate with professional editing software such as Photoshop. Recent advances in deep learning and generative adversarial networks, combined with large amounts of digital image and video data, have made it possible to create fake videos that are hard to identify as such. And if such fakes can still be spotted today, give it another couple of years and that may no longer be the case.
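To give a feel for the adversarial training idea behind these systems, here is a deliberately tiny sketch of the standard GAN loss functions. Everything below is illustrative and simplified: real deepfake pipelines use deep convolutional networks, whereas this toy uses a one-parameter logistic "discriminator" on scalar data, and all names (`discriminator`, `gan_losses`, the sample distributions) are my own invention for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def discriminator(x, w):
    # Toy logistic "discriminator": probability that sample x is real.
    return 1.0 / (1.0 + np.exp(-w * x))

def gan_losses(real, fake, w):
    # Standard GAN objectives, written as losses to minimize:
    # the discriminator wants log D(real) + log(1 - D(fake)) to be high,
    # the generator wants log D(fake) to be high (fakes classified as real).
    d_loss = -np.mean(np.log(discriminator(real, w)) +
                      np.log(1.0 - discriminator(fake, w)))
    g_loss = -np.mean(np.log(discriminator(fake, w)))
    return d_loss, g_loss

real = rng.normal(loc=2.0, scale=0.5, size=1000)  # "real" data samples
fake = rng.normal(loc=0.0, scale=0.5, size=1000)  # crude generator output
d_loss, g_loss = gan_losses(real, fake, w=1.0)
```

The key dynamic: as the generator's output distribution approaches the real data, the discriminator's loss rises (it can no longer tell the two apart) while the generator's loss falls, which is exactly the arms race that makes convincing synthetic video possible.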
Many people will be put in very difficult positions if this technology gets out of control. Deepfakes will probably become illegal eventually, and their creators will be prosecuted accordingly, but that doesn’t mean they will go away. Your best line of defense is to lock down your social media profiles and grant access only to people you truly trust. Better yet, stop constantly posting pictures of yourself online and simply enjoy the company of friends and family!