What Is a Deepfake?

Deepfake technology has evolved from a hobbyist experiment into a powerful and potentially dangerous tool since it first appeared in 2018. Here is what it is, and how it is being used. Don't believe every video you see!

How Do Deepfakes Work?

Deepfake systems work in different ways. Many transfer an actor's facial movements to a target video, such as the one we saw at the beginning of this post, or this deepfaked Obama created by comedian Jordan Peele to warn of the threat of fake news. Others map a target person's face onto other videos: for example, this video of Nicolas Cage's face mapped onto characters in various films.

Deepfakes, like most modern AI-based applications, rely on deep neural networks, a class of AI algorithms that is especially good at finding patterns and correlations in large data sets. Neural networks have proven particularly effective in computer vision, the branch of computer science and AI that deals with visual data.

Deepfakes use a special neural-network structure called an "autoencoder." Autoencoders consist of two parts: an encoder, which compresses an image into a small amount of data, and a decoder, which decompresses that data back into the original image. The mechanism is similar to that of image and video codecs such as JPEG and MPEG.
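
To make the idea concrete, here is a minimal autoencoder sketch in PyTorch. It illustrates only the general compress-then-decompress mechanism; it is not code from any actual deepfake tool, and the names (Encoder, Decoder, latent_dim) and sizes are hypothetical.

```python
# A minimal, illustrative convolutional autoencoder in PyTorch.
# Names (Encoder, Decoder, latent_dim) are hypothetical, not taken from any deepfake tool.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self, latent_dim=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1),   # 64x64 face crop -> 32x32
            nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1),  # 32x32 -> 16x16
            nn.ReLU(),
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, latent_dim),        # compress to a small latent vector
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    def __init__(self, latent_dim=256):
        super().__init__()
        self.fc = nn.Linear(latent_dim, 64 * 16 * 16)   # expand the latent vector back out
        self.net = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1),  # 16x16 -> 32x32
            nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1),   # 32x32 -> 64x64 image
            nn.Sigmoid(),
        )

    def forward(self, z):
        x = self.fc(z).view(-1, 64, 16, 16)
        return self.net(x)

# Compress a batch of 64x64 face crops and decompress them again.
encoder, decoder = Encoder(), Decoder()
faces = torch.rand(8, 3, 64, 64)          # stand-ins for real face images
reconstruction = decoder(encoder(faces))  # trained by minimizing a reconstruction loss (e.g. L1/MSE)
```

Training such a network to reproduce its input forces the small latent vector to capture only the most salient features of the image, which is what the next section builds on.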

But unlike conventional encoder/decoder software, which works on raw pixels, the autoencoder works on image features such as shapes, objects, and textures. A well-trained autoencoder can go beyond compression and decompression and carry out other tasks, say, generating new images or removing noise from grainy ones. Trained on facial images, an autoencoder learns facial features: eyes, nose, mouth, eyebrows, and so on. Deepfake applications use two autoencoders: one trained on the actor's face, and the other trained on the target's face.
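
As a rough sketch of how that setup can be wired together, here is a simplified training step in PyTorch, assuming the widely used variant in which the two autoencoders share a single encoder while each person gets their own decoder. All names and sizes below are hypothetical, and real tools are considerably more involved.

```python
# Illustrative face-swap training step: one shared encoder, two decoders.
# All names and sizes are hypothetical; real deepfake tools are far more elaborate.
import torch
import torch.nn as nn

latent_dim = 256

def make_encoder():
    # Shared encoder: learns generic facial features from both people.
    return nn.Sequential(
        nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.ReLU(),  # 64x64 -> 32x32
        nn.Flatten(),
        nn.Linear(64 * 32 * 32, latent_dim),
    )

def make_decoder():
    # Per-person decoder: learns to render one specific face.
    return nn.Sequential(
        nn.Linear(latent_dim, 64 * 32 * 32), nn.ReLU(),
        nn.Unflatten(1, (64, 32, 32)),
        nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1), nn.Sigmoid(),  # 32x32 -> 64x64
    )

encoder = make_encoder()
decoder_actor = make_decoder()    # trained on the actor's face
decoder_target = make_decoder()   # trained on the target's face

params = (list(encoder.parameters())
          + list(decoder_actor.parameters())
          + list(decoder_target.parameters()))
optimizer = torch.optim.Adam(params, lr=1e-4)
loss_fn = nn.L1Loss()

# One training step: each decoder learns to reconstruct its own person's faces.
actor_faces = torch.rand(8, 3, 64, 64)   # stand-ins for aligned face crops of the actor
target_faces = torch.rand(8, 3, 64, 64)  # stand-ins for aligned face crops of the target
loss = (loss_fn(decoder_actor(encoder(actor_faces)), actor_faces)
        + loss_fn(decoder_target(encoder(target_faces)), target_faces))
optimizer.zero_grad()
loss.backward()
optimizer.step()

# The swap: encode the actor's face, then decode it with the target's decoder,
# producing the target's face with the actor's expression and pose.
with torch.no_grad():
    swapped = decoder_target(encoder(actor_faces))
```

Because the encoder is shared, both decoders read from the same latent "face space," which is what makes the final swap possible.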

What Makes Deepfakes Special?
Deepfake technology is not the only way to swap faces in video. The VFX (visual effects) industry has been doing it for decades. Until deepfakes, however, the technology was limited to deep-pocketed film studios with access to ample technical resources.

Deepfakes have democratized the power to alter video. The technology is now open to anyone with a machine that has a decent processor and a powerful graphics card (such as the Nvidia GeForce GTX 1080), or who can spend a few hundred dollars renting cloud computing and GPU services.
That said, building deepfakes is neither trivial nor completely automated.

The Dangers of Deepfakes
Making entertaining instructional videos and custom casts for your favorite movies are not the only uses of deepfakes. AI-doctored videos have a darker side, one far more common than these innocuous uses.

Shortly after the release of the first deepfake software, Reddit was flooded with fake pornographic videos featuring celebrities and politicians. The advancement of other AI-powered technologies, used in conjunction with deepfakes, has made it possible to fake not only a person's face but also virtually anyone's voice.

With reports of how social media algorithms facilitate the dissemination of false information, the threat of a fake-news crisis driven by deepfake technology has become a serious concern, especially as the US prepares for the presidential elections in 2020. US senators have highlighted deepfakes as a threat to national security and held numerous hearings on the potential abuse of the technology to influence public opinion through disinformation campaigns. We have also seen a raft of proposed legislation to ban deepfakes and hold accountable the individuals who produce and spread them.

The Fight against Deepfakes
Scientists have been creating new strategies for detecting deepfakes, only to see them become ineffective as the technology continues to improve and deliver more natural-looking results. As the 2020 presidential elections draw closer, big tech firms and government agencies have stepped up their work to combat the spread of deepfakes.

In September, Facebook, Microsoft, and several universities launched a competition to build tools that can detect deepfakes and other AI-doctored videos. DARPA, the Defense Department's research arm, has also initiated an effort to counter the spread of deepfakes and other digital disinformation attacks. In addition to detecting doctored videos and images, DARPA is looking for ways to enable the attribution and identification of the parties involved in producing fake media.
