
What is Deepfake Technology? What are the Dangers of Deepfakes?

Editor, TRANSFIN.
Nov 2, 2020 5:44 AM 5 min read
Editorial

In 2018, a video emerged of Barack Obama delivering a profanity-laced public service announcement that took aim at politicians Ben Carson and Donald Trump. The former US President also appeared to argue that the Black Panther villain Killmonger was “right” about his plan for world domination.


Of course, Obama never said any of those words. It was actually a fake video produced by comedian Jordan Peele to educate the public about the rising dangers of deepfake technology, something one American senator called the modern equivalent of nuclear weapons.

 

What are Deepfakes?

The word is a portmanteau of “deep learning” and “fake”. Deepfakes are manipulated media in which a video, image or audio clip is edited so that the resulting product looks or sounds authentic but is not.

Consider the GIF below. It's a scene from the movie Man of Steel in which actress Amy Adams in the original (left) is modified to have the face of actor Nicolas Cage (right). Viewed on its own, the latter looks eerily realistic, as if Cage were dressed as a woman and wearing a wig.

[GIF: deepfake example - the Man of Steel scene with Amy Adams face-swapped with Nicolas Cage]

This is a relatively harmless example of deepfake tech. Now imagine the same technique applied to politicians calling for violence, a public official warning of a military attack that never happened, or a fake sex video being “leaked” to destroy someone’s marriage.

With technology such as this, the possibilities are endless - and dangerous.

BTW: The word “deepfake” was coined by a Reddit user in 2017 who posted doctored pornographic videos of female celebrities on a subreddit of the same name (it was banned a year later). These videos superimposed the faces of celebrities like Gal Gadot, Taylor Swift and Scarlett Johansson onto the bodies of porn performers.

 

What’s the Technology Behind Deepfakes?

To produce a face-swap video, thousands of face shots of the two people are fed to an AI algorithm called an “encoder”, which learns the features the two faces share and stores each face in a compressed form. A second algorithm called a “decoder” then reconstructs a face from that compressed representation. Two decoders are trained, each one learning to reconstruct only one of the two faces.

So to swap the faces, all you have to do is feed a compressed image of person A’s face into the decoder trained on person B. The decoder then reconstructs person B’s face with person A’s expressions and pose. Repeat this frame by frame and you get a convincing swapped video.
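
For the technically curious, below is a minimal sketch of that shared-encoder / two-decoder idea in PyTorch. The layer sizes, the 64x64 face crops and the dummy training data are illustrative assumptions, not a real deepfake pipeline.

```python
import torch
import torch.nn as nn

LATENT = 256        # size of the compressed face representation (assumed)
IMG = 64 * 64 * 3   # flattened 64x64 RGB face crops (assumed input size)

# One encoder shared by both identities: it learns features common to faces.
encoder = nn.Sequential(nn.Linear(IMG, 1024), nn.ReLU(), nn.Linear(1024, LATENT))

# One decoder per identity: each learns to rebuild only that person's face.
decoder_a = nn.Sequential(nn.Linear(LATENT, 1024), nn.ReLU(), nn.Linear(1024, IMG), nn.Sigmoid())
decoder_b = nn.Sequential(nn.Linear(LATENT, 1024), nn.ReLU(), nn.Linear(1024, IMG), nn.Sigmoid())

params = list(encoder.parameters()) + list(decoder_a.parameters()) + list(decoder_b.parameters())
opt = torch.optim.Adam(params, lr=1e-4)
loss_fn = nn.MSELoss()

def train_step(faces_a, faces_b):
    """Each person is reconstructed through the shared encoder and their own decoder."""
    loss = loss_fn(decoder_a(encoder(faces_a)), faces_a) + loss_fn(decoder_b(encoder(faces_b)), faces_b)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

def swap_face(frame_a):
    """The swap: encode person A's frame, decode with B's decoder ->
    person B's face wearing person A's expression and pose."""
    with torch.no_grad():
        return decoder_b(encoder(frame_a))

# Random tensors standing in for aligned face crops (real face-swaps use thousands of frames).
faces_a, faces_b = torch.rand(8, IMG), torch.rand(8, IMG)
train_step(faces_a, faces_b)
swapped = swap_face(faces_a)  # run this frame by frame over a video for the full effect
```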

Another way of generating deepfakes is through Generative Adversarial Networks (GANs), which pit two machine learning models against each other. One model (the generator) trains on a data set and creates forgeries, while the other (the discriminator) tries to detect them. The process continues until the discriminator can no longer reliably tell the forgeries from the real thing.
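
Here is an equally stripped-down sketch of that adversarial loop, again in PyTorch with illustrative sizes; a real video-forging GAN would use convolutional networks and far more data.

```python
import torch
import torch.nn as nn

NOISE, IMG = 100, 64 * 64 * 3  # assumed noise-vector and flattened-image sizes

# The "forger" maps random noise to a fake image; the "detective" scores images as real (1) or fake (0).
generator = nn.Sequential(nn.Linear(NOISE, 1024), nn.ReLU(), nn.Linear(1024, IMG), nn.Tanh())
discriminator = nn.Sequential(nn.Linear(IMG, 1024), nn.LeakyReLU(0.2), nn.Linear(1024, 1), nn.Sigmoid())

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
bce = nn.BCELoss()

def gan_step(real_images):
    batch = real_images.size(0)
    real_labels, fake_labels = torch.ones(batch, 1), torch.zeros(batch, 1)

    # 1) Train the discriminator to separate real frames from forgeries.
    fakes = generator(torch.randn(batch, NOISE))
    d_loss = bce(discriminator(real_images), real_labels) + bce(discriminator(fakes.detach()), fake_labels)
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # 2) Train the generator to fool the discriminator (its forgeries should score as "real").
    fakes = generator(torch.randn(batch, NOISE))
    g_loss = bce(discriminator(fakes), real_labels)
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()

# Random tensors standing in for genuine video frames (Tanh output range is [-1, 1]).
gan_step(torch.rand(8, IMG) * 2 - 1)
```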

In both these methods, the “quality” of the end product depends on how much data is fed in - the proverbial “data is oil” applies here too. This is why the first generation of deepfakes chiefly involved politicians and celebrities - people of whom there was already a lot of video footage online.

 

Who Do Deepfakes Target?

Overwhelmingly, the victims of deepfake technology are women.

AI firm Deeptrace found roughly 15,000 deepfake videos online in September 2019 (FYI: nearly double the number from nine months earlier). A startling 96% of this flood of fake content was pornographic, and 99% of those videos mapped the faces of female celebrities onto porn performers.

There is also the damaging trend of revenge porn, where spiteful exes use deepfake applications to graft women’s faces onto porn videos and upload them online. In June 2019, a downloadable Windows and Linux application called “DeepNude” was released. It used GANs to remove clothing from images of women. The app was taken down the same month after a flurry of online protest.

 

What’s Been Done by Governments to Control Deepfakes?

India doesn’t have an explicit law banning deepfakes. But the Right to Privacy is a constitutionally guaranteed Fundamental Right, and the Personal Data Protection Bill 2019 - if passed - could make the usage and circulation of non-consensual deepfakes a punishable offence.

Presently, Sections 67 and 67A of the IT Act provide punishment for publishing sexually explicit material in electronic form. There’s also Section 500 of the Indian Penal Code with regard to defamation. But concrete legislation explicitly dealing with deepfakes would undoubtedly be far more effective in curbing the menace.

In the US, the DEEPFAKES Accountability Act, introduced in Congress in 2019, would require deepfakes to be watermarked for the purpose of identification. Several US states have their own laws too. Virginia has brought deepfakes under its revenge porn law, California prohibits the creation and distribution of deepfakes of political candidates within 60 days of an election, and Texas makes it an offence to fabricate deceptive videos with the intent to influence the outcome of an election.

Some other countries, like Canada and the UK, have also been moving to put safeguards of their own in place against deepfakes.

 

What’s Been Done by Tech Companies to Control Deepfakes?

YouTube, TikTok and Facebook have each pledged to remove manipulated videos that may pose a serious risk of egregious harm or are misleading. Pornhub said in 2018 that it would strive to delete all deepfake videos on its platform. Twitter says it labels as “false” any photos or videos that have been significantly and deceptively altered or fabricated, and outright removes them if they threaten to cause immediate harm.

 

The Fakery Century

No matter how strong the technology or how concrete the law, deepfakes aren’t going anywhere anytime soon. The tech is already out there, available for anyone to use and deploy.

What’s more, the technology is improving, becoming more sophisticated and harder to distinguish from the real thing. The timeless adage “seeing is believing” no longer holds universal relevance.

And the problem is escalating. As of June 2020, the number of deepfake videos identified online had doubled in just six months, to 49,081.

Will the rise and proliferation of deepfakes create a dystopian future where we won’t be able to tell fact from fiction? Time will tell. But as technology advances, we will surely find it increasingly challenging to ascertain the validity of a video or image online.

Invariably, we will have to take everything we come across with more than a pinch of salt. As a rule of thumb, follow the Buddha’s timeless advice:

Never believe everything you see on the internet!

FIN.
